Test Report: KVM_Linux_crio 19985

                    
22dd179cd6f75db6f60fbf5ee015cd1b680b4179:2024-12-04:37341

Failed tests (32/314)

Order  Failed test  Duration (s)
36 TestAddons/parallel/Ingress 153.87
38 TestAddons/parallel/MetricsServer 332.43
47 TestAddons/StoppedEnableDisable 154.44
166 TestMultiControlPlane/serial/StopSecondaryNode 141.33
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 5.62
168 TestMultiControlPlane/serial/RestartSecondaryNode 6.24
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 6.32
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 414.26
173 TestMultiControlPlane/serial/StopCluster 141.84
233 TestMultiNode/serial/RestartKeepsNodes 332.11
235 TestMultiNode/serial/StopMultiNode 144.99
242 TestPreload 168.51
250 TestKubernetesUpgrade 388.16
286 TestPause/serial/SecondStartNoReconfiguration 421.68
321 TestStartStop/group/old-k8s-version/serial/FirstStart 273.9
342 TestStartStop/group/no-preload/serial/Stop 139.04
344 TestStartStop/group/embed-certs/serial/Stop 139.01
347 TestStartStop/group/default-k8s-diff-port/serial/Stop 138.94
348 TestStartStop/group/old-k8s-version/serial/DeployApp 0.47
349 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 86.14
350 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
351 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
356 TestStartStop/group/old-k8s-version/serial/SecondStart 764.25
357 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
359 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.12
360 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.26
361 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 543.93
362 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.27
363 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 439.17
364 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 443.11
365 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 305.21
366 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 129.7
TestAddons/parallel/Ingress (153.87s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-153447 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-153447 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-153447 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [13f1323b-f52e-49ea-b039-e6312cb1e3a8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [13f1323b-f52e-49ea-b039-e6312cb1e3a8] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004064658s
I1204 19:56:37.237321   17743 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-153447 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-153447 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.160753293s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-153447 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-153447 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.11
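The failing assertion above is the curl run inside the minikube VM; exit status 28 is the remote curl's exit code, which for curl means the request timed out rather than being refused. A minimal sketch for re-running the same check by hand, assuming the addons-153447 profile from this run is still up (the -v and --max-time flags are added here for diagnosis and are not part of the test):

# is the ingress-nginx controller Ready?
kubectl --context addons-153447 -n ingress-nginx get pods -l app.kubernetes.io/component=controller

# does the test Ingress exist and has it been assigned an address?
kubectl --context addons-153447 get ingress

# repeat the in-VM request with a bounded timeout and verbose output
out/minikube-linux-amd64 -p addons-153447 ssh "curl -v --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"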
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-153447 -n addons-153447
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-153447 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-153447 logs -n 25: (1.192965899s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 04 Dec 24 19:52 UTC | 04 Dec 24 19:52 UTC |
	| delete  | -p download-only-079944                                                                     | download-only-079944 | jenkins | v1.34.0 | 04 Dec 24 19:52 UTC | 04 Dec 24 19:52 UTC |
	| delete  | -p download-only-833018                                                                     | download-only-833018 | jenkins | v1.34.0 | 04 Dec 24 19:52 UTC | 04 Dec 24 19:52 UTC |
	| delete  | -p download-only-079944                                                                     | download-only-079944 | jenkins | v1.34.0 | 04 Dec 24 19:52 UTC | 04 Dec 24 19:52 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-214166 | jenkins | v1.34.0 | 04 Dec 24 19:52 UTC |                     |
	|         | binary-mirror-214166                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:43213                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-214166                                                                     | binary-mirror-214166 | jenkins | v1.34.0 | 04 Dec 24 19:52 UTC | 04 Dec 24 19:52 UTC |
	| addons  | enable dashboard -p                                                                         | addons-153447        | jenkins | v1.34.0 | 04 Dec 24 19:52 UTC |                     |
	|         | addons-153447                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-153447        | jenkins | v1.34.0 | 04 Dec 24 19:52 UTC |                     |
	|         | addons-153447                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-153447 --wait=true                                                                | addons-153447        | jenkins | v1.34.0 | 04 Dec 24 19:52 UTC | 04 Dec 24 19:54 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-153447 addons disable                                                                | addons-153447        | jenkins | v1.34.0 | 04 Dec 24 19:54 UTC | 04 Dec 24 19:54 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-153447 addons disable                                                                | addons-153447        | jenkins | v1.34.0 | 04 Dec 24 19:55 UTC | 04 Dec 24 19:55 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-153447 addons disable                                                                | addons-153447        | jenkins | v1.34.0 | 04 Dec 24 19:55 UTC | 04 Dec 24 19:55 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-153447 ssh cat                                                                       | addons-153447        | jenkins | v1.34.0 | 04 Dec 24 19:55 UTC | 04 Dec 24 19:55 UTC |
	|         | /opt/local-path-provisioner/pvc-753cdf45-d6df-4271-9413-533dc1761312_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-153447 addons disable                                                                | addons-153447        | jenkins | v1.34.0 | 04 Dec 24 19:55 UTC | 04 Dec 24 19:56 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-153447 ip                                                                            | addons-153447        | jenkins | v1.34.0 | 04 Dec 24 19:55 UTC | 04 Dec 24 19:55 UTC |
	| addons  | addons-153447 addons disable                                                                | addons-153447        | jenkins | v1.34.0 | 04 Dec 24 19:55 UTC | 04 Dec 24 19:55 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-153447 addons                                                                        | addons-153447        | jenkins | v1.34.0 | 04 Dec 24 19:55 UTC | 04 Dec 24 19:55 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-153447 addons                                                                        | addons-153447        | jenkins | v1.34.0 | 04 Dec 24 19:55 UTC | 04 Dec 24 19:55 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-153447        | jenkins | v1.34.0 | 04 Dec 24 19:55 UTC | 04 Dec 24 19:55 UTC |
	|         | -p addons-153447                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-153447 addons disable                                                                | addons-153447        | jenkins | v1.34.0 | 04 Dec 24 19:56 UTC | 04 Dec 24 19:56 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-153447 addons                                                                        | addons-153447        | jenkins | v1.34.0 | 04 Dec 24 19:56 UTC | 04 Dec 24 19:56 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-153447 addons                                                                        | addons-153447        | jenkins | v1.34.0 | 04 Dec 24 19:56 UTC | 04 Dec 24 19:56 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-153447 addons                                                                        | addons-153447        | jenkins | v1.34.0 | 04 Dec 24 19:56 UTC | 04 Dec 24 19:56 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-153447 ssh curl -s                                                                   | addons-153447        | jenkins | v1.34.0 | 04 Dec 24 19:56 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-153447 ip                                                                            | addons-153447        | jenkins | v1.34.0 | 04 Dec 24 19:58 UTC | 04 Dec 24 19:58 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 19:52:46
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 19:52:46.271292   18382 out.go:345] Setting OutFile to fd 1 ...
	I1204 19:52:46.271438   18382 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 19:52:46.271448   18382 out.go:358] Setting ErrFile to fd 2...
	I1204 19:52:46.271453   18382 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 19:52:46.271635   18382 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19985-10581/.minikube/bin
	I1204 19:52:46.272228   18382 out.go:352] Setting JSON to false
	I1204 19:52:46.273037   18382 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2116,"bootTime":1733339850,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 19:52:46.273139   18382 start.go:139] virtualization: kvm guest
	I1204 19:52:46.275218   18382 out.go:177] * [addons-153447] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1204 19:52:46.276477   18382 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 19:52:46.276483   18382 notify.go:220] Checking for updates...
	I1204 19:52:46.277641   18382 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 19:52:46.278788   18382 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 19:52:46.279951   18382 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 19:52:46.281121   18382 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1204 19:52:46.282202   18382 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 19:52:46.283537   18382 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 19:52:46.316111   18382 out.go:177] * Using the kvm2 driver based on user configuration
	I1204 19:52:46.317187   18382 start.go:297] selected driver: kvm2
	I1204 19:52:46.317199   18382 start.go:901] validating driver "kvm2" against <nil>
	I1204 19:52:46.317209   18382 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 19:52:46.317876   18382 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 19:52:46.317947   18382 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19985-10581/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1204 19:52:46.332219   18382 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1204 19:52:46.332270   18382 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 19:52:46.332545   18382 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 19:52:46.332575   18382 cni.go:84] Creating CNI manager for ""
	I1204 19:52:46.332612   18382 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 19:52:46.332620   18382 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1204 19:52:46.332662   18382 start.go:340] cluster config:
	{Name:addons-153447 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-153447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 19:52:46.332753   18382 iso.go:125] acquiring lock: {Name:mk5fb0f3f6da76e6cd812291a551e1592ef2c232 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 19:52:46.334386   18382 out.go:177] * Starting "addons-153447" primary control-plane node in "addons-153447" cluster
	I1204 19:52:46.335735   18382 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 19:52:46.335771   18382 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1204 19:52:46.335780   18382 cache.go:56] Caching tarball of preloaded images
	I1204 19:52:46.335849   18382 preload.go:172] Found /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 19:52:46.335859   18382 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 19:52:46.336145   18382 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/config.json ...
	I1204 19:52:46.336164   18382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/config.json: {Name:mk74fe767c26e98e973ca64c19eab9a9a25d2dcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 19:52:46.336275   18382 start.go:360] acquireMachinesLock for addons-153447: {Name:mkf124e8b45170ae95981b24944344de6899c5b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 19:52:46.336317   18382 start.go:364] duration metric: took 30.06µs to acquireMachinesLock for "addons-153447"
	I1204 19:52:46.336334   18382 start.go:93] Provisioning new machine with config: &{Name:addons-153447 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:addons-153447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 19:52:46.336383   18382 start.go:125] createHost starting for "" (driver="kvm2")
	I1204 19:52:46.338364   18382 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1204 19:52:46.338505   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:52:46.338546   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:52:46.352238   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40127
	I1204 19:52:46.352736   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:52:46.353273   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:52:46.353294   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:52:46.353664   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:52:46.353860   18382 main.go:141] libmachine: (addons-153447) Calling .GetMachineName
	I1204 19:52:46.354017   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:52:46.354151   18382 start.go:159] libmachine.API.Create for "addons-153447" (driver="kvm2")
	I1204 19:52:46.354222   18382 client.go:168] LocalClient.Create starting
	I1204 19:52:46.354258   18382 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem
	I1204 19:52:46.466800   18382 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem
	I1204 19:52:46.734150   18382 main.go:141] libmachine: Running pre-create checks...
	I1204 19:52:46.734181   18382 main.go:141] libmachine: (addons-153447) Calling .PreCreateCheck
	I1204 19:52:46.734684   18382 main.go:141] libmachine: (addons-153447) Calling .GetConfigRaw
	I1204 19:52:46.735098   18382 main.go:141] libmachine: Creating machine...
	I1204 19:52:46.735113   18382 main.go:141] libmachine: (addons-153447) Calling .Create
	I1204 19:52:46.735310   18382 main.go:141] libmachine: (addons-153447) Creating KVM machine...
	I1204 19:52:46.736450   18382 main.go:141] libmachine: (addons-153447) DBG | found existing default KVM network
	I1204 19:52:46.737145   18382 main.go:141] libmachine: (addons-153447) DBG | I1204 19:52:46.737011   18404 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211f0}
	I1204 19:52:46.737200   18382 main.go:141] libmachine: (addons-153447) DBG | created network xml: 
	I1204 19:52:46.737219   18382 main.go:141] libmachine: (addons-153447) DBG | <network>
	I1204 19:52:46.737230   18382 main.go:141] libmachine: (addons-153447) DBG |   <name>mk-addons-153447</name>
	I1204 19:52:46.737248   18382 main.go:141] libmachine: (addons-153447) DBG |   <dns enable='no'/>
	I1204 19:52:46.737260   18382 main.go:141] libmachine: (addons-153447) DBG |   
	I1204 19:52:46.737273   18382 main.go:141] libmachine: (addons-153447) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1204 19:52:46.737286   18382 main.go:141] libmachine: (addons-153447) DBG |     <dhcp>
	I1204 19:52:46.737298   18382 main.go:141] libmachine: (addons-153447) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1204 19:52:46.737309   18382 main.go:141] libmachine: (addons-153447) DBG |     </dhcp>
	I1204 19:52:46.737319   18382 main.go:141] libmachine: (addons-153447) DBG |   </ip>
	I1204 19:52:46.737328   18382 main.go:141] libmachine: (addons-153447) DBG |   
	I1204 19:52:46.737339   18382 main.go:141] libmachine: (addons-153447) DBG | </network>
	I1204 19:52:46.737352   18382 main.go:141] libmachine: (addons-153447) DBG | 
	I1204 19:52:46.742677   18382 main.go:141] libmachine: (addons-153447) DBG | trying to create private KVM network mk-addons-153447 192.168.39.0/24...
	I1204 19:52:46.805775   18382 main.go:141] libmachine: (addons-153447) DBG | private KVM network mk-addons-153447 192.168.39.0/24 created
	I1204 19:52:46.805812   18382 main.go:141] libmachine: (addons-153447) DBG | I1204 19:52:46.805758   18404 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 19:52:46.805838   18382 main.go:141] libmachine: (addons-153447) Setting up store path in /home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447 ...
	I1204 19:52:46.805858   18382 main.go:141] libmachine: (addons-153447) Building disk image from file:///home/jenkins/minikube-integration/19985-10581/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1204 19:52:46.805882   18382 main.go:141] libmachine: (addons-153447) Downloading /home/jenkins/minikube-integration/19985-10581/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19985-10581/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1204 19:52:47.068964   18382 main.go:141] libmachine: (addons-153447) DBG | I1204 19:52:47.068807   18404 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa...
	I1204 19:52:47.265987   18382 main.go:141] libmachine: (addons-153447) DBG | I1204 19:52:47.265811   18404 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/addons-153447.rawdisk...
	I1204 19:52:47.266025   18382 main.go:141] libmachine: (addons-153447) DBG | Writing magic tar header
	I1204 19:52:47.266045   18382 main.go:141] libmachine: (addons-153447) DBG | Writing SSH key tar header
	I1204 19:52:47.266056   18382 main.go:141] libmachine: (addons-153447) DBG | I1204 19:52:47.265968   18404 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447 ...
	I1204 19:52:47.266105   18382 main.go:141] libmachine: (addons-153447) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447
	I1204 19:52:47.266130   18382 main.go:141] libmachine: (addons-153447) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube/machines
	I1204 19:52:47.266144   18382 main.go:141] libmachine: (addons-153447) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447 (perms=drwx------)
	I1204 19:52:47.266164   18382 main.go:141] libmachine: (addons-153447) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube/machines (perms=drwxr-xr-x)
	I1204 19:52:47.266174   18382 main.go:141] libmachine: (addons-153447) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube (perms=drwxr-xr-x)
	I1204 19:52:47.266185   18382 main.go:141] libmachine: (addons-153447) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581 (perms=drwxrwxr-x)
	I1204 19:52:47.266213   18382 main.go:141] libmachine: (addons-153447) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1204 19:52:47.266238   18382 main.go:141] libmachine: (addons-153447) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1204 19:52:47.266249   18382 main.go:141] libmachine: (addons-153447) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 19:52:47.266260   18382 main.go:141] libmachine: (addons-153447) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581
	I1204 19:52:47.266270   18382 main.go:141] libmachine: (addons-153447) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1204 19:52:47.266280   18382 main.go:141] libmachine: (addons-153447) DBG | Checking permissions on dir: /home/jenkins
	I1204 19:52:47.266289   18382 main.go:141] libmachine: (addons-153447) DBG | Checking permissions on dir: /home
	I1204 19:52:47.266296   18382 main.go:141] libmachine: (addons-153447) DBG | Skipping /home - not owner
	I1204 19:52:47.266306   18382 main.go:141] libmachine: (addons-153447) Creating domain...
	I1204 19:52:47.267207   18382 main.go:141] libmachine: (addons-153447) define libvirt domain using xml: 
	I1204 19:52:47.267240   18382 main.go:141] libmachine: (addons-153447) <domain type='kvm'>
	I1204 19:52:47.267251   18382 main.go:141] libmachine: (addons-153447)   <name>addons-153447</name>
	I1204 19:52:47.267267   18382 main.go:141] libmachine: (addons-153447)   <memory unit='MiB'>4000</memory>
	I1204 19:52:47.267276   18382 main.go:141] libmachine: (addons-153447)   <vcpu>2</vcpu>
	I1204 19:52:47.267285   18382 main.go:141] libmachine: (addons-153447)   <features>
	I1204 19:52:47.267294   18382 main.go:141] libmachine: (addons-153447)     <acpi/>
	I1204 19:52:47.267303   18382 main.go:141] libmachine: (addons-153447)     <apic/>
	I1204 19:52:47.267311   18382 main.go:141] libmachine: (addons-153447)     <pae/>
	I1204 19:52:47.267316   18382 main.go:141] libmachine: (addons-153447)     
	I1204 19:52:47.267321   18382 main.go:141] libmachine: (addons-153447)   </features>
	I1204 19:52:47.267326   18382 main.go:141] libmachine: (addons-153447)   <cpu mode='host-passthrough'>
	I1204 19:52:47.267333   18382 main.go:141] libmachine: (addons-153447)   
	I1204 19:52:47.267339   18382 main.go:141] libmachine: (addons-153447)   </cpu>
	I1204 19:52:47.267346   18382 main.go:141] libmachine: (addons-153447)   <os>
	I1204 19:52:47.267351   18382 main.go:141] libmachine: (addons-153447)     <type>hvm</type>
	I1204 19:52:47.267396   18382 main.go:141] libmachine: (addons-153447)     <boot dev='cdrom'/>
	I1204 19:52:47.267420   18382 main.go:141] libmachine: (addons-153447)     <boot dev='hd'/>
	I1204 19:52:47.267430   18382 main.go:141] libmachine: (addons-153447)     <bootmenu enable='no'/>
	I1204 19:52:47.267439   18382 main.go:141] libmachine: (addons-153447)   </os>
	I1204 19:52:47.267447   18382 main.go:141] libmachine: (addons-153447)   <devices>
	I1204 19:52:47.267456   18382 main.go:141] libmachine: (addons-153447)     <disk type='file' device='cdrom'>
	I1204 19:52:47.267470   18382 main.go:141] libmachine: (addons-153447)       <source file='/home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/boot2docker.iso'/>
	I1204 19:52:47.267482   18382 main.go:141] libmachine: (addons-153447)       <target dev='hdc' bus='scsi'/>
	I1204 19:52:47.267545   18382 main.go:141] libmachine: (addons-153447)       <readonly/>
	I1204 19:52:47.267569   18382 main.go:141] libmachine: (addons-153447)     </disk>
	I1204 19:52:47.267576   18382 main.go:141] libmachine: (addons-153447)     <disk type='file' device='disk'>
	I1204 19:52:47.267583   18382 main.go:141] libmachine: (addons-153447)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1204 19:52:47.267593   18382 main.go:141] libmachine: (addons-153447)       <source file='/home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/addons-153447.rawdisk'/>
	I1204 19:52:47.267600   18382 main.go:141] libmachine: (addons-153447)       <target dev='hda' bus='virtio'/>
	I1204 19:52:47.267606   18382 main.go:141] libmachine: (addons-153447)     </disk>
	I1204 19:52:47.267613   18382 main.go:141] libmachine: (addons-153447)     <interface type='network'>
	I1204 19:52:47.267619   18382 main.go:141] libmachine: (addons-153447)       <source network='mk-addons-153447'/>
	I1204 19:52:47.267625   18382 main.go:141] libmachine: (addons-153447)       <model type='virtio'/>
	I1204 19:52:47.267630   18382 main.go:141] libmachine: (addons-153447)     </interface>
	I1204 19:52:47.267635   18382 main.go:141] libmachine: (addons-153447)     <interface type='network'>
	I1204 19:52:47.267641   18382 main.go:141] libmachine: (addons-153447)       <source network='default'/>
	I1204 19:52:47.267647   18382 main.go:141] libmachine: (addons-153447)       <model type='virtio'/>
	I1204 19:52:47.267660   18382 main.go:141] libmachine: (addons-153447)     </interface>
	I1204 19:52:47.267672   18382 main.go:141] libmachine: (addons-153447)     <serial type='pty'>
	I1204 19:52:47.267681   18382 main.go:141] libmachine: (addons-153447)       <target port='0'/>
	I1204 19:52:47.267690   18382 main.go:141] libmachine: (addons-153447)     </serial>
	I1204 19:52:47.267699   18382 main.go:141] libmachine: (addons-153447)     <console type='pty'>
	I1204 19:52:47.267714   18382 main.go:141] libmachine: (addons-153447)       <target type='serial' port='0'/>
	I1204 19:52:47.267727   18382 main.go:141] libmachine: (addons-153447)     </console>
	I1204 19:52:47.267738   18382 main.go:141] libmachine: (addons-153447)     <rng model='virtio'>
	I1204 19:52:47.267751   18382 main.go:141] libmachine: (addons-153447)       <backend model='random'>/dev/random</backend>
	I1204 19:52:47.267762   18382 main.go:141] libmachine: (addons-153447)     </rng>
	I1204 19:52:47.267773   18382 main.go:141] libmachine: (addons-153447)     
	I1204 19:52:47.267778   18382 main.go:141] libmachine: (addons-153447)     
	I1204 19:52:47.267786   18382 main.go:141] libmachine: (addons-153447)   </devices>
	I1204 19:52:47.267796   18382 main.go:141] libmachine: (addons-153447) </domain>
	I1204 19:52:47.267802   18382 main.go:141] libmachine: (addons-153447) 
	I1204 19:52:47.273713   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:67:c5:84 in network default
	I1204 19:52:47.274172   18382 main.go:141] libmachine: (addons-153447) Ensuring networks are active...
	I1204 19:52:47.274194   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:52:47.274801   18382 main.go:141] libmachine: (addons-153447) Ensuring network default is active
	I1204 19:52:47.275151   18382 main.go:141] libmachine: (addons-153447) Ensuring network mk-addons-153447 is active
	I1204 19:52:47.275788   18382 main.go:141] libmachine: (addons-153447) Getting domain xml...
	I1204 19:52:47.276511   18382 main.go:141] libmachine: (addons-153447) Creating domain...
	I1204 19:52:48.677064   18382 main.go:141] libmachine: (addons-153447) Waiting to get IP...
	I1204 19:52:48.677954   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:52:48.678355   18382 main.go:141] libmachine: (addons-153447) DBG | unable to find current IP address of domain addons-153447 in network mk-addons-153447
	I1204 19:52:48.678382   18382 main.go:141] libmachine: (addons-153447) DBG | I1204 19:52:48.678316   18404 retry.go:31] will retry after 220.610561ms: waiting for machine to come up
	I1204 19:52:48.900700   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:52:48.901137   18382 main.go:141] libmachine: (addons-153447) DBG | unable to find current IP address of domain addons-153447 in network mk-addons-153447
	I1204 19:52:48.901158   18382 main.go:141] libmachine: (addons-153447) DBG | I1204 19:52:48.901108   18404 retry.go:31] will retry after 253.032712ms: waiting for machine to come up
	I1204 19:52:49.155327   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:52:49.155667   18382 main.go:141] libmachine: (addons-153447) DBG | unable to find current IP address of domain addons-153447 in network mk-addons-153447
	I1204 19:52:49.155707   18382 main.go:141] libmachine: (addons-153447) DBG | I1204 19:52:49.155644   18404 retry.go:31] will retry after 305.740588ms: waiting for machine to come up
	I1204 19:52:49.463331   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:52:49.463877   18382 main.go:141] libmachine: (addons-153447) DBG | unable to find current IP address of domain addons-153447 in network mk-addons-153447
	I1204 19:52:49.463898   18382 main.go:141] libmachine: (addons-153447) DBG | I1204 19:52:49.463842   18404 retry.go:31] will retry after 387.143331ms: waiting for machine to come up
	I1204 19:52:49.852222   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:52:49.852653   18382 main.go:141] libmachine: (addons-153447) DBG | unable to find current IP address of domain addons-153447 in network mk-addons-153447
	I1204 19:52:49.852684   18382 main.go:141] libmachine: (addons-153447) DBG | I1204 19:52:49.852592   18404 retry.go:31] will retry after 582.426176ms: waiting for machine to come up
	I1204 19:52:50.436277   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:52:50.436736   18382 main.go:141] libmachine: (addons-153447) DBG | unable to find current IP address of domain addons-153447 in network mk-addons-153447
	I1204 19:52:50.436768   18382 main.go:141] libmachine: (addons-153447) DBG | I1204 19:52:50.436676   18404 retry.go:31] will retry after 748.274759ms: waiting for machine to come up
	I1204 19:52:51.186077   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:52:51.186537   18382 main.go:141] libmachine: (addons-153447) DBG | unable to find current IP address of domain addons-153447 in network mk-addons-153447
	I1204 19:52:51.186575   18382 main.go:141] libmachine: (addons-153447) DBG | I1204 19:52:51.186499   18404 retry.go:31] will retry after 956.999473ms: waiting for machine to come up
	I1204 19:52:52.145482   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:52:52.145876   18382 main.go:141] libmachine: (addons-153447) DBG | unable to find current IP address of domain addons-153447 in network mk-addons-153447
	I1204 19:52:52.145911   18382 main.go:141] libmachine: (addons-153447) DBG | I1204 19:52:52.145818   18404 retry.go:31] will retry after 1.355766127s: waiting for machine to come up
	I1204 19:52:53.502894   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:52:53.503400   18382 main.go:141] libmachine: (addons-153447) DBG | unable to find current IP address of domain addons-153447 in network mk-addons-153447
	I1204 19:52:53.503427   18382 main.go:141] libmachine: (addons-153447) DBG | I1204 19:52:53.503344   18404 retry.go:31] will retry after 1.611102605s: waiting for machine to come up
	I1204 19:52:55.117027   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:52:55.117459   18382 main.go:141] libmachine: (addons-153447) DBG | unable to find current IP address of domain addons-153447 in network mk-addons-153447
	I1204 19:52:55.117483   18382 main.go:141] libmachine: (addons-153447) DBG | I1204 19:52:55.117409   18404 retry.go:31] will retry after 2.220438115s: waiting for machine to come up
	I1204 19:52:57.339784   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:52:57.340272   18382 main.go:141] libmachine: (addons-153447) DBG | unable to find current IP address of domain addons-153447 in network mk-addons-153447
	I1204 19:52:57.340305   18382 main.go:141] libmachine: (addons-153447) DBG | I1204 19:52:57.340212   18404 retry.go:31] will retry after 2.81848192s: waiting for machine to come up
	I1204 19:53:00.159900   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:00.160301   18382 main.go:141] libmachine: (addons-153447) DBG | unable to find current IP address of domain addons-153447 in network mk-addons-153447
	I1204 19:53:00.160330   18382 main.go:141] libmachine: (addons-153447) DBG | I1204 19:53:00.160279   18404 retry.go:31] will retry after 3.554617985s: waiting for machine to come up
	I1204 19:53:03.717404   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:03.717809   18382 main.go:141] libmachine: (addons-153447) DBG | unable to find current IP address of domain addons-153447 in network mk-addons-153447
	I1204 19:53:03.717836   18382 main.go:141] libmachine: (addons-153447) DBG | I1204 19:53:03.717785   18404 retry.go:31] will retry after 3.395715903s: waiting for machine to come up
	I1204 19:53:07.114926   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:07.115414   18382 main.go:141] libmachine: (addons-153447) Found IP for machine: 192.168.39.11
	I1204 19:53:07.115434   18382 main.go:141] libmachine: (addons-153447) Reserving static IP address...
	I1204 19:53:07.115445   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has current primary IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:07.115776   18382 main.go:141] libmachine: (addons-153447) DBG | unable to find host DHCP lease matching {name: "addons-153447", mac: "52:54:00:39:ce:2c", ip: "192.168.39.11"} in network mk-addons-153447
	I1204 19:53:07.283246   18382 main.go:141] libmachine: (addons-153447) DBG | Getting to WaitForSSH function...
	I1204 19:53:07.283280   18382 main.go:141] libmachine: (addons-153447) Reserved static IP address: 192.168.39.11
	I1204 19:53:07.283294   18382 main.go:141] libmachine: (addons-153447) Waiting for SSH to be available...
	I1204 19:53:07.285798   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:07.286231   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:minikube Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:07.286261   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:07.286573   18382 main.go:141] libmachine: (addons-153447) DBG | Using SSH client type: external
	I1204 19:53:07.286588   18382 main.go:141] libmachine: (addons-153447) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa (-rw-------)
	I1204 19:53:07.286607   18382 main.go:141] libmachine: (addons-153447) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 19:53:07.286615   18382 main.go:141] libmachine: (addons-153447) DBG | About to run SSH command:
	I1204 19:53:07.286625   18382 main.go:141] libmachine: (addons-153447) DBG | exit 0
	I1204 19:53:07.419447   18382 main.go:141] libmachine: (addons-153447) DBG | SSH cmd err, output: <nil>: 
	I1204 19:53:07.419713   18382 main.go:141] libmachine: (addons-153447) KVM machine creation complete!
	I1204 19:53:07.420082   18382 main.go:141] libmachine: (addons-153447) Calling .GetConfigRaw
	I1204 19:53:07.426416   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:07.426639   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:07.426807   18382 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1204 19:53:07.426823   18382 main.go:141] libmachine: (addons-153447) Calling .GetState
	I1204 19:53:07.427988   18382 main.go:141] libmachine: Detecting operating system of created instance...
	I1204 19:53:07.428003   18382 main.go:141] libmachine: Waiting for SSH to be available...
	I1204 19:53:07.428011   18382 main.go:141] libmachine: Getting to WaitForSSH function...
	I1204 19:53:07.428019   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:07.430003   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:07.430401   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:07.430421   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:07.430570   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:07.430736   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:07.430884   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:07.431042   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:07.431228   18382 main.go:141] libmachine: Using SSH client type: native
	I1204 19:53:07.431445   18382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1204 19:53:07.431460   18382 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1204 19:53:07.538357   18382 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 19:53:07.538382   18382 main.go:141] libmachine: Detecting the provisioner...
	I1204 19:53:07.538392   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:07.541199   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:07.541527   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:07.541553   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:07.541698   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:07.541875   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:07.542062   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:07.542219   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:07.542373   18382 main.go:141] libmachine: Using SSH client type: native
	I1204 19:53:07.542566   18382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1204 19:53:07.542578   18382 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1204 19:53:07.647935   18382 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1204 19:53:07.648023   18382 main.go:141] libmachine: found compatible host: buildroot
	I1204 19:53:07.648034   18382 main.go:141] libmachine: Provisioning with buildroot...
	I1204 19:53:07.648041   18382 main.go:141] libmachine: (addons-153447) Calling .GetMachineName
	I1204 19:53:07.648289   18382 buildroot.go:166] provisioning hostname "addons-153447"
	I1204 19:53:07.648309   18382 main.go:141] libmachine: (addons-153447) Calling .GetMachineName
	I1204 19:53:07.648469   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:07.650978   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:07.651336   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:07.651367   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:07.651542   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:07.651720   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:07.651903   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:07.652047   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:07.652216   18382 main.go:141] libmachine: Using SSH client type: native
	I1204 19:53:07.652387   18382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1204 19:53:07.652404   18382 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-153447 && echo "addons-153447" | sudo tee /etc/hostname
	I1204 19:53:07.772832   18382 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-153447
	
	I1204 19:53:07.772857   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:07.775354   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:07.775668   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:07.775696   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:07.775879   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:07.776022   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:07.776187   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:07.776330   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:07.776488   18382 main.go:141] libmachine: Using SSH client type: native
	I1204 19:53:07.776659   18382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1204 19:53:07.776675   18382 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-153447' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-153447/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-153447' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 19:53:07.891048   18382 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 19:53:07.891083   18382 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 19:53:07.891137   18382 buildroot.go:174] setting up certificates
	I1204 19:53:07.891157   18382 provision.go:84] configureAuth start
	I1204 19:53:07.891174   18382 main.go:141] libmachine: (addons-153447) Calling .GetMachineName
	I1204 19:53:07.891447   18382 main.go:141] libmachine: (addons-153447) Calling .GetIP
	I1204 19:53:07.894138   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:07.894446   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:07.894471   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:07.894631   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:07.896867   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:07.897245   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:07.897272   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:07.897443   18382 provision.go:143] copyHostCerts
	I1204 19:53:07.897523   18382 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 19:53:07.897659   18382 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 19:53:07.897741   18382 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 19:53:07.897811   18382 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.addons-153447 san=[127.0.0.1 192.168.39.11 addons-153447 localhost minikube]
	I1204 19:53:08.021702   18382 provision.go:177] copyRemoteCerts
	I1204 19:53:08.021779   18382 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 19:53:08.021808   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:08.024316   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:08.024626   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:08.024652   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:08.024834   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:08.025007   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:08.025132   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:08.025246   18382 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa Username:docker}
	I1204 19:53:08.108792   18382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 19:53:08.131717   18382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1204 19:53:08.153700   18382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1204 19:53:08.174745   18382 provision.go:87] duration metric: took 283.573935ms to configureAuth
	I1204 19:53:08.174770   18382 buildroot.go:189] setting minikube options for container-runtime
	I1204 19:53:08.174929   18382 config.go:182] Loaded profile config "addons-153447": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 19:53:08.175010   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:08.177445   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:08.177751   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:08.177777   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:08.177907   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:08.178081   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:08.178215   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:08.178330   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:08.178454   18382 main.go:141] libmachine: Using SSH client type: native
	I1204 19:53:08.178690   18382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1204 19:53:08.178709   18382 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 19:53:08.408937   18382 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 19:53:08.408970   18382 main.go:141] libmachine: Checking connection to Docker...
	I1204 19:53:08.408992   18382 main.go:141] libmachine: (addons-153447) Calling .GetURL
	I1204 19:53:08.410371   18382 main.go:141] libmachine: (addons-153447) DBG | Using libvirt version 6000000
	I1204 19:53:08.412390   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:08.412691   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:08.412714   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:08.412839   18382 main.go:141] libmachine: Docker is up and running!
	I1204 19:53:08.412852   18382 main.go:141] libmachine: Reticulating splines...
	I1204 19:53:08.412921   18382 client.go:171] duration metric: took 22.058627861s to LocalClient.Create
	I1204 19:53:08.412958   18382 start.go:167] duration metric: took 22.058809655s to libmachine.API.Create "addons-153447"
	I1204 19:53:08.412977   18382 start.go:293] postStartSetup for "addons-153447" (driver="kvm2")
	I1204 19:53:08.412992   18382 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 19:53:08.413014   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:08.413282   18382 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 19:53:08.413305   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:08.415344   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:08.415731   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:08.415749   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:08.415900   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:08.416054   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:08.416206   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:08.416317   18382 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa Username:docker}
	I1204 19:53:08.498234   18382 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 19:53:08.502169   18382 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 19:53:08.502191   18382 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 19:53:08.502248   18382 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 19:53:08.502271   18382 start.go:296] duration metric: took 89.286654ms for postStartSetup
	I1204 19:53:08.502301   18382 main.go:141] libmachine: (addons-153447) Calling .GetConfigRaw
	I1204 19:53:08.502792   18382 main.go:141] libmachine: (addons-153447) Calling .GetIP
	I1204 19:53:08.505073   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:08.505451   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:08.505465   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:08.505680   18382 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/config.json ...
	I1204 19:53:08.505852   18382 start.go:128] duration metric: took 22.169460096s to createHost
	I1204 19:53:08.505873   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:08.507934   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:08.508266   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:08.508296   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:08.508425   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:08.508606   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:08.508720   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:08.508850   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:08.508964   18382 main.go:141] libmachine: Using SSH client type: native
	I1204 19:53:08.509103   18382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1204 19:53:08.509119   18382 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 19:53:08.615973   18382 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733341988.586881895
	
	I1204 19:53:08.615999   18382 fix.go:216] guest clock: 1733341988.586881895
	I1204 19:53:08.616008   18382 fix.go:229] Guest: 2024-12-04 19:53:08.586881895 +0000 UTC Remote: 2024-12-04 19:53:08.505863098 +0000 UTC m=+22.270733940 (delta=81.018797ms)
	I1204 19:53:08.616051   18382 fix.go:200] guest clock delta is within tolerance: 81.018797ms
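
	For reference, the guest-clock check logged above is just an absolute-delta comparison between the guest's "date +%s.%N" and the host's wall clock. A minimal Go sketch of that logic, using the two timestamps from the log (the 1s tolerance is an illustrative assumption, not taken from this run):

	package main

	import (
		"fmt"
		"time"
	)

	// withinTolerance reports whether the guest clock is close enough to the
	// host clock. The tolerance value here is an illustrative assumption.
	func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		guest := time.Unix(0, 1733341988586881895) // guest "date +%s.%N" from the log
		host := time.Date(2024, 12, 4, 19, 53, 8, 505863098, time.UTC)
		delta, ok := withinTolerance(guest, host, time.Second)
		fmt.Printf("delta=%v within tolerance=%v\n", delta, ok) // prints delta=81.018797ms, matching the log
	}
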
	I1204 19:53:08.616057   18382 start.go:83] releasing machines lock for "addons-153447", held for 22.279731412s
	I1204 19:53:08.616082   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:08.616317   18382 main.go:141] libmachine: (addons-153447) Calling .GetIP
	I1204 19:53:08.619068   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:08.619337   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:08.619361   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:08.619505   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:08.620009   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:08.620157   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:08.620261   18382 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 19:53:08.620319   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:08.620348   18382 ssh_runner.go:195] Run: cat /version.json
	I1204 19:53:08.620370   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:08.622829   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:08.622856   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:08.623140   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:08.623169   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:08.623201   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:08.623217   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:08.623261   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:08.623444   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:08.623516   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:08.623591   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:08.623660   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:08.623722   18382 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa Username:docker}
	I1204 19:53:08.623772   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:08.623874   18382 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa Username:docker}
	I1204 19:53:08.722699   18382 ssh_runner.go:195] Run: systemctl --version
	I1204 19:53:08.728694   18382 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 19:53:08.898480   18382 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 19:53:08.904661   18382 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 19:53:08.904726   18382 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 19:53:08.919600   18382 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 19:53:08.919628   18382 start.go:495] detecting cgroup driver to use...
	I1204 19:53:08.919688   18382 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 19:53:08.935972   18382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 19:53:08.949389   18382 docker.go:217] disabling cri-docker service (if available) ...
	I1204 19:53:08.949471   18382 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 19:53:08.963034   18382 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 19:53:08.975988   18382 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 19:53:09.088160   18382 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 19:53:09.221122   18382 docker.go:233] disabling docker service ...
	I1204 19:53:09.221201   18382 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 19:53:09.235168   18382 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 19:53:09.247641   18382 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 19:53:09.386188   18382 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 19:53:09.510658   18382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 19:53:09.524768   18382 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 19:53:09.542553   18382 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 19:53:09.542636   18382 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 19:53:09.553256   18382 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 19:53:09.553336   18382 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 19:53:09.563562   18382 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 19:53:09.573318   18382 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 19:53:09.583217   18382 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 19:53:09.593199   18382 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 19:53:09.603064   18382 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 19:53:09.619139   18382 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
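
	The sed commands above amount to a few line-level rewrites of /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, switch the cgroup manager to cgroupfs, and move conmon into the pod cgroup. As a rough illustration only (the sample input below is hypothetical, not the VM's real file), the same rewrites expressed in plain Go:

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		// Hypothetical drop-in contents; the real file on the VM may differ.
		conf := `pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "systemd"
	conmon_cgroup = "system.slice"
	`
		// Mirror the sed commands from the log.
		rePause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		conf = rePause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)

		reCgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
		conf = reCgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

		reConmonDel := regexp.MustCompile(`(?m)^\s*conmon_cgroup = .*$\n`)
		conf = reConmonDel.ReplaceAllString(conf, "")

		reConmonAdd := regexp.MustCompile(`(?m)^\s*cgroup_manager = .*$`)
		conf = reConmonAdd.ReplaceAllString(conf, "${0}\nconmon_cgroup = \"pod\"")

		fmt.Print(conf)
	}

	(The default_sysctls handling from the last two sed invocations is omitted here for brevity.)
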
	I1204 19:53:09.629164   18382 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 19:53:09.637905   18382 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 19:53:09.637962   18382 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 19:53:09.649712   18382 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 19:53:09.658710   18382 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 19:53:09.769867   18382 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 19:53:09.855232   18382 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 19:53:09.855349   18382 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 19:53:09.860216   18382 start.go:563] Will wait 60s for crictl version
	I1204 19:53:09.860283   18382 ssh_runner.go:195] Run: which crictl
	I1204 19:53:09.863782   18382 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 19:53:09.901750   18382 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 19:53:09.901869   18382 ssh_runner.go:195] Run: crio --version
	I1204 19:53:09.928042   18382 ssh_runner.go:195] Run: crio --version
	I1204 19:53:09.956945   18382 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 19:53:09.958112   18382 main.go:141] libmachine: (addons-153447) Calling .GetIP
	I1204 19:53:09.961348   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:09.961736   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:09.961764   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:09.961944   18382 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1204 19:53:09.965808   18382 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 19:53:09.978196   18382 kubeadm.go:883] updating cluster {Name:addons-153447 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-153447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 19:53:09.978342   18382 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 19:53:09.978391   18382 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 19:53:10.008735   18382 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1204 19:53:10.008809   18382 ssh_runner.go:195] Run: which lz4
	I1204 19:53:10.012532   18382 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1204 19:53:10.016331   18382 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1204 19:53:10.016358   18382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1204 19:53:11.140810   18382 crio.go:462] duration metric: took 1.128301132s to copy over tarball
	I1204 19:53:11.140879   18382 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1204 19:53:13.226453   18382 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.085537454s)
	I1204 19:53:13.226490   18382 crio.go:469] duration metric: took 2.085648381s to extract the tarball
	I1204 19:53:13.226502   18382 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1204 19:53:13.262820   18382 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 19:53:13.303573   18382 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 19:53:13.303598   18382 cache_images.go:84] Images are preloaded, skipping loading
	I1204 19:53:13.303606   18382 kubeadm.go:934] updating node { 192.168.39.11 8443 v1.31.2 crio true true} ...
	I1204 19:53:13.303707   18382 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-153447 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-153447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
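
	The kubelet drop-in printed above is essentially a small template filled in with the node's name, IP, and Kubernetes version. A self-contained sketch of that idea in Go's text/template (the template text paraphrases the log output above and the Node type is hypothetical, not lifted from minikube's sources):

	package main

	import (
		"os"
		"text/template"
	)

	// Node holds the handful of values the drop-in needs (hypothetical type).
	type Node struct {
		Name, IP, KubernetesVersion string
	}

	const dropIn = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Name}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(dropIn))
		// Values taken from the log above.
		_ = t.Execute(os.Stdout, Node{Name: "addons-153447", IP: "192.168.39.11", KubernetesVersion: "v1.31.2"})
	}
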
	I1204 19:53:13.303781   18382 ssh_runner.go:195] Run: crio config
	I1204 19:53:13.346601   18382 cni.go:84] Creating CNI manager for ""
	I1204 19:53:13.346625   18382 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 19:53:13.346634   18382 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 19:53:13.346653   18382 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.11 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-153447 NodeName:addons-153447 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 19:53:13.346764   18382 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-153447"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.11"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.11"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
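
	One quick consistency check on a kubeadm config like the one above is that the pod subnet (podSubnet: 10.244.0.0/16) and the service subnet (serviceSubnet: 10.96.0.0/12) do not overlap. A minimal Go sketch of that check, purely illustrative:

	package main

	import (
		"fmt"
		"net"
	)

	// overlaps reports whether two CIDR blocks share any addresses. For aligned
	// CIDR blocks, overlap implies one block contains the other's base address.
	func overlaps(a, b *net.IPNet) bool {
		return a.Contains(b.IP) || b.Contains(a.IP)
	}

	func main() {
		_, pods, _ := net.ParseCIDR("10.244.0.0/16")    // podSubnet from the config above
		_, services, _ := net.ParseCIDR("10.96.0.0/12") // serviceSubnet from the config above
		fmt.Println("pod/service CIDRs overlap:", overlaps(pods, services)) // false for these values
	}
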
	I1204 19:53:13.346826   18382 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 19:53:13.356610   18382 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 19:53:13.356688   18382 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 19:53:13.365746   18382 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1204 19:53:13.381580   18382 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 19:53:13.396858   18382 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
	I1204 19:53:13.412317   18382 ssh_runner.go:195] Run: grep 192.168.39.11	control-plane.minikube.internal$ /etc/hosts
	I1204 19:53:13.415962   18382 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 19:53:13.427359   18382 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 19:53:13.560420   18382 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 19:53:13.578047   18382 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447 for IP: 192.168.39.11
	I1204 19:53:13.578076   18382 certs.go:194] generating shared ca certs ...
	I1204 19:53:13.578095   18382 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 19:53:13.578248   18382 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 19:53:13.621164   18382 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt ...
	I1204 19:53:13.621189   18382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt: {Name:mk5e28301d7845db54aad68aa44fc989b4fc862b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 19:53:13.621368   18382 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key ...
	I1204 19:53:13.621381   18382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key: {Name:mk3eccff5973b34611a3e58cc387103e6760de77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 19:53:13.621488   18382 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 19:53:13.824572   18382 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt ...
	I1204 19:53:13.824600   18382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt: {Name:mka90f4f3fae60930ae311fa0d6db47c930a21b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 19:53:13.824793   18382 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key ...
	I1204 19:53:13.824808   18382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key: {Name:mkba8e9a6093318744dc7550f69f125ae2b58894 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 19:53:13.824902   18382 certs.go:256] generating profile certs ...
	I1204 19:53:13.824956   18382 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.key
	I1204 19:53:13.824977   18382 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.crt with IP's: []
	I1204 19:53:14.024648   18382 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.crt ...
	I1204 19:53:14.024687   18382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.crt: {Name:mk4491ff1e36bc2732bcb103a335d60aef8bd189 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 19:53:14.024858   18382 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.key ...
	I1204 19:53:14.024869   18382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.key: {Name:mk76ec22598afcca1648bb9b1e52f4356aae8867 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 19:53:14.024939   18382 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/apiserver.key.d7f96934
	I1204 19:53:14.024956   18382 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/apiserver.crt.d7f96934 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.11]
	I1204 19:53:14.114481   18382 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/apiserver.crt.d7f96934 ...
	I1204 19:53:14.114518   18382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/apiserver.crt.d7f96934: {Name:mke7313468ca05545af3e6cd0fb64128caa62c5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 19:53:14.114693   18382 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/apiserver.key.d7f96934 ...
	I1204 19:53:14.114707   18382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/apiserver.key.d7f96934: {Name:mk55ad4aa8e3b4d1099e160b1f9c20f2efccb6ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 19:53:14.114783   18382 certs.go:381] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/apiserver.crt.d7f96934 -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/apiserver.crt
	I1204 19:53:14.114862   18382 certs.go:385] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/apiserver.key.d7f96934 -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/apiserver.key
	I1204 19:53:14.114915   18382 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/proxy-client.key
	I1204 19:53:14.114935   18382 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/proxy-client.crt with IP's: []
	I1204 19:53:14.222659   18382 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/proxy-client.crt ...
	I1204 19:53:14.222693   18382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/proxy-client.crt: {Name:mk6758753d68639fe71b0500dd58f1c7b5845b3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 19:53:14.222864   18382 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/proxy-client.key ...
	I1204 19:53:14.222876   18382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/proxy-client.key: {Name:mkfdb9d0829aab5b78a2c9145de4a14ea590c43e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 19:53:14.223067   18382 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 19:53:14.223107   18382 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 19:53:14.223142   18382 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 19:53:14.223169   18382 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 19:53:14.223806   18382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 19:53:14.251293   18382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 19:53:14.285706   18382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 19:53:14.313559   18382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 19:53:14.336263   18382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1204 19:53:14.358582   18382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1204 19:53:14.380032   18382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 19:53:14.401302   18382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1204 19:53:14.422894   18382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 19:53:14.445237   18382 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
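
	The certificate steps above (generating the shared minikubeCA, then signing the profile and proxy-client certs with it) rest on standard X.509 operations. A minimal sketch of just the self-signed CA part using Go's crypto/x509; the key size and validity period are illustrative assumptions, not values taken from this run:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"time"
	)

	func main() {
		// Generate a key pair and a self-signed CA certificate, then print it as PEM.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
	}
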
	I1204 19:53:14.460499   18382 ssh_runner.go:195] Run: openssl version
	I1204 19:53:14.465971   18382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 19:53:14.475424   18382 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 19:53:14.479312   18382 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 19:53:14.479383   18382 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 19:53:14.484736   18382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 19:53:14.494033   18382 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 19:53:14.497876   18382 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1204 19:53:14.497938   18382 kubeadm.go:392] StartCluster: {Name:addons-153447 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-153447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 19:53:14.498029   18382 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 19:53:14.498096   18382 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 19:53:14.531864   18382 cri.go:89] found id: ""
	I1204 19:53:14.531935   18382 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 19:53:14.542046   18382 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 19:53:14.551315   18382 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 19:53:14.559730   18382 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 19:53:14.559749   18382 kubeadm.go:157] found existing configuration files:
	
	I1204 19:53:14.559796   18382 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 19:53:14.568044   18382 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 19:53:14.568099   18382 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 19:53:14.576333   18382 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 19:53:14.584291   18382 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 19:53:14.584335   18382 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 19:53:14.592736   18382 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 19:53:14.600541   18382 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 19:53:14.600599   18382 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 19:53:14.608905   18382 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 19:53:14.616745   18382 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 19:53:14.616787   18382 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 19:53:14.624914   18382 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 19:53:14.774309   18382 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 19:53:24.573120   18382 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1204 19:53:24.573216   18382 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 19:53:24.573336   18382 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 19:53:24.573466   18382 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 19:53:24.573597   18382 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1204 19:53:24.573688   18382 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 19:53:24.575161   18382 out.go:235]   - Generating certificates and keys ...
	I1204 19:53:24.575271   18382 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 19:53:24.575334   18382 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 19:53:24.575432   18382 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1204 19:53:24.575489   18382 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1204 19:53:24.575540   18382 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1204 19:53:24.575583   18382 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1204 19:53:24.575651   18382 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1204 19:53:24.575784   18382 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-153447 localhost] and IPs [192.168.39.11 127.0.0.1 ::1]
	I1204 19:53:24.575863   18382 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1204 19:53:24.576033   18382 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-153447 localhost] and IPs [192.168.39.11 127.0.0.1 ::1]
	I1204 19:53:24.576130   18382 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1204 19:53:24.576214   18382 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1204 19:53:24.576278   18382 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1204 19:53:24.576357   18382 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 19:53:24.576410   18382 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 19:53:24.576463   18382 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1204 19:53:24.576523   18382 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 19:53:24.576583   18382 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 19:53:24.576641   18382 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 19:53:24.576720   18382 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 19:53:24.576781   18382 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 19:53:24.577986   18382 out.go:235]   - Booting up control plane ...
	I1204 19:53:24.578076   18382 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 19:53:24.578147   18382 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 19:53:24.578210   18382 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 19:53:24.578299   18382 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 19:53:24.578374   18382 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 19:53:24.578425   18382 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 19:53:24.578535   18382 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1204 19:53:24.578628   18382 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1204 19:53:24.578679   18382 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.003127844s
	I1204 19:53:24.578741   18382 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1204 19:53:24.578798   18382 kubeadm.go:310] [api-check] The API server is healthy after 5.001668295s
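	(The two [kubelet-check]/[api-check] messages above report a simple poll-until-healthy loop. Below is a minimal Go sketch of that polling pattern for reference; it is not kubeadm's implementation, and the endpoint URL and 4m0s timeout are simply taken from the log messages above.)

	    // healthzwait.go - minimal sketch of polling a healthz endpoint until
	    // it answers HTTP 200 or a deadline passes, as the [kubelet-check]
	    // and [api-check] phases above report doing.
	    package main

	    import (
	        "fmt"
	        "net/http"
	        "time"
	    )

	    // waitHealthy polls url until it returns 200 OK or timeout elapses.
	    func waitHealthy(url string, timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            resp, err := http.Get(url)
	            if err == nil {
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    return nil
	                }
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("%s not healthy after %s", url, timeout)
	    }

	    func main() {
	        // Endpoint and timeout taken from the log lines above.
	        if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
	            fmt.Println(err)
	        }
	    }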
	I1204 19:53:24.578886   18382 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1204 19:53:24.579012   18382 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1204 19:53:24.579105   18382 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1204 19:53:24.579292   18382 kubeadm.go:310] [mark-control-plane] Marking the node addons-153447 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1204 19:53:24.579355   18382 kubeadm.go:310] [bootstrap-token] Using token: 4bg971.gwggowzkc8ok3y10
	I1204 19:53:24.581425   18382 out.go:235]   - Configuring RBAC rules ...
	I1204 19:53:24.581515   18382 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1204 19:53:24.581585   18382 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1204 19:53:24.581705   18382 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1204 19:53:24.581826   18382 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1204 19:53:24.581942   18382 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1204 19:53:24.582045   18382 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1204 19:53:24.582147   18382 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1204 19:53:24.582186   18382 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1204 19:53:24.582248   18382 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1204 19:53:24.582259   18382 kubeadm.go:310] 
	I1204 19:53:24.582347   18382 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1204 19:53:24.582355   18382 kubeadm.go:310] 
	I1204 19:53:24.582463   18382 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1204 19:53:24.582471   18382 kubeadm.go:310] 
	I1204 19:53:24.582507   18382 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1204 19:53:24.582590   18382 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1204 19:53:24.582663   18382 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1204 19:53:24.582672   18382 kubeadm.go:310] 
	I1204 19:53:24.582745   18382 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1204 19:53:24.582754   18382 kubeadm.go:310] 
	I1204 19:53:24.582826   18382 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1204 19:53:24.582836   18382 kubeadm.go:310] 
	I1204 19:53:24.582887   18382 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1204 19:53:24.582963   18382 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1204 19:53:24.583047   18382 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1204 19:53:24.583055   18382 kubeadm.go:310] 
	I1204 19:53:24.583141   18382 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1204 19:53:24.583215   18382 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1204 19:53:24.583222   18382 kubeadm.go:310] 
	I1204 19:53:24.583294   18382 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4bg971.gwggowzkc8ok3y10 \
	I1204 19:53:24.583424   18382 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 \
	I1204 19:53:24.583448   18382 kubeadm.go:310] 	--control-plane 
	I1204 19:53:24.583458   18382 kubeadm.go:310] 
	I1204 19:53:24.583533   18382 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1204 19:53:24.583539   18382 kubeadm.go:310] 
	I1204 19:53:24.583611   18382 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4bg971.gwggowzkc8ok3y10 \
	I1204 19:53:24.583708   18382 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 
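	(The join commands printed above pin the cluster CA via --discovery-token-ca-cert-hash, which kubeadm defines as the SHA-256 digest of the CA certificate's DER-encoded public key (SPKI). The Go sketch below shows how that value can be recomputed for verification; it is an illustration, not minikube or kubeadm code, and the ca.crt path is an assumption based on the certificateDir folder logged earlier.)

	    // cacerthash.go - minimal sketch: recompute the discovery-token-ca-cert-hash
	    // (sha256 over the CA certificate's DER-encoded SubjectPublicKeyInfo).
	    package main

	    import (
	        "crypto/sha256"
	        "crypto/x509"
	        "encoding/pem"
	        "fmt"
	        "os"
	    )

	    func main() {
	        // Assumed path, derived from the certificateDir "/var/lib/minikube/certs" above.
	        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	        if err != nil {
	            panic(err)
	        }
	        block, _ := pem.Decode(pemBytes)
	        if block == nil {
	            panic("no PEM block found in ca.crt")
	        }
	        cert, err := x509.ParseCertificate(block.Bytes)
	        if err != nil {
	            panic(err)
	        }
	        // Hash the DER-encoded SubjectPublicKeyInfo, matching kubeadm's definition.
	        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	        if err != nil {
	            panic(err)
	        }
	        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
	    }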
	I1204 19:53:24.583718   18382 cni.go:84] Creating CNI manager for ""
	I1204 19:53:24.583724   18382 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 19:53:24.585128   18382 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 19:53:24.586377   18382 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 19:53:24.597643   18382 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1204 19:53:24.615131   18382 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 19:53:24.615224   18382 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 19:53:24.615266   18382 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-153447 minikube.k8s.io/updated_at=2024_12_04T19_53_24_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59 minikube.k8s.io/name=addons-153447 minikube.k8s.io/primary=true
	I1204 19:53:24.641057   18382 ops.go:34] apiserver oom_adj: -16
	I1204 19:53:24.769456   18382 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 19:53:25.269645   18382 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 19:53:25.770431   18382 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 19:53:26.269624   18382 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 19:53:26.770299   18382 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 19:53:27.269580   18382 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 19:53:27.769760   18382 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 19:53:28.270087   18382 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 19:53:28.349514   18382 kubeadm.go:1113] duration metric: took 3.73435911s to wait for elevateKubeSystemPrivileges
	I1204 19:53:28.349546   18382 kubeadm.go:394] duration metric: took 13.851614256s to StartCluster
	I1204 19:53:28.349562   18382 settings.go:142] acquiring lock: {Name:mk51df5708ef0b8fe125ead566b8d3e857234e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 19:53:28.349670   18382 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 19:53:28.349994   18382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/kubeconfig: {Name:mk338cb7deb77a607d0c199d94a556bdfd19bef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 19:53:28.350170   18382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1204 19:53:28.350188   18382 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 19:53:28.350234   18382 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1204 19:53:28.350355   18382 addons.go:69] Setting yakd=true in profile "addons-153447"
	I1204 19:53:28.350364   18382 addons.go:69] Setting ingress=true in profile "addons-153447"
	I1204 19:53:28.350377   18382 addons.go:234] Setting addon yakd=true in "addons-153447"
	I1204 19:53:28.350381   18382 addons.go:234] Setting addon ingress=true in "addons-153447"
	I1204 19:53:28.350389   18382 addons.go:69] Setting registry=true in profile "addons-153447"
	I1204 19:53:28.350408   18382 config.go:182] Loaded profile config "addons-153447": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 19:53:28.350415   18382 host.go:66] Checking if "addons-153447" exists ...
	I1204 19:53:28.350423   18382 addons.go:69] Setting storage-provisioner=true in profile "addons-153447"
	I1204 19:53:28.350437   18382 addons.go:69] Setting ingress-dns=true in profile "addons-153447"
	I1204 19:53:28.350441   18382 addons.go:234] Setting addon storage-provisioner=true in "addons-153447"
	I1204 19:53:28.350415   18382 host.go:66] Checking if "addons-153447" exists ...
	I1204 19:53:28.350449   18382 addons.go:234] Setting addon ingress-dns=true in "addons-153447"
	I1204 19:53:28.350461   18382 host.go:66] Checking if "addons-153447" exists ...
	I1204 19:53:28.350464   18382 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-153447"
	I1204 19:53:28.350463   18382 addons.go:69] Setting metrics-server=true in profile "addons-153447"
	I1204 19:53:28.350488   18382 host.go:66] Checking if "addons-153447" exists ...
	I1204 19:53:28.350496   18382 addons.go:234] Setting addon metrics-server=true in "addons-153447"
	I1204 19:53:28.350518   18382 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-153447"
	I1204 19:53:28.350528   18382 host.go:66] Checking if "addons-153447" exists ...
	I1204 19:53:28.350544   18382 host.go:66] Checking if "addons-153447" exists ...
	I1204 19:53:28.350558   18382 addons.go:69] Setting default-storageclass=true in profile "addons-153447"
	I1204 19:53:28.350571   18382 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-153447"
	I1204 19:53:28.350866   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.350867   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.350889   18382 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-153447"
	I1204 19:53:28.350890   18382 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-153447"
	I1204 19:53:28.350896   18382 addons.go:69] Setting inspektor-gadget=true in profile "addons-153447"
	I1204 19:53:28.350902   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.350902   18382 addons.go:69] Setting gcp-auth=true in profile "addons-153447"
	I1204 19:53:28.350907   18382 addons.go:69] Setting volcano=true in profile "addons-153447"
	I1204 19:53:28.350913   18382 addons.go:234] Setting addon inspektor-gadget=true in "addons-153447"
	I1204 19:53:28.350912   18382 addons.go:69] Setting cloud-spanner=true in profile "addons-153447"
	I1204 19:53:28.350919   18382 addons.go:234] Setting addon volcano=true in "addons-153447"
	I1204 19:53:28.350920   18382 mustload.go:65] Loading cluster: addons-153447
	I1204 19:53:28.350925   18382 addons.go:234] Setting addon cloud-spanner=true in "addons-153447"
	I1204 19:53:28.350934   18382 host.go:66] Checking if "addons-153447" exists ...
	I1204 19:53:28.350939   18382 host.go:66] Checking if "addons-153447" exists ...
	I1204 19:53:28.350940   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.350944   18382 host.go:66] Checking if "addons-153447" exists ...
	I1204 19:53:28.351141   18382 config.go:182] Loaded profile config "addons-153447": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 19:53:28.351278   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.351315   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.351355   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.351355   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.351408   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.351409   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.351507   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.351537   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.351571   18382 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-153447"
	I1204 19:53:28.351603   18382 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-153447"
	I1204 19:53:28.351636   18382 host.go:66] Checking if "addons-153447" exists ...
	I1204 19:53:28.350426   18382 addons.go:234] Setting addon registry=true in "addons-153447"
	I1204 19:53:28.351820   18382 host.go:66] Checking if "addons-153447" exists ...
	I1204 19:53:28.350902   18382 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-153447"
	I1204 19:53:28.352023   18382 host.go:66] Checking if "addons-153447" exists ...
	I1204 19:53:28.352053   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.352084   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.352184   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.352206   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.350900   18382 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-153447"
	I1204 19:53:28.352614   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.352619   18382 addons.go:69] Setting volumesnapshots=true in profile "addons-153447"
	I1204 19:53:28.352635   18382 addons.go:234] Setting addon volumesnapshots=true in "addons-153447"
	I1204 19:53:28.352643   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.352659   18382 host.go:66] Checking if "addons-153447" exists ...
	I1204 19:53:28.350919   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.352744   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.352783   18382 out.go:177] * Verifying Kubernetes components...
	I1204 19:53:28.352940   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.352969   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.353372   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.353401   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.353515   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.353584   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.354199   18382 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 19:53:28.371893   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39545
	I1204 19:53:28.372142   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44757
	I1204 19:53:28.372373   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39623
	I1204 19:53:28.372491   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36529
	I1204 19:53:28.372569   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.373502   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.373596   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.373629   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.373650   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.373662   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.373734   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33645
	I1204 19:53:28.374331   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.374349   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.374487   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.374500   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.374561   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.374632   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.374811   18382 main.go:141] libmachine: (addons-153447) Calling .GetState
	I1204 19:53:28.374880   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.374934   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37845
	I1204 19:53:28.375112   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.375124   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.375608   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.375642   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.375758   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.375768   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.375820   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.376366   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.376377   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.376849   18382 host.go:66] Checking if "addons-153447" exists ...
	I1204 19:53:28.376851   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.377065   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.377086   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.379832   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.379873   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.379920   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.379951   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.380401   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.380436   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.382419   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.382454   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.383144   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.383181   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.383874   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.383956   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39701
	I1204 19:53:28.384491   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.384527   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.379836   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.384748   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.384749   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.385784   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.385801   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.386145   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.386636   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.386667   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.401284   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39771
	I1204 19:53:28.401844   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.402320   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.402340   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.402728   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.402932   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:28.421803   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44835
	I1204 19:53:28.422017   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33023
	I1204 19:53:28.422736   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.423288   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.423308   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.423687   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37883
	I1204 19:53:28.423813   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.423911   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42525
	I1204 19:53:28.424058   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43575
	I1204 19:53:28.424258   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.424366   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.424940   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.424959   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.425027   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.425178   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.425195   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.425265   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36015
	I1204 19:53:28.425416   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40895
	I1204 19:53:28.425548   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.425626   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.425824   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.425954   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.425962   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.426273   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42161
	I1204 19:53:28.426440   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.426455   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.426463   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.426477   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.426522   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.426862   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.426902   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.427081   18382 main.go:141] libmachine: (addons-153447) Calling .GetState
	I1204 19:53:28.427126   18382 main.go:141] libmachine: (addons-153447) Calling .GetState
	I1204 19:53:28.427168   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.427198   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.427239   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.427585   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.427612   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.427832   18382 main.go:141] libmachine: (addons-153447) Calling .GetState
	I1204 19:53:28.427841   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.427883   18382 main.go:141] libmachine: (addons-153447) Calling .GetState
	I1204 19:53:28.427928   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38211
	I1204 19:53:28.428083   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.428112   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.429008   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.429083   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.429099   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.429443   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.429459   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.429999   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.430013   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.430583   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.430605   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.430621   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.430635   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.430722   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:28.431169   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39967
	I1204 19:53:28.431707   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:28.432165   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.432347   18382 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-153447"
	I1204 19:53:28.432383   18382 host.go:66] Checking if "addons-153447" exists ...
	I1204 19:53:28.432597   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.432621   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.433015   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:28.433166   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.433186   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.433528   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.433620   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39717
	I1204 19:53:28.433763   18382 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1204 19:53:28.434053   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.434086   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.434605   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.435116   18382 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1204 19:53:28.435293   18382 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1204 19:53:28.435305   18382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1204 19:53:28.435324   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:28.435388   18382 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1204 19:53:28.436158   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.436176   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.436674   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.436706   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.437207   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.437227   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.437427   18382 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1204 19:53:28.437442   18382 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1204 19:53:28.437467   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:28.438788   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.438821   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.438829   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.438938   18382 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1204 19:53:28.439321   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:28.439341   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.440624   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:28.440803   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:28.440911   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:28.441023   18382 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa Username:docker}
	I1204 19:53:28.441399   18382 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1204 19:53:28.441887   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.442606   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:28.442628   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.442846   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:28.443041   18382 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1204 19:53:28.443068   18382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1204 19:53:28.443088   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:28.443049   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:28.443725   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.443839   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:28.444003   18382 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa Username:docker}
	I1204 19:53:28.444271   18382 main.go:141] libmachine: (addons-153447) Calling .GetState
	I1204 19:53:28.446859   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.447505   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:28.447536   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.447693   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44343
	I1204 19:53:28.447870   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:28.448051   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:28.448211   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:28.448352   18382 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa Username:docker}
	I1204 19:53:28.449016   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.449808   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.449825   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.450226   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.450379   18382 main.go:141] libmachine: (addons-153447) Calling .GetState
	I1204 19:53:28.452213   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:28.452547   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:28.452898   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:28.452909   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:28.454163   18382 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1204 19:53:28.454617   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:28.454648   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:28.454655   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:28.454667   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:28.454674   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:28.454906   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:28.454922   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	W1204 19:53:28.454997   18382 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1204 19:53:28.455328   18382 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1204 19:53:28.455348   18382 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1204 19:53:28.455393   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:28.456159   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46187
	I1204 19:53:28.456640   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.457083   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.457100   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.457413   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.457903   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.457935   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.459315   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.459799   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:28.459820   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.459989   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:28.460201   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:28.460352   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:28.460491   18382 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa Username:docker}
	I1204 19:53:28.466520   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34167
	I1204 19:53:28.467003   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.467534   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.467550   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.467975   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.468137   18382 main.go:141] libmachine: (addons-153447) Calling .GetState
	I1204 19:53:28.469373   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39091
	I1204 19:53:28.469698   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.470723   18382 addons.go:234] Setting addon default-storageclass=true in "addons-153447"
	I1204 19:53:28.470759   18382 host.go:66] Checking if "addons-153447" exists ...
	I1204 19:53:28.471140   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.471189   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.471494   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46801
	I1204 19:53:28.471967   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.472438   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.472464   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.472623   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42053
	I1204 19:53:28.472890   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.472906   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.472980   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.473240   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.473306   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.473442   18382 main.go:141] libmachine: (addons-153447) Calling .GetState
	I1204 19:53:28.473523   18382 main.go:141] libmachine: (addons-153447) Calling .GetState
	I1204 19:53:28.474507   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.474527   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.474927   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.475129   18382 main.go:141] libmachine: (addons-153447) Calling .GetState
	I1204 19:53:28.475186   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:28.476013   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:28.477612   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:28.477805   18382 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1204 19:53:28.477993   18382 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1204 19:53:28.478941   18382 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1204 19:53:28.478960   18382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1204 19:53:28.478978   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:28.479058   18382 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1204 19:53:28.479155   18382 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1204 19:53:28.479166   18382 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1204 19:53:28.479183   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:28.480755   18382 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1204 19:53:28.480773   18382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1204 19:53:28.480790   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:28.482139   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45775
	I1204 19:53:28.482394   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36691
	I1204 19:53:28.482799   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.482841   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.483016   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.483245   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.483268   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.483425   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.483438   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.483505   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:28.483523   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.483555   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.483623   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.483743   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:28.483817   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.483864   18382 main.go:141] libmachine: (addons-153447) Calling .GetState
	I1204 19:53:28.483907   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:28.484030   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:28.484141   18382 main.go:141] libmachine: (addons-153447) Calling .GetState
	I1204 19:53:28.484225   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:28.484245   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.484271   18382 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa Username:docker}
	I1204 19:53:28.484535   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:28.484782   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:28.484928   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:28.485059   18382 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa Username:docker}
	I1204 19:53:28.485927   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:28.486309   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.486331   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37351
	I1204 19:53:28.486793   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:28.486829   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.487088   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:28.487249   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:28.487316   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.487521   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:28.487538   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:28.487709   18382 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1204 19:53:28.487874   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42811
	I1204 19:53:28.487699   18382 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa Username:docker}
	I1204 19:53:28.488508   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.488664   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.488682   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.488988   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.488993   18382 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1204 19:53:28.489093   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.489109   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.489253   18382 main.go:141] libmachine: (addons-153447) Calling .GetState
	I1204 19:53:28.489539   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.489749   18382 main.go:141] libmachine: (addons-153447) Calling .GetState
	I1204 19:53:28.490026   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46401
	I1204 19:53:28.490098   18382 out.go:177]   - Using image docker.io/registry:2.8.3
	I1204 19:53:28.490419   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40031
	I1204 19:53:28.490669   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.491054   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:28.491211   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.491223   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.491290   18382 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1204 19:53:28.491355   18382 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1204 19:53:28.491464   18382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1204 19:53:28.491488   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:28.491408   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.492468   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.492661   18382 main.go:141] libmachine: (addons-153447) Calling .GetState
	I1204 19:53:28.492702   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.493403   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.493530   18382 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1204 19:53:28.493733   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.493780   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:28.494296   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.494332   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.494515   18382 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1204 19:53:28.495199   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:28.495676   18382 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 19:53:28.495677   18382 out.go:177]   - Using image docker.io/busybox:stable
	I1204 19:53:28.496447   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.496489   18382 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1204 19:53:28.497186   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:28.497209   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.497221   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:28.497269   18382 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1204 19:53:28.497808   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:28.497876   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39403
	I1204 19:53:28.497362   18382 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 19:53:28.498014   18382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 19:53:28.498028   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:28.497470   18382 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1204 19:53:28.498065   18382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1204 19:53:28.498076   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:28.498120   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:28.498151   18382 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1204 19:53:28.498166   18382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1204 19:53:28.498185   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:28.499117   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.499232   18382 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa Username:docker}
	I1204 19:53:28.499727   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.499747   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.500192   18382 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1204 19:53:28.500531   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.501188   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.501236   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.502177   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.502668   18382 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1204 19:53:28.503022   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.503537   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.503582   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:28.503621   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.503762   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:28.503976   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:28.504063   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:28.504081   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.504150   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:28.504286   18382 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa Username:docker}
	I1204 19:53:28.504313   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:28.504469   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:28.504581   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:28.504611   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:28.504631   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.504797   18382 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa Username:docker}
	I1204 19:53:28.504841   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:28.505047   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:28.505129   18382 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1204 19:53:28.505230   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:28.505412   18382 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa Username:docker}
	I1204 19:53:28.507494   18382 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1204 19:53:28.508625   18382 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1204 19:53:28.508646   18382 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1204 19:53:28.508664   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:28.511086   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.511454   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:28.511472   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.511658   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:28.511811   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:28.512002   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:28.512087   18382 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa Username:docker}
	I1204 19:53:28.517756   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35395
	I1204 19:53:28.518229   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.518712   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.518736   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.519078   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.519272   18382 main.go:141] libmachine: (addons-153447) Calling .GetState
	I1204 19:53:28.521101   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:28.521344   18382 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 19:53:28.521362   18382 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 19:53:28.521378   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:28.523994   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38315
	I1204 19:53:28.524368   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.524468   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.524901   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:28.524925   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.525068   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.525082   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.525339   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:28.525398   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.525502   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:28.525706   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:28.525745   18382 main.go:141] libmachine: (addons-153447) Calling .GetState
	I1204 19:53:28.525905   18382 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa Username:docker}
	I1204 19:53:28.527025   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:28.528589   18382 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1204 19:53:28.529637   18382 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1204 19:53:28.529654   18382 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1204 19:53:28.529672   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:28.532265   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.532655   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:28.532675   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.532827   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:28.532960   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:28.533069   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:28.533183   18382 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa Username:docker}
	I1204 19:53:28.791136   18382 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 19:53:28.791356   18382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1204 19:53:28.836263   18382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1204 19:53:28.877959   18382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1204 19:53:28.889693   18382 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1204 19:53:28.889730   18382 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1204 19:53:28.898447   18382 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1204 19:53:28.898474   18382 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1204 19:53:28.916551   18382 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1204 19:53:28.916580   18382 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1204 19:53:28.934017   18382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1204 19:53:28.937227   18382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1204 19:53:28.938167   18382 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1204 19:53:28.938183   18382 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1204 19:53:28.969198   18382 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1204 19:53:28.969226   18382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14451 bytes)
	I1204 19:53:28.974617   18382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1204 19:53:28.992706   18382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1204 19:53:29.024371   18382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 19:53:29.057413   18382 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1204 19:53:29.057443   18382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1204 19:53:29.067432   18382 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1204 19:53:29.067457   18382 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1204 19:53:29.079146   18382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 19:53:29.090706   18382 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1204 19:53:29.090733   18382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1204 19:53:29.112878   18382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1204 19:53:29.127933   18382 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1204 19:53:29.127960   18382 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1204 19:53:29.152692   18382 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1204 19:53:29.152720   18382 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1204 19:53:29.299978   18382 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1204 19:53:29.300004   18382 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1204 19:53:29.310519   18382 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1204 19:53:29.310539   18382 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1204 19:53:29.330201   18382 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1204 19:53:29.330229   18382 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1204 19:53:29.384443   18382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1204 19:53:29.442873   18382 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1204 19:53:29.442902   18382 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1204 19:53:29.485146   18382 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 19:53:29.485175   18382 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1204 19:53:29.542323   18382 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1204 19:53:29.542348   18382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1204 19:53:29.561483   18382 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1204 19:53:29.561509   18382 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1204 19:53:29.629813   18382 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1204 19:53:29.629837   18382 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1204 19:53:29.689896   18382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 19:53:29.741139   18382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1204 19:53:29.800463   18382 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1204 19:53:29.800499   18382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1204 19:53:29.938201   18382 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1204 19:53:29.938230   18382 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1204 19:53:30.097846   18382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1204 19:53:30.279056   18382 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1204 19:53:30.279080   18382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1204 19:53:30.466114   18382 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1204 19:53:30.466143   18382 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1204 19:53:30.570343   18382 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1204 19:53:30.570368   18382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1204 19:53:30.907114   18382 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1204 19:53:30.907158   18382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1204 19:53:31.063392   18382 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.271962694s)
	I1204 19:53:31.063429   18382 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1204 19:53:31.063419   18382 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.272247284s)
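The completed sed pipeline above rewrites the coredns ConfigMap in kube-system so that host.minikube.internal resolves to the host-side address 192.168.39.1 and adds the log plugin ahead of errors. A minimal check of the injected stanza, run from the guest with the same binary and kubeconfig paths that appear in the log, would look roughly like:

	sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'
	# expected stanza, inserted ahead of the "forward . /etc/resolv.conf" line by the sed edit above:
	#        hosts {
	#           192.168.39.1 host.minikube.internal
	#           fallthrough
	#        }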
	I1204 19:53:31.064120   18382 node_ready.go:35] waiting up to 6m0s for node "addons-153447" to be "Ready" ...
	I1204 19:53:31.070802   18382 node_ready.go:49] node "addons-153447" has status "Ready":"True"
	I1204 19:53:31.070825   18382 node_ready.go:38] duration metric: took 6.686231ms for node "addons-153447" to be "Ready" ...
	I1204 19:53:31.070834   18382 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 19:53:31.083328   18382 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-7r8d9" in "kube-system" namespace to be "Ready" ...
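node_ready.go and pod_ready.go above poll the API server until the node and the listed system-critical pods report Ready; here the node was already Ready after 6.686231ms, while the pod wait continues below. Roughly the same checks with plain kubectl, as a sketch using one of the label selectors listed in the log:

	kubectl --context addons-153447 wait --for=condition=Ready node/addons-153447 --timeout=6m
	kubectl --context addons-153447 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m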
	I1204 19:53:31.225222   18382 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1204 19:53:31.225253   18382 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1204 19:53:31.434331   18382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1204 19:53:31.598569   18382 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-153447" context rescaled to 1 replicas
	I1204 19:53:31.929357   18382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.093052264s)
	I1204 19:53:31.929395   18382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.051400582s)
	I1204 19:53:31.929419   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:31.929433   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:31.929433   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:31.929448   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:31.929829   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:31.929868   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:31.929831   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:31.929890   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:31.929894   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:31.929903   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:31.929915   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:31.929929   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:31.929916   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:31.929991   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:31.930199   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:31.930211   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:31.931473   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:31.931481   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:31.931487   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:32.464063   18382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.530007938s)
	I1204 19:53:32.464101   18382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.526841327s)
	I1204 19:53:32.464114   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:32.464126   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:32.464135   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:32.464169   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:32.464504   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:32.464512   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:32.464517   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:32.464523   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:32.464530   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:32.464531   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:32.464539   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:32.464547   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:32.464533   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:32.464616   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:32.464963   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:32.464998   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:32.465005   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:32.465040   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:32.465057   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:33.107164   18382 pod_ready.go:103] pod "amd-gpu-device-plugin-7r8d9" in "kube-system" namespace has status "Ready":"False"
	I1204 19:53:34.519468   18382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.544815218s)
	I1204 19:53:34.519522   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:34.519535   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:34.519783   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:34.519832   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:34.519845   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:34.519855   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:34.519864   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:34.520136   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:34.520193   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:34.632436   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:34.632464   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:34.632830   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:34.632851   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:34.632852   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:35.146477   18382 pod_ready.go:103] pod "amd-gpu-device-plugin-7r8d9" in "kube-system" namespace has status "Ready":"False"
	I1204 19:53:35.440551   18382 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1204 19:53:35.440590   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:35.443839   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:35.444235   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:35.444264   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:35.444492   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:35.444694   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:35.444842   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:35.444964   18382 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa Username:docker}
	I1204 19:53:36.029037   18382 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1204 19:53:36.231598   18382 addons.go:234] Setting addon gcp-auth=true in "addons-153447"
	I1204 19:53:36.231654   18382 host.go:66] Checking if "addons-153447" exists ...
	I1204 19:53:36.232071   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:36.232129   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:36.247806   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36607
	I1204 19:53:36.248173   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:36.248640   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:36.248657   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:36.248932   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:36.249416   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:36.249451   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:36.264438   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35069
	I1204 19:53:36.264862   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:36.265398   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:36.265427   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:36.265755   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:36.265938   18382 main.go:141] libmachine: (addons-153447) Calling .GetState
	I1204 19:53:36.267605   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:36.267848   18382 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1204 19:53:36.267871   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:36.271328   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:36.271858   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:36.271887   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:36.272083   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:36.272317   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:36.272497   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:36.272645   18382 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa Username:docker}
	I1204 19:53:37.182661   18382 pod_ready.go:103] pod "amd-gpu-device-plugin-7r8d9" in "kube-system" namespace has status "Ready":"False"
	I1204 19:53:37.274470   18382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.281725927s)
	I1204 19:53:37.274525   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:37.274537   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:37.274533   18382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.25012456s)
	I1204 19:53:37.274572   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:37.274590   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:37.274590   18382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.195410937s)
	I1204 19:53:37.274620   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:37.274637   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:37.274670   18382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (8.161761122s)
	I1204 19:53:37.274701   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:37.274715   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:37.274731   18382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.890244642s)
	I1204 19:53:37.274755   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:37.274771   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:37.274833   18382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.584905888s)
	I1204 19:53:37.274855   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:37.274865   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:37.274951   18382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.533776535s)
	I1204 19:53:37.274971   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:37.274981   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:37.275105   18382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.177211394s)
	W1204 19:53:37.275135   18382 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1204 19:53:37.275179   18382 retry.go:31] will retry after 231.369537ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
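The failure and retry above are the usual CRD-ordering race: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass, but it is applied in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, so the first apply exits with "no matches for kind" and minikube retries after 231ms (and, at 19:53:37.507358 below, re-applies with --force). One way to sidestep the race by hand is to wait for the CRD to reach its Established condition before applying the custom resource; a sketch using the file and CRD names from the log:

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml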
	I1204 19:53:37.275541   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:37.275572   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:37.275579   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:37.275587   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:37.275593   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:37.275877   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:37.275903   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:37.275910   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:37.275927   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:37.275933   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:37.276910   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:37.276926   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:37.276935   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:37.276939   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:37.276945   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:37.276949   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:37.276956   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:37.276963   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:37.276970   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:37.276914   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:37.276983   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:37.276991   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:37.276999   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:37.277125   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:37.277167   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:37.277186   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:37.277193   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:37.277201   18382 addons.go:475] Verifying addon metrics-server=true in "addons-153447"
	I1204 19:53:37.277245   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:37.277266   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:37.277272   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:37.277432   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:37.277443   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:37.277456   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:37.277485   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:37.277493   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:37.277628   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:37.277654   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:37.277661   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:37.277667   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:37.277673   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:37.277719   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:37.277744   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:37.277753   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:37.277761   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:37.277767   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:37.277831   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:37.277900   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:37.277950   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:37.277956   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:37.277964   18382 addons.go:475] Verifying addon registry=true in "addons-153447"
	I1204 19:53:37.277975   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:37.277986   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:37.277994   18382 addons.go:475] Verifying addon ingress=true in "addons-153447"
	I1204 19:53:37.278219   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:37.278242   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:37.278249   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:37.279985   18382 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-153447 service yakd-dashboard -n yakd-dashboard
	
	I1204 19:53:37.279998   18382 out.go:177] * Verifying registry addon...
	I1204 19:53:37.280942   18382 out.go:177] * Verifying ingress addon...
	I1204 19:53:37.282404   18382 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1204 19:53:37.283342   18382 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1204 19:53:37.309800   18382 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1204 19:53:37.309821   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:37.316903   18382 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1204 19:53:37.316930   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
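The "Verifying registry addon" and "Verifying ingress addon" loops above poll pods by label selector until they leave Pending. The same state can be inspected outside the test harness with the selectors and namespaces shown in the log; a sketch:

	kubectl --context addons-153447 -n kube-system get pods -l kubernetes.io/minikube-addons=registry
	kubectl --context addons-153447 -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx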
	I1204 19:53:37.340004   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:37.340026   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:37.340316   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:37.340333   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:37.340366   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:37.507358   18382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1204 19:53:37.794641   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:37.794641   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:38.291294   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:38.291550   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:38.580607   18382 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.312733098s)
	I1204 19:53:38.580624   18382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.146246374s)
	I1204 19:53:38.580671   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:38.580687   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:38.580966   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:38.581016   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:38.581024   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:38.581036   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:38.581055   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:38.581295   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:38.581315   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:38.581328   18382 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-153447"
	I1204 19:53:38.582309   18382 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1204 19:53:38.583207   18382 out.go:177] * Verifying csi-hostpath-driver addon...
	I1204 19:53:38.584718   18382 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1204 19:53:38.585668   18382 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1204 19:53:38.585825   18382 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1204 19:53:38.585842   18382 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1204 19:53:38.590761   18382 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1204 19:53:38.590783   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:38.692452   18382 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1204 19:53:38.692483   18382 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1204 19:53:38.713516   18382 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1204 19:53:38.713543   18382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1204 19:53:38.732720   18382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1204 19:53:39.078583   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:39.078997   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:39.101007   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:39.287624   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:39.287894   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:39.505167   18382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.997747672s)
	I1204 19:53:39.505215   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:39.505247   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:39.505513   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:39.505565   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:39.505580   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:39.505596   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:39.505609   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:39.505813   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:39.505829   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:39.590974   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:39.609504   18382 pod_ready.go:103] pod "amd-gpu-device-plugin-7r8d9" in "kube-system" namespace has status "Ready":"False"
	I1204 19:53:39.712750   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:39.712773   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:39.713043   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:39.713088   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:39.713110   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:39.713136   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:39.713145   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:39.713390   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:39.713407   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:39.714283   18382 addons.go:475] Verifying addon gcp-auth=true in "addons-153447"
	I1204 19:53:39.715755   18382 out.go:177] * Verifying gcp-auth addon...
	I1204 19:53:39.717529   18382 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1204 19:53:39.725153   18382 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1204 19:53:39.725183   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:39.792045   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:39.792342   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:40.093092   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:40.221453   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:40.289727   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:40.290792   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:40.597906   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:40.726081   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:40.787615   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:40.787866   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:41.091556   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:41.221152   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:41.287411   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:41.287990   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:41.592130   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:41.721074   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:41.786508   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:41.787790   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:42.090803   18382 pod_ready.go:103] pod "amd-gpu-device-plugin-7r8d9" in "kube-system" namespace has status "Ready":"False"
	I1204 19:53:42.091086   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:42.221132   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:42.286683   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:42.288900   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:42.595761   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:42.720481   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:42.787536   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:42.787823   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:43.092982   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:43.221683   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:43.287085   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:43.287643   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:43.591288   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:43.721509   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:43.787451   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:43.788093   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:44.099453   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:44.100509   18382 pod_ready.go:103] pod "amd-gpu-device-plugin-7r8d9" in "kube-system" namespace has status "Ready":"False"
	I1204 19:53:44.221826   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:44.285844   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:44.287028   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:44.599351   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:44.721065   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:44.788303   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:44.791732   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:45.092444   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:45.221951   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:45.288145   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:45.289113   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:45.589541   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:45.721219   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:45.786380   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:45.787955   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:46.093922   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:46.221524   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:46.286258   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:46.289315   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:46.590042   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:46.590231   18382 pod_ready.go:103] pod "amd-gpu-device-plugin-7r8d9" in "kube-system" namespace has status "Ready":"False"
	I1204 19:53:46.721376   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:46.787242   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:46.788037   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:47.090896   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:47.222253   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:47.286820   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:47.287353   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:47.590604   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:47.720855   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:47.785745   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:47.787650   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:48.091893   18382 pod_ready.go:93] pod "amd-gpu-device-plugin-7r8d9" in "kube-system" namespace has status "Ready":"True"
	I1204 19:53:48.091918   18382 pod_ready.go:82] duration metric: took 17.008558173s for pod "amd-gpu-device-plugin-7r8d9" in "kube-system" namespace to be "Ready" ...
	I1204 19:53:48.091931   18382 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mmw65" in "kube-system" namespace to be "Ready" ...
	I1204 19:53:48.092225   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:48.094112   18382 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-mmw65" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-mmw65" not found
	I1204 19:53:48.094137   18382 pod_ready.go:82] duration metric: took 2.198228ms for pod "coredns-7c65d6cfc9-mmw65" in "kube-system" namespace to be "Ready" ...
	E1204 19:53:48.094153   18382 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-mmw65" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-mmw65" not found
	I1204 19:53:48.094162   18382 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mq69t" in "kube-system" namespace to be "Ready" ...
	I1204 19:53:48.221897   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:48.286495   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:48.288634   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:48.591319   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:48.720855   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:48.785860   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:48.788376   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:49.160677   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:49.220967   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:49.285987   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:49.288111   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:49.590795   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:49.721810   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:49.785939   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:49.787561   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:50.091620   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:50.102768   18382 pod_ready.go:103] pod "coredns-7c65d6cfc9-mq69t" in "kube-system" namespace has status "Ready":"False"
	I1204 19:53:50.222827   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:50.285638   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:50.287912   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:50.590452   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:50.721499   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:50.788050   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:50.788663   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:51.090149   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:51.221187   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:51.286451   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:51.288997   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:51.590818   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:51.720673   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:51.785842   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:51.788614   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:52.091049   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:52.221907   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:52.285848   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:52.288067   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:52.590755   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:52.600325   18382 pod_ready.go:103] pod "coredns-7c65d6cfc9-mq69t" in "kube-system" namespace has status "Ready":"False"
	I1204 19:53:52.721141   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:52.785986   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:52.790868   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:53.090712   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:53.220484   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:53.287364   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:53.289743   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:53.591164   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:53.721130   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:53.787245   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:53.789514   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:54.090554   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:54.223115   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:54.289581   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:54.289738   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:54.590997   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:54.601032   18382 pod_ready.go:103] pod "coredns-7c65d6cfc9-mq69t" in "kube-system" namespace has status "Ready":"False"
	I1204 19:53:54.721208   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:54.787299   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:54.787356   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:55.090725   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:55.223010   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:55.288330   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:55.290916   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:55.591687   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:55.721597   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:55.787949   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:55.788445   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:56.089821   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:56.220633   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:56.333348   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:56.334066   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:56.590455   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:56.601421   18382 pod_ready.go:103] pod "coredns-7c65d6cfc9-mq69t" in "kube-system" namespace has status "Ready":"False"
	I1204 19:53:56.721524   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:56.787127   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:56.787683   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:57.090476   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:57.222842   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:57.286327   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:57.287623   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:57.589974   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:57.721708   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:57.785931   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:57.787427   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:58.090291   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:58.223710   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:58.285821   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:58.290468   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:58.592236   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:58.601633   18382 pod_ready.go:103] pod "coredns-7c65d6cfc9-mq69t" in "kube-system" namespace has status "Ready":"False"
	I1204 19:53:58.721346   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:58.786500   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:58.788053   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:59.090968   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:59.220884   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:59.286057   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:59.287538   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:59.590960   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:59.721710   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:59.785844   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:59.788673   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:00.090723   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:00.221780   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:00.287893   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:00.289481   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:00.590392   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:00.720361   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:00.786496   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:00.786816   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:01.090941   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:01.100053   18382 pod_ready.go:103] pod "coredns-7c65d6cfc9-mq69t" in "kube-system" namespace has status "Ready":"False"
	I1204 19:54:01.221130   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:01.287209   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:01.288152   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:01.590697   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:01.720602   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:01.787849   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:01.788102   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:02.092155   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:02.222260   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:02.286374   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:02.287723   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:02.592905   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:02.721156   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:02.786574   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:02.787525   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:03.090973   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:03.221261   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:03.286734   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:03.286856   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:03.590756   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:03.599636   18382 pod_ready.go:103] pod "coredns-7c65d6cfc9-mq69t" in "kube-system" namespace has status "Ready":"False"
	I1204 19:54:03.721758   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:03.785672   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:03.788676   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:04.092159   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:04.221037   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:04.286639   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:04.288086   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:04.591562   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:04.721488   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:04.787493   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:04.787804   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:05.092011   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:05.222016   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:05.286019   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:05.287689   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:05.590663   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:05.600741   18382 pod_ready.go:103] pod "coredns-7c65d6cfc9-mq69t" in "kube-system" namespace has status "Ready":"False"
	I1204 19:54:05.722028   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:05.786504   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:05.788246   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:06.090143   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:06.220977   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:06.286730   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:06.287080   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:06.970225   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:06.970378   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:06.971228   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:06.971320   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:07.091025   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:07.221228   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:07.287258   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:07.288203   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:07.591145   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:07.721741   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:07.786445   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:07.787160   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:08.092369   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:08.100015   18382 pod_ready.go:103] pod "coredns-7c65d6cfc9-mq69t" in "kube-system" namespace has status "Ready":"False"
	I1204 19:54:08.221217   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:08.286361   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:08.287695   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:08.590923   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:08.721431   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:08.787421   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:08.787702   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:09.090704   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:09.220799   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:09.287084   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:09.288233   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:09.590314   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:09.721414   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:09.786454   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:09.788167   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:10.090501   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:10.100077   18382 pod_ready.go:103] pod "coredns-7c65d6cfc9-mq69t" in "kube-system" namespace has status "Ready":"False"
	I1204 19:54:10.220898   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:10.286143   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:10.287835   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:10.590567   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:10.723754   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:10.827207   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:10.827613   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:11.092813   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:11.221861   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:11.289113   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:11.289274   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:11.591107   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:11.722174   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:11.786439   18382 kapi.go:107] duration metric: took 34.50403163s to wait for kubernetes.io/minikube-addons=registry ...
	I1204 19:54:11.787475   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:12.090154   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:12.221174   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:12.287982   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:12.590833   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:12.599432   18382 pod_ready.go:93] pod "coredns-7c65d6cfc9-mq69t" in "kube-system" namespace has status "Ready":"True"
	I1204 19:54:12.599452   18382 pod_ready.go:82] duration metric: took 24.505278556s for pod "coredns-7c65d6cfc9-mq69t" in "kube-system" namespace to be "Ready" ...
	I1204 19:54:12.599465   18382 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-153447" in "kube-system" namespace to be "Ready" ...
	I1204 19:54:12.607336   18382 pod_ready.go:93] pod "etcd-addons-153447" in "kube-system" namespace has status "Ready":"True"
	I1204 19:54:12.607356   18382 pod_ready.go:82] duration metric: took 7.883774ms for pod "etcd-addons-153447" in "kube-system" namespace to be "Ready" ...
	I1204 19:54:12.607364   18382 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-153447" in "kube-system" namespace to be "Ready" ...
	I1204 19:54:12.612511   18382 pod_ready.go:93] pod "kube-apiserver-addons-153447" in "kube-system" namespace has status "Ready":"True"
	I1204 19:54:12.612539   18382 pod_ready.go:82] duration metric: took 5.167723ms for pod "kube-apiserver-addons-153447" in "kube-system" namespace to be "Ready" ...
	I1204 19:54:12.612552   18382 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-153447" in "kube-system" namespace to be "Ready" ...
	I1204 19:54:12.617426   18382 pod_ready.go:93] pod "kube-controller-manager-addons-153447" in "kube-system" namespace has status "Ready":"True"
	I1204 19:54:12.617451   18382 pod_ready.go:82] duration metric: took 4.890876ms for pod "kube-controller-manager-addons-153447" in "kube-system" namespace to be "Ready" ...
	I1204 19:54:12.617465   18382 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zf92b" in "kube-system" namespace to be "Ready" ...
	I1204 19:54:12.621958   18382 pod_ready.go:93] pod "kube-proxy-zf92b" in "kube-system" namespace has status "Ready":"True"
	I1204 19:54:12.621983   18382 pod_ready.go:82] duration metric: took 4.508931ms for pod "kube-proxy-zf92b" in "kube-system" namespace to be "Ready" ...
	I1204 19:54:12.621994   18382 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-153447" in "kube-system" namespace to be "Ready" ...
	I1204 19:54:12.720692   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:12.787986   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:12.997495   18382 pod_ready.go:93] pod "kube-scheduler-addons-153447" in "kube-system" namespace has status "Ready":"True"
	I1204 19:54:12.997521   18382 pod_ready.go:82] duration metric: took 375.518192ms for pod "kube-scheduler-addons-153447" in "kube-system" namespace to be "Ready" ...
	I1204 19:54:12.997534   18382 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-jgz4f" in "kube-system" namespace to be "Ready" ...
	I1204 19:54:13.090950   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:13.221532   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:13.322733   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:13.398260   18382 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-jgz4f" in "kube-system" namespace has status "Ready":"True"
	I1204 19:54:13.398287   18382 pod_ready.go:82] duration metric: took 400.746258ms for pod "nvidia-device-plugin-daemonset-jgz4f" in "kube-system" namespace to be "Ready" ...
	I1204 19:54:13.398295   18382 pod_ready.go:39] duration metric: took 42.327451842s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 19:54:13.398311   18382 api_server.go:52] waiting for apiserver process to appear ...
	I1204 19:54:13.398368   18382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 19:54:13.414944   18382 api_server.go:72] duration metric: took 45.064720695s to wait for apiserver process to appear ...
	I1204 19:54:13.414974   18382 api_server.go:88] waiting for apiserver healthz status ...
	I1204 19:54:13.414997   18382 api_server.go:253] Checking apiserver healthz at https://192.168.39.11:8443/healthz ...
	I1204 19:54:13.418912   18382 api_server.go:279] https://192.168.39.11:8443/healthz returned 200:
	ok
	I1204 19:54:13.419830   18382 api_server.go:141] control plane version: v1.31.2
	I1204 19:54:13.419853   18382 api_server.go:131] duration metric: took 4.870261ms to wait for apiserver health ...
	I1204 19:54:13.419861   18382 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 19:54:13.590267   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:13.604583   18382 system_pods.go:59] 18 kube-system pods found
	I1204 19:54:13.604627   18382 system_pods.go:61] "amd-gpu-device-plugin-7r8d9" [fe74ca1b-56c6-4e61-8ec2-380d38f63b82] Running
	I1204 19:54:13.604635   18382 system_pods.go:61] "coredns-7c65d6cfc9-mq69t" [cc725230-25f4-41a8-8292-110a5d46949e] Running
	I1204 19:54:13.604647   18382 system_pods.go:61] "csi-hostpath-attacher-0" [f75aea48-e36c-4a2a-bce3-111bfd1969e5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1204 19:54:13.604656   18382 system_pods.go:61] "csi-hostpath-resizer-0" [d9731c2f-4a6a-4288-a027-a36c4c6d07e2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1204 19:54:13.604669   18382 system_pods.go:61] "csi-hostpathplugin-n2cqq" [83b3d723-9b62-4978-be37-b785e988c34a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1204 19:54:13.604677   18382 system_pods.go:61] "etcd-addons-153447" [1369d72d-8e1c-479c-88e3-c58557965f52] Running
	I1204 19:54:13.604688   18382 system_pods.go:61] "kube-apiserver-addons-153447" [03222aee-bd05-4835-9156-7cc8960c9f9e] Running
	I1204 19:54:13.604694   18382 system_pods.go:61] "kube-controller-manager-addons-153447" [1b967963-da0a-4811-be88-38bbbff51d02] Running
	I1204 19:54:13.604700   18382 system_pods.go:61] "kube-ingress-dns-minikube" [6fdb24ab-7096-4556-a232-4d26f7552507] Running
	I1204 19:54:13.604706   18382 system_pods.go:61] "kube-proxy-zf92b" [c194c0d0-590f-41dc-9ca2-83e611918692] Running
	I1204 19:54:13.604713   18382 system_pods.go:61] "kube-scheduler-addons-153447" [3b9516d1-0cbf-4986-b0e2-5dc037fd3bd3] Running
	I1204 19:54:13.604725   18382 system_pods.go:61] "metrics-server-84c5f94fbc-gpnml" [3e5584b2-5c1f-4acb-93d3-614ecdb4794c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 19:54:13.604731   18382 system_pods.go:61] "nvidia-device-plugin-daemonset-jgz4f" [eae62c73-3a4f-42eb-baac-f18cf9160aea] Running
	I1204 19:54:13.604737   18382 system_pods.go:61] "registry-66c9cd494c-z8xlj" [cf078efa-efba-4b9e-a26c-686f93cabca9] Running
	I1204 19:54:13.604742   18382 system_pods.go:61] "registry-proxy-7c5pj" [fed31c9e-468a-4b59-b8f2-1efd30fa0e42] Running
	I1204 19:54:13.604752   18382 system_pods.go:61] "snapshot-controller-56fcc65765-hqgzv" [b68cbc05-af0c-4b3f-906c-57a6bfa5d95a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1204 19:54:13.604761   18382 system_pods.go:61] "snapshot-controller-56fcc65765-vdkgn" [d9ee7ee2-7750-4e15-9453-11fabb300d00] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1204 19:54:13.604768   18382 system_pods.go:61] "storage-provisioner" [fa71a22c-f55d-460d-b2cc-7aa569c3badc] Running
	I1204 19:54:13.604776   18382 system_pods.go:74] duration metric: took 184.908661ms to wait for pod list to return data ...
	I1204 19:54:13.604789   18382 default_sa.go:34] waiting for default service account to be created ...
	I1204 19:54:13.721528   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:13.787884   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:13.796889   18382 default_sa.go:45] found service account: "default"
	I1204 19:54:13.796910   18382 default_sa.go:55] duration metric: took 192.114242ms for default service account to be created ...
	I1204 19:54:13.796921   18382 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 19:54:14.003711   18382 system_pods.go:86] 18 kube-system pods found
	I1204 19:54:14.003737   18382 system_pods.go:89] "amd-gpu-device-plugin-7r8d9" [fe74ca1b-56c6-4e61-8ec2-380d38f63b82] Running
	I1204 19:54:14.003744   18382 system_pods.go:89] "coredns-7c65d6cfc9-mq69t" [cc725230-25f4-41a8-8292-110a5d46949e] Running
	I1204 19:54:14.003750   18382 system_pods.go:89] "csi-hostpath-attacher-0" [f75aea48-e36c-4a2a-bce3-111bfd1969e5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1204 19:54:14.003759   18382 system_pods.go:89] "csi-hostpath-resizer-0" [d9731c2f-4a6a-4288-a027-a36c4c6d07e2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1204 19:54:14.003766   18382 system_pods.go:89] "csi-hostpathplugin-n2cqq" [83b3d723-9b62-4978-be37-b785e988c34a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1204 19:54:14.003770   18382 system_pods.go:89] "etcd-addons-153447" [1369d72d-8e1c-479c-88e3-c58557965f52] Running
	I1204 19:54:14.003774   18382 system_pods.go:89] "kube-apiserver-addons-153447" [03222aee-bd05-4835-9156-7cc8960c9f9e] Running
	I1204 19:54:14.003778   18382 system_pods.go:89] "kube-controller-manager-addons-153447" [1b967963-da0a-4811-be88-38bbbff51d02] Running
	I1204 19:54:14.003782   18382 system_pods.go:89] "kube-ingress-dns-minikube" [6fdb24ab-7096-4556-a232-4d26f7552507] Running
	I1204 19:54:14.003785   18382 system_pods.go:89] "kube-proxy-zf92b" [c194c0d0-590f-41dc-9ca2-83e611918692] Running
	I1204 19:54:14.003791   18382 system_pods.go:89] "kube-scheduler-addons-153447" [3b9516d1-0cbf-4986-b0e2-5dc037fd3bd3] Running
	I1204 19:54:14.003797   18382 system_pods.go:89] "metrics-server-84c5f94fbc-gpnml" [3e5584b2-5c1f-4acb-93d3-614ecdb4794c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 19:54:14.003805   18382 system_pods.go:89] "nvidia-device-plugin-daemonset-jgz4f" [eae62c73-3a4f-42eb-baac-f18cf9160aea] Running
	I1204 19:54:14.003809   18382 system_pods.go:89] "registry-66c9cd494c-z8xlj" [cf078efa-efba-4b9e-a26c-686f93cabca9] Running
	I1204 19:54:14.003812   18382 system_pods.go:89] "registry-proxy-7c5pj" [fed31c9e-468a-4b59-b8f2-1efd30fa0e42] Running
	I1204 19:54:14.003819   18382 system_pods.go:89] "snapshot-controller-56fcc65765-hqgzv" [b68cbc05-af0c-4b3f-906c-57a6bfa5d95a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1204 19:54:14.003827   18382 system_pods.go:89] "snapshot-controller-56fcc65765-vdkgn" [d9ee7ee2-7750-4e15-9453-11fabb300d00] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1204 19:54:14.003834   18382 system_pods.go:89] "storage-provisioner" [fa71a22c-f55d-460d-b2cc-7aa569c3badc] Running
	I1204 19:54:14.003841   18382 system_pods.go:126] duration metric: took 206.914922ms to wait for k8s-apps to be running ...
	I1204 19:54:14.003848   18382 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 19:54:14.003884   18382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 19:54:14.022199   18382 system_svc.go:56] duration metric: took 18.341786ms WaitForService to wait for kubelet
	I1204 19:54:14.022228   18382 kubeadm.go:582] duration metric: took 45.672008618s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 19:54:14.022251   18382 node_conditions.go:102] verifying NodePressure condition ...
	I1204 19:54:14.090694   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:14.198199   18382 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 19:54:14.198238   18382 node_conditions.go:123] node cpu capacity is 2
	I1204 19:54:14.198255   18382 node_conditions.go:105] duration metric: took 175.998137ms to run NodePressure ...
	I1204 19:54:14.198271   18382 start.go:241] waiting for startup goroutines ...
	I1204 19:54:14.220588   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:14.287177   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:14.590647   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:14.720613   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:14.789593   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:15.092345   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:15.221835   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:15.287287   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:15.591243   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:15.723305   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:15.788538   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:16.091230   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:16.221036   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:16.289108   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:16.590617   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:16.720820   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:16.787472   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:17.092155   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:17.221205   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:17.287524   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:17.883687   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:17.884015   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:17.884126   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:18.090709   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:18.220786   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:18.287148   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:18.589936   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:18.721519   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:18.788626   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:19.089905   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:19.221394   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:19.287939   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:19.591521   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:19.721398   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:19.788262   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:20.090970   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:20.221222   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:20.290582   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:20.735591   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:20.736049   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:20.788025   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:21.091226   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:21.221530   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:21.288372   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:21.592293   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:21.721663   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:21.787926   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:22.091721   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:22.220975   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:22.288290   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:22.590866   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:22.721283   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:22.787943   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:23.090919   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:23.221227   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:23.287682   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:23.590113   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:23.721640   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:23.787018   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:24.091082   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:24.221493   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:24.290503   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:24.594763   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:24.720821   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:25.019311   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:25.119869   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:25.221136   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:25.287622   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:25.590446   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:25.721853   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:25.787486   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:26.090758   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:26.221861   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:26.287684   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:26.590291   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:26.721748   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:26.787550   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:27.090421   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:27.221268   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:27.287936   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:27.590342   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:27.728534   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:27.789092   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:28.090731   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:28.221357   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:28.287631   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:28.591237   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:28.721136   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:28.788255   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:29.094816   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:29.221723   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:29.324337   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:29.591209   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:29.731131   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:29.791102   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:30.090959   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:30.221022   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:30.287712   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:30.589650   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:30.721231   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:31.166633   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:31.174196   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:31.222657   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:31.287639   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:31.590185   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:31.721715   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:31.787945   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:32.090596   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:32.221007   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:32.297668   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:32.590251   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:32.722285   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:32.823389   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:33.090520   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:33.221464   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:33.288488   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:33.590506   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:33.721362   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:33.788081   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:34.090772   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:34.221190   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:34.324201   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:34.591100   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:34.720756   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:34.787553   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:35.090130   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:35.221979   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:35.323481   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:35.592963   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:35.722628   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:35.787040   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:36.090803   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:36.221681   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:36.287211   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:36.815923   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:36.816167   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:36.816679   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:37.091275   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:37.221213   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:37.324534   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:37.591362   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:37.720595   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:37.788149   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:38.090170   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:38.221938   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:38.288471   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:38.591305   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:38.721753   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:38.787820   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:39.091497   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:39.221652   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:39.287640   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:39.592701   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:39.721926   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:39.788428   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:40.090820   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:40.224114   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:40.327922   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:40.590780   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:40.720751   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:40.787444   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:41.090433   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:41.221742   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:41.290084   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:41.591323   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:41.720682   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:41.788828   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:42.090543   18382 kapi.go:107] duration metric: took 1m3.504873508s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1204 19:54:42.221084   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:42.287969   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:42.721269   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:42.823472   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:43.221085   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:43.324103   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:43.722482   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:43.788780   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:44.222393   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:44.288866   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:44.721251   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:44.788474   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:45.221991   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:45.727116   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:45.850385   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:45.850861   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:46.220497   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:46.287521   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:46.721018   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:46.787618   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:47.220859   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:47.287828   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:47.733565   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:47.788501   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:48.220897   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:48.287242   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:48.722270   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:48.788731   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:49.222517   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:49.323716   18382 kapi.go:107] duration metric: took 1m12.04037086s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1204 19:54:49.721726   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:50.222093   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:50.721787   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:51.221050   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:51.720902   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:52.220885   18382 kapi.go:107] duration metric: took 1m12.503353084s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1204 19:54:52.222316   18382 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-153447 cluster.
	I1204 19:54:52.223620   18382 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1204 19:54:52.225054   18382 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1204 19:54:52.226490   18382 out.go:177] * Enabled addons: amd-gpu-device-plugin, cloud-spanner, nvidia-device-plugin, ingress-dns, storage-provisioner-rancher, metrics-server, inspektor-gadget, storage-provisioner, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1204 19:54:52.227818   18382 addons.go:510] duration metric: took 1m23.877592502s for enable addons: enabled=[amd-gpu-device-plugin cloud-spanner nvidia-device-plugin ingress-dns storage-provisioner-rancher metrics-server inspektor-gadget storage-provisioner yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1204 19:54:52.227857   18382 start.go:246] waiting for cluster config update ...
	I1204 19:54:52.227875   18382 start.go:255] writing updated cluster config ...
	I1204 19:54:52.228096   18382 ssh_runner.go:195] Run: rm -f paused
	I1204 19:54:52.279506   18382 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1204 19:54:52.281406   18382 out.go:177] * Done! kubectl is now configured to use "addons-153447" cluster and "default" namespace by default
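
	(Editor's note on the gcp-auth hint printed above: the addon's opt-out is a pod label named gcp-auth-skip-secret. A minimal sketch of a pod manifest carrying that label is shown below; the pod name and image are hypothetical placeholders, not objects from this test run, and exact value handling may vary by addon version.)

	apiVersion: v1
	kind: Pod
	metadata:
	  name: example-no-gcp-auth          # hypothetical name, for illustration only
	  labels:
	    gcp-auth-skip-secret: "true"     # asks the gcp-auth webhook not to mount credentials into this pod
	spec:
	  containers:
	  - name: app
	    image: nginx                     # placeholder image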
	
	
	==> CRI-O <==
	Dec 04 19:58:49 addons-153447 crio[661]: time="2024-12-04 19:58:49.666860275Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=999f570b-744d-4679-8cee-9f7f52ed4401 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 19:58:49 addons-153447 crio[661]: time="2024-12-04 19:58:49.667319792Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:99a998b51456d7df3e26e7d14db3da9e2b5347d3c277997fcfa50ef64778435b,PodSandboxId:98d88213686a06ac2448aeba289440f615d9bc446c550084aab46c7a56baa1ac,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733342190242336220,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 13f1323b-f52e-49ea-b039-e6312cb1e3a8,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98a6653243f4cdaeec8f1241f1fe88776c16b26e8ac7966525a5e802eb791e5e,PodSandboxId:494d9fa01ba2798ca935194f6e3350b33fef5fcce2098096b9a761f8aa986dd1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733342095451808238,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d848bd2e-9b52-4694-a820-ad62fd4c3be4,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a771fd51c05d9e34468f562f135ac9d84173c352c762e0cccfba52171f8af1f5,PodSandboxId:3679e5530db3f9787d73e5665442295ce6c42e0e3644b99c9bda5f762e958ac8,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1733342088454170898,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-f6frm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e6800bc7-e04a-4720-a4c3-a48990ab58c5,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:2db9417edc3057a7027ed992ae74351bf95af4bb514e806bf9d1d7f2710e4cca,PodSandboxId:9ec8f2880007d88b9fe0c7483773c27292fedf41ea00dddeb86349aa5c3f678c,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1733342069781912919,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-hbf4j,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 753bceff-d6ce-480f-8189-fb5950c00513,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:902734e47ab3e3c5aa871cae2ae3163c24e624e07aa7d111ac8bf8c3ffc230f6,PodSandboxId:14391fed3f78ad26b6e6dabae8d685f5031a97b58bd40d0341218fd3c746dd8d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1733342069212368991,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-x7j5f,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: af400b19-f03c-4778-b608-e525f19e468c,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c07dfedc1ada68a174170207a2615403bbe1965fd4ea4a26877db9309fbad342,PodSandboxId:55adec096f97817324630133ab6b3de471d86a056a46d86b0cd183e7248fd92f,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733342054722938154,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-gpnml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e5584b2-5c1f-4acb-93d3-614ecdb4794c,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0581d9d4c53cf5e2358826e93c8571bd24369b7acefc41a1dbc647af6449002c,PodSandboxId:6a51138af00ec9bf05a041cb4e45f73e3608d408e9bb25194998411429ad39b7,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256
:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1733342026991084708,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-7r8d9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe74ca1b-56c6-4e61-8ec2-380d38f63b82,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79c9be5be2753465b346f21c37d08daedd2dcafb23c7b77032be80199abf21ab,PodSandboxId:c5e9212b4058098919f070de350fccca2456e86cf140cc73163bd50b43ff4032,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:
gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1733342024549915990,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fdb24ab-7096-4556-a232-4d26f7552507,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:391f85cfe864475a0c2d61369bbd41290d22a8e75ac3329164db31b55ba11afa,PodSandboxId:2d3fcf3574200b72f
e25f470035d3a5e11ce339cdd3ed8f358080bf9e4e3f674,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733342015186733392,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa71a22c-f55d-460d-b2cc-7aa569c3badc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbdb1435874e254c8867445c82450eee98f51936cf174414d00ed45f26e69f33,PodSandboxId:78dcffa5b119fa11384ea06fe6445
8a2dd5b36bb628879adb691da688c1307b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733342012665469779,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mq69t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc725230-25f4-41a8-8292-110a5d46949e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fb3e6fbdfcdc78ea16acfec66223a75386aa0b25179acdc030c8328f1ec1897,PodSandboxId:66fd6ef2f8a7f7f8a4086dafd7f8c722b57e4b398e65aa6f6bc3a541c30bf483,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733342010451233797,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zf92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c194c0d0-590f-41dc-9ca2-83e611918692,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuber
netes.pod.terminationGracePeriod: 30,},},&Container{Id:0968f9cd07b6cb4badcf25f2c35a7416e0d6ae2e8db19d3ccebfbb94d22edb41,PodSandboxId:a82f6435852f1e06d5eeda9a3f88697f48a23b96ae5a93954debe25bcda06fd7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733341998587434186,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-153447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8091e734d2e8f14a2640ddb84c9423f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:03d71ccb7c47ff2ecafe51cb6ed263134035f4bb51a45a8f25b95e6a5d8bb317,PodSandboxId:52a32abb1b2e493157f0f6e2b1384c70732c58f062ff051cf344dfc6e6b17344,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733341998561399000,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-153447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dac8d3f79be5cd845ac5403f9a23e9c,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,
},},&Container{Id:58bddd2348673b2e584431b273d589726112993e08800620927b688f0af8bdb3,PodSandboxId:51e0888cf7dc0904f031aaa3cc6adb99f0614c63c90e72d086bb3f2dd20ffc55,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733341998586009484,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-153447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83421709f81f03fcb932ca7e3849403e,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id
:ed3ce6a0cfea9424c54654b2d37df2f31443f49f9dffc15f58c253ddfe1c1ed9,PodSandboxId:b645c9e24232854ffab6ceefc3c13bf33a41e59df36ccf0880e8a69b9e4d091d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733341998558003629,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-153447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e82a001b021d128f7692896be19270c,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/inte
rceptors.go:74" id=999f570b-744d-4679-8cee-9f7f52ed4401 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 19:58:49 addons-153447 crio[661]: time="2024-12-04 19:58:49.707086407Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=75418d2d-1dc2-4992-b0ec-3bfbd2e2d6e5 name=/runtime.v1.RuntimeService/Version
	Dec 04 19:58:49 addons-153447 crio[661]: time="2024-12-04 19:58:49.707161188Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=75418d2d-1dc2-4992-b0ec-3bfbd2e2d6e5 name=/runtime.v1.RuntimeService/Version
	Dec 04 19:58:49 addons-153447 crio[661]: time="2024-12-04 19:58:49.708224732Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7607ec55-ff06-48a2-9271-faabb1c3dbe0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 19:58:49 addons-153447 crio[661]: time="2024-12-04 19:58:49.709890813Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733342329709860637,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595901,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7607ec55-ff06-48a2-9271-faabb1c3dbe0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 19:58:49 addons-153447 crio[661]: time="2024-12-04 19:58:49.710685258Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6e3e7d2f-62e8-4e3b-9edd-70522b277f1a name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 19:58:49 addons-153447 crio[661]: time="2024-12-04 19:58:49.710770480Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6e3e7d2f-62e8-4e3b-9edd-70522b277f1a name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 19:58:49 addons-153447 crio[661]: time="2024-12-04 19:58:49.711802040Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:99a998b51456d7df3e26e7d14db3da9e2b5347d3c277997fcfa50ef64778435b,PodSandboxId:98d88213686a06ac2448aeba289440f615d9bc446c550084aab46c7a56baa1ac,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733342190242336220,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 13f1323b-f52e-49ea-b039-e6312cb1e3a8,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98a6653243f4cdaeec8f1241f1fe88776c16b26e8ac7966525a5e802eb791e5e,PodSandboxId:494d9fa01ba2798ca935194f6e3350b33fef5fcce2098096b9a761f8aa986dd1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733342095451808238,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d848bd2e-9b52-4694-a820-ad62fd4c3be4,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a771fd51c05d9e34468f562f135ac9d84173c352c762e0cccfba52171f8af1f5,PodSandboxId:3679e5530db3f9787d73e5665442295ce6c42e0e3644b99c9bda5f762e958ac8,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1733342088454170898,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-f6frm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e6800bc7-e04a-4720-a4c3-a48990ab58c5,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:2db9417edc3057a7027ed992ae74351bf95af4bb514e806bf9d1d7f2710e4cca,PodSandboxId:9ec8f2880007d88b9fe0c7483773c27292fedf41ea00dddeb86349aa5c3f678c,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1733342069781912919,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-hbf4j,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 753bceff-d6ce-480f-8189-fb5950c00513,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:902734e47ab3e3c5aa871cae2ae3163c24e624e07aa7d111ac8bf8c3ffc230f6,PodSandboxId:14391fed3f78ad26b6e6dabae8d685f5031a97b58bd40d0341218fd3c746dd8d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1733342069212368991,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-x7j5f,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: af400b19-f03c-4778-b608-e525f19e468c,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c07dfedc1ada68a174170207a2615403bbe1965fd4ea4a26877db9309fbad342,PodSandboxId:55adec096f97817324630133ab6b3de471d86a056a46d86b0cd183e7248fd92f,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733342054722938154,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-gpnml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e5584b2-5c1f-4acb-93d3-614ecdb4794c,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0581d9d4c53cf5e2358826e93c8571bd24369b7acefc41a1dbc647af6449002c,PodSandboxId:6a51138af00ec9bf05a041cb4e45f73e3608d408e9bb25194998411429ad39b7,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256
:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1733342026991084708,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-7r8d9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe74ca1b-56c6-4e61-8ec2-380d38f63b82,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79c9be5be2753465b346f21c37d08daedd2dcafb23c7b77032be80199abf21ab,PodSandboxId:c5e9212b4058098919f070de350fccca2456e86cf140cc73163bd50b43ff4032,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:
gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1733342024549915990,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fdb24ab-7096-4556-a232-4d26f7552507,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:391f85cfe864475a0c2d61369bbd41290d22a8e75ac3329164db31b55ba11afa,PodSandboxId:2d3fcf3574200b72f
e25f470035d3a5e11ce339cdd3ed8f358080bf9e4e3f674,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733342015186733392,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa71a22c-f55d-460d-b2cc-7aa569c3badc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbdb1435874e254c8867445c82450eee98f51936cf174414d00ed45f26e69f33,PodSandboxId:78dcffa5b119fa11384ea06fe6445
8a2dd5b36bb628879adb691da688c1307b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733342012665469779,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mq69t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc725230-25f4-41a8-8292-110a5d46949e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fb3e6fbdfcdc78ea16acfec66223a75386aa0b25179acdc030c8328f1ec1897,PodSandboxId:66fd6ef2f8a7f7f8a4086dafd7f8c722b57e4b398e65aa6f6bc3a541c30bf483,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733342010451233797,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zf92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c194c0d0-590f-41dc-9ca2-83e611918692,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuber
netes.pod.terminationGracePeriod: 30,},},&Container{Id:0968f9cd07b6cb4badcf25f2c35a7416e0d6ae2e8db19d3ccebfbb94d22edb41,PodSandboxId:a82f6435852f1e06d5eeda9a3f88697f48a23b96ae5a93954debe25bcda06fd7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733341998587434186,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-153447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8091e734d2e8f14a2640ddb84c9423f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:03d71ccb7c47ff2ecafe51cb6ed263134035f4bb51a45a8f25b95e6a5d8bb317,PodSandboxId:52a32abb1b2e493157f0f6e2b1384c70732c58f062ff051cf344dfc6e6b17344,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733341998561399000,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-153447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dac8d3f79be5cd845ac5403f9a23e9c,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,
},},&Container{Id:58bddd2348673b2e584431b273d589726112993e08800620927b688f0af8bdb3,PodSandboxId:51e0888cf7dc0904f031aaa3cc6adb99f0614c63c90e72d086bb3f2dd20ffc55,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733341998586009484,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-153447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83421709f81f03fcb932ca7e3849403e,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id
:ed3ce6a0cfea9424c54654b2d37df2f31443f49f9dffc15f58c253ddfe1c1ed9,PodSandboxId:b645c9e24232854ffab6ceefc3c13bf33a41e59df36ccf0880e8a69b9e4d091d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733341998558003629,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-153447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e82a001b021d128f7692896be19270c,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/inte
rceptors.go:74" id=6e3e7d2f-62e8-4e3b-9edd-70522b277f1a name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 19:58:49 addons-153447 crio[661]: time="2024-12-04 19:58:49.740434343Z" level=debug msg="Content-Type from manifest GET is \"application/vnd.docker.distribution.manifest.list.v2+json\"" file="docker/docker_client.go:964"
	Dec 04 19:58:49 addons-153447 crio[661]: time="2024-12-04 19:58:49.740673565Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" file="docker/docker_client.go:631"
	Dec 04 19:58:49 addons-153447 crio[661]: time="2024-12-04 19:58:49.752112570Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f5172148-c219-4cb6-8e8a-6ef8ffc813bb name=/runtime.v1.RuntimeService/Version
	Dec 04 19:58:49 addons-153447 crio[661]: time="2024-12-04 19:58:49.752184514Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f5172148-c219-4cb6-8e8a-6ef8ffc813bb name=/runtime.v1.RuntimeService/Version
	Dec 04 19:58:49 addons-153447 crio[661]: time="2024-12-04 19:58:49.753926626Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c7409f67-f7ff-439a-aa37-f64da8598911 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 19:58:49 addons-153447 crio[661]: time="2024-12-04 19:58:49.755096763Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733342329755063377,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595901,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c7409f67-f7ff-439a-aa37-f64da8598911 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 19:58:49 addons-153447 crio[661]: time="2024-12-04 19:58:49.755766903Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a6434b3a-135e-49c1-b053-b78b347bb9d8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 19:58:49 addons-153447 crio[661]: time="2024-12-04 19:58:49.755824066Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a6434b3a-135e-49c1-b053-b78b347bb9d8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 19:58:49 addons-153447 crio[661]: time="2024-12-04 19:58:49.756192304Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:99a998b51456d7df3e26e7d14db3da9e2b5347d3c277997fcfa50ef64778435b,PodSandboxId:98d88213686a06ac2448aeba289440f615d9bc446c550084aab46c7a56baa1ac,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733342190242336220,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 13f1323b-f52e-49ea-b039-e6312cb1e3a8,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98a6653243f4cdaeec8f1241f1fe88776c16b26e8ac7966525a5e802eb791e5e,PodSandboxId:494d9fa01ba2798ca935194f6e3350b33fef5fcce2098096b9a761f8aa986dd1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733342095451808238,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d848bd2e-9b52-4694-a820-ad62fd4c3be4,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a771fd51c05d9e34468f562f135ac9d84173c352c762e0cccfba52171f8af1f5,PodSandboxId:3679e5530db3f9787d73e5665442295ce6c42e0e3644b99c9bda5f762e958ac8,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1733342088454170898,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-f6frm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e6800bc7-e04a-4720-a4c3-a48990ab58c5,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:2db9417edc3057a7027ed992ae74351bf95af4bb514e806bf9d1d7f2710e4cca,PodSandboxId:9ec8f2880007d88b9fe0c7483773c27292fedf41ea00dddeb86349aa5c3f678c,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1733342069781912919,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-hbf4j,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 753bceff-d6ce-480f-8189-fb5950c00513,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:902734e47ab3e3c5aa871cae2ae3163c24e624e07aa7d111ac8bf8c3ffc230f6,PodSandboxId:14391fed3f78ad26b6e6dabae8d685f5031a97b58bd40d0341218fd3c746dd8d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1733342069212368991,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-x7j5f,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: af400b19-f03c-4778-b608-e525f19e468c,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c07dfedc1ada68a174170207a2615403bbe1965fd4ea4a26877db9309fbad342,PodSandboxId:55adec096f97817324630133ab6b3de471d86a056a46d86b0cd183e7248fd92f,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733342054722938154,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-gpnml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e5584b2-5c1f-4acb-93d3-614ecdb4794c,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0581d9d4c53cf5e2358826e93c8571bd24369b7acefc41a1dbc647af6449002c,PodSandboxId:6a51138af00ec9bf05a041cb4e45f73e3608d408e9bb25194998411429ad39b7,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256
:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1733342026991084708,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-7r8d9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe74ca1b-56c6-4e61-8ec2-380d38f63b82,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79c9be5be2753465b346f21c37d08daedd2dcafb23c7b77032be80199abf21ab,PodSandboxId:c5e9212b4058098919f070de350fccca2456e86cf140cc73163bd50b43ff4032,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:
gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1733342024549915990,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fdb24ab-7096-4556-a232-4d26f7552507,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:391f85cfe864475a0c2d61369bbd41290d22a8e75ac3329164db31b55ba11afa,PodSandboxId:2d3fcf3574200b72f
e25f470035d3a5e11ce339cdd3ed8f358080bf9e4e3f674,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733342015186733392,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa71a22c-f55d-460d-b2cc-7aa569c3badc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbdb1435874e254c8867445c82450eee98f51936cf174414d00ed45f26e69f33,PodSandboxId:78dcffa5b119fa11384ea06fe6445
8a2dd5b36bb628879adb691da688c1307b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733342012665469779,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mq69t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc725230-25f4-41a8-8292-110a5d46949e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fb3e6fbdfcdc78ea16acfec66223a75386aa0b25179acdc030c8328f1ec1897,PodSandboxId:66fd6ef2f8a7f7f8a4086dafd7f8c722b57e4b398e65aa6f6bc3a541c30bf483,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733342010451233797,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zf92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c194c0d0-590f-41dc-9ca2-83e611918692,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuber
netes.pod.terminationGracePeriod: 30,},},&Container{Id:0968f9cd07b6cb4badcf25f2c35a7416e0d6ae2e8db19d3ccebfbb94d22edb41,PodSandboxId:a82f6435852f1e06d5eeda9a3f88697f48a23b96ae5a93954debe25bcda06fd7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733341998587434186,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-153447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8091e734d2e8f14a2640ddb84c9423f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:03d71ccb7c47ff2ecafe51cb6ed263134035f4bb51a45a8f25b95e6a5d8bb317,PodSandboxId:52a32abb1b2e493157f0f6e2b1384c70732c58f062ff051cf344dfc6e6b17344,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733341998561399000,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-153447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dac8d3f79be5cd845ac5403f9a23e9c,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,
},},&Container{Id:58bddd2348673b2e584431b273d589726112993e08800620927b688f0af8bdb3,PodSandboxId:51e0888cf7dc0904f031aaa3cc6adb99f0614c63c90e72d086bb3f2dd20ffc55,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733341998586009484,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-153447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83421709f81f03fcb932ca7e3849403e,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id
:ed3ce6a0cfea9424c54654b2d37df2f31443f49f9dffc15f58c253ddfe1c1ed9,PodSandboxId:b645c9e24232854ffab6ceefc3c13bf33a41e59df36ccf0880e8a69b9e4d091d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733341998558003629,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-153447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e82a001b021d128f7692896be19270c,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/inte
rceptors.go:74" id=a6434b3a-135e-49c1-b053-b78b347bb9d8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 19:58:49 addons-153447 crio[661]: time="2024-12-04 19:58:49.786390864Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1ff2dcc9-e16f-4d3c-b4b5-4b5c5e00d669 name=/runtime.v1.RuntimeService/Version
	Dec 04 19:58:49 addons-153447 crio[661]: time="2024-12-04 19:58:49.786478749Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1ff2dcc9-e16f-4d3c-b4b5-4b5c5e00d669 name=/runtime.v1.RuntimeService/Version
	Dec 04 19:58:49 addons-153447 crio[661]: time="2024-12-04 19:58:49.787860252Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f8a3ae34-85fc-4aac-bd8e-3b0d7e2ac707 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 19:58:49 addons-153447 crio[661]: time="2024-12-04 19:58:49.789209794Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733342329789183280,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595901,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f8a3ae34-85fc-4aac-bd8e-3b0d7e2ac707 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 19:58:49 addons-153447 crio[661]: time="2024-12-04 19:58:49.789862279Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=62e469ae-a7f9-4519-b97c-efbc4ee05e33 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 19:58:49 addons-153447 crio[661]: time="2024-12-04 19:58:49.789916874Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=62e469ae-a7f9-4519-b97c-efbc4ee05e33 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 19:58:49 addons-153447 crio[661]: time="2024-12-04 19:58:49.790267092Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:99a998b51456d7df3e26e7d14db3da9e2b5347d3c277997fcfa50ef64778435b,PodSandboxId:98d88213686a06ac2448aeba289440f615d9bc446c550084aab46c7a56baa1ac,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733342190242336220,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 13f1323b-f52e-49ea-b039-e6312cb1e3a8,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98a6653243f4cdaeec8f1241f1fe88776c16b26e8ac7966525a5e802eb791e5e,PodSandboxId:494d9fa01ba2798ca935194f6e3350b33fef5fcce2098096b9a761f8aa986dd1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733342095451808238,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d848bd2e-9b52-4694-a820-ad62fd4c3be4,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a771fd51c05d9e34468f562f135ac9d84173c352c762e0cccfba52171f8af1f5,PodSandboxId:3679e5530db3f9787d73e5665442295ce6c42e0e3644b99c9bda5f762e958ac8,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1733342088454170898,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-f6frm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e6800bc7-e04a-4720-a4c3-a48990ab58c5,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:2db9417edc3057a7027ed992ae74351bf95af4bb514e806bf9d1d7f2710e4cca,PodSandboxId:9ec8f2880007d88b9fe0c7483773c27292fedf41ea00dddeb86349aa5c3f678c,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1733342069781912919,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-hbf4j,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 753bceff-d6ce-480f-8189-fb5950c00513,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:902734e47ab3e3c5aa871cae2ae3163c24e624e07aa7d111ac8bf8c3ffc230f6,PodSandboxId:14391fed3f78ad26b6e6dabae8d685f5031a97b58bd40d0341218fd3c746dd8d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1733342069212368991,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-x7j5f,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: af400b19-f03c-4778-b608-e525f19e468c,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c07dfedc1ada68a174170207a2615403bbe1965fd4ea4a26877db9309fbad342,PodSandboxId:55adec096f97817324630133ab6b3de471d86a056a46d86b0cd183e7248fd92f,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733342054722938154,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-gpnml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e5584b2-5c1f-4acb-93d3-614ecdb4794c,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0581d9d4c53cf5e2358826e93c8571bd24369b7acefc41a1dbc647af6449002c,PodSandboxId:6a51138af00ec9bf05a041cb4e45f73e3608d408e9bb25194998411429ad39b7,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256
:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1733342026991084708,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-7r8d9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe74ca1b-56c6-4e61-8ec2-380d38f63b82,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79c9be5be2753465b346f21c37d08daedd2dcafb23c7b77032be80199abf21ab,PodSandboxId:c5e9212b4058098919f070de350fccca2456e86cf140cc73163bd50b43ff4032,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:
gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1733342024549915990,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fdb24ab-7096-4556-a232-4d26f7552507,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:391f85cfe864475a0c2d61369bbd41290d22a8e75ac3329164db31b55ba11afa,PodSandboxId:2d3fcf3574200b72f
e25f470035d3a5e11ce339cdd3ed8f358080bf9e4e3f674,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733342015186733392,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa71a22c-f55d-460d-b2cc-7aa569c3badc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbdb1435874e254c8867445c82450eee98f51936cf174414d00ed45f26e69f33,PodSandboxId:78dcffa5b119fa11384ea06fe6445
8a2dd5b36bb628879adb691da688c1307b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733342012665469779,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mq69t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc725230-25f4-41a8-8292-110a5d46949e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fb3e6fbdfcdc78ea16acfec66223a75386aa0b25179acdc030c8328f1ec1897,PodSandboxId:66fd6ef2f8a7f7f8a4086dafd7f8c722b57e4b398e65aa6f6bc3a541c30bf483,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733342010451233797,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zf92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c194c0d0-590f-41dc-9ca2-83e611918692,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuber
netes.pod.terminationGracePeriod: 30,},},&Container{Id:0968f9cd07b6cb4badcf25f2c35a7416e0d6ae2e8db19d3ccebfbb94d22edb41,PodSandboxId:a82f6435852f1e06d5eeda9a3f88697f48a23b96ae5a93954debe25bcda06fd7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733341998587434186,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-153447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8091e734d2e8f14a2640ddb84c9423f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:03d71ccb7c47ff2ecafe51cb6ed263134035f4bb51a45a8f25b95e6a5d8bb317,PodSandboxId:52a32abb1b2e493157f0f6e2b1384c70732c58f062ff051cf344dfc6e6b17344,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733341998561399000,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-153447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dac8d3f79be5cd845ac5403f9a23e9c,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,
},},&Container{Id:58bddd2348673b2e584431b273d589726112993e08800620927b688f0af8bdb3,PodSandboxId:51e0888cf7dc0904f031aaa3cc6adb99f0614c63c90e72d086bb3f2dd20ffc55,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733341998586009484,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-153447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83421709f81f03fcb932ca7e3849403e,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id
:ed3ce6a0cfea9424c54654b2d37df2f31443f49f9dffc15f58c253ddfe1c1ed9,PodSandboxId:b645c9e24232854ffab6ceefc3c13bf33a41e59df36ccf0880e8a69b9e4d091d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733341998558003629,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-153447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e82a001b021d128f7692896be19270c,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/inte
rceptors.go:74" id=62e469ae-a7f9-4519-b97c-efbc4ee05e33 name=/runtime.v1.RuntimeService/ListContainers
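
The repeated ListContainers responses above are routine CRI polling by the kubelet; each call returns the same set of containers, so on their own they do not point at the failure. To reproduce this listing directly against CRI-O inside the minikube VM, a minimal sketch (assuming the addons-153447 profile from this run is still up):

    minikube ssh -p addons-153447 -- sudo crictl ps -a

The -a flag includes exited containers, which is why the admission create/patch jobs also show up in the status table below.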
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	99a998b51456d       docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4                              2 minutes ago       Running             nginx                     0                   98d88213686a0       nginx
	98a6653243f4c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   494d9fa01ba27       busybox
	a771fd51c05d9       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             4 minutes ago       Running             controller                0                   3679e5530db3f       ingress-nginx-controller-5f85ff4588-f6frm
	2db9417edc305       a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb                                                             4 minutes ago       Exited              patch                     1                   9ec8f2880007d       ingress-nginx-admission-patch-hbf4j
	902734e47ab3e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   4 minutes ago       Exited              create                    0                   14391fed3f78a       ingress-nginx-admission-create-x7j5f
	c07dfedc1ada6       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        4 minutes ago       Running             metrics-server            0                   55adec096f978       metrics-server-84c5f94fbc-gpnml
	0581d9d4c53cf       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     5 minutes ago       Running             amd-gpu-device-plugin     0                   6a51138af00ec       amd-gpu-device-plugin-7r8d9
	79c9be5be2753       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             5 minutes ago       Running             minikube-ingress-dns      0                   c5e9212b40580       kube-ingress-dns-minikube
	391f85cfe8644       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   2d3fcf3574200       storage-provisioner
	fbdb1435874e2       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             5 minutes ago       Running             coredns                   0                   78dcffa5b119f       coredns-7c65d6cfc9-mq69t
	9fb3e6fbdfcdc       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                             5 minutes ago       Running             kube-proxy                0                   66fd6ef2f8a7f       kube-proxy-zf92b
	0968f9cd07b6c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             5 minutes ago       Running             etcd                      0                   a82f6435852f1       etcd-addons-153447
	58bddd2348673       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                             5 minutes ago       Running             kube-scheduler            0                   51e0888cf7dc0       kube-scheduler-addons-153447
	03d71ccb7c47f       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                             5 minutes ago       Running             kube-controller-manager   0                   52a32abb1b2e4       kube-controller-manager-addons-153447
	ed3ce6a0cfea9       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                             5 minutes ago       Running             kube-apiserver            0                   b645c9e242328       kube-apiserver-addons-153447
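
The two Exited entries are the ingress-nginx admission create and patch jobs, which are expected to run to completion and are not failures by themselves. If their output is needed, the container logs can be pulled by ID; a sketch, reusing the truncated ID from the table above (fall back to the full 64-character ID from the ListContainers dump if the prefix is ambiguous):

    minikube ssh -p addons-153447 -- sudo crictl logs 2db9417edc305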
	
	
	==> coredns [fbdb1435874e254c8867445c82450eee98f51936cf174414d00ed45f26e69f33] <==
	[INFO] 10.244.0.7:38932 - 56401 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.00011955s
	[INFO] 10.244.0.7:38932 - 17539 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000073466s
	[INFO] 10.244.0.7:38932 - 3092 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000085206s
	[INFO] 10.244.0.7:38932 - 16339 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000091418s
	[INFO] 10.244.0.7:38932 - 19140 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000105152s
	[INFO] 10.244.0.7:38932 - 57162 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000114016s
	[INFO] 10.244.0.7:38932 - 3997 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000125361s
	[INFO] 10.244.0.7:36218 - 24682 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000183463s
	[INFO] 10.244.0.7:36218 - 24960 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000079844s
	[INFO] 10.244.0.7:55947 - 49216 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00009323s
	[INFO] 10.244.0.7:55947 - 49435 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000074201s
	[INFO] 10.244.0.7:38388 - 55460 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000083852s
	[INFO] 10.244.0.7:38388 - 55660 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000128094s
	[INFO] 10.244.0.7:40853 - 16087 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00006819s
	[INFO] 10.244.0.7:40853 - 15842 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000056782s
	[INFO] 10.244.0.23:33241 - 63294 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000919567s
	[INFO] 10.244.0.23:52655 - 2035 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.0013234s
	[INFO] 10.244.0.23:41938 - 26013 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000185354s
	[INFO] 10.244.0.23:49244 - 37294 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000125066s
	[INFO] 10.244.0.23:44115 - 36286 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000096264s
	[INFO] 10.244.0.23:52974 - 14410 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.0001695s
	[INFO] 10.244.0.23:52364 - 27030 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.000994622s
	[INFO] 10.244.0.23:58274 - 24606 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001429256s
	[INFO] 10.244.0.28:43069 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000416015s
	[INFO] 10.244.0.28:43303 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000257859s
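
The coredns entries above show normal search-path expansion: NXDOMAIN for the .kube-system.svc.cluster.local, .svc.cluster.local and .cluster.local suffixes, then a NOERROR answer for the fully qualified name, so in-cluster DNS was resolving at this point. A quick way to re-check resolution from inside the cluster (a sketch; the pod name dns-test is arbitrary):

    kubectl --context addons-153447 run dns-test --image=busybox:1.28 --rm -it --restart=Never -- nslookup registry.kube-system.svc.cluster.local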
	
	
	==> describe nodes <==
	Name:               addons-153447
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-153447
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59
	                    minikube.k8s.io/name=addons-153447
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_04T19_53_24_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-153447
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 19:53:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-153447
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 19:58:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 19:57:09 +0000   Wed, 04 Dec 2024 19:53:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 19:57:09 +0000   Wed, 04 Dec 2024 19:53:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 19:57:09 +0000   Wed, 04 Dec 2024 19:53:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 19:57:09 +0000   Wed, 04 Dec 2024 19:53:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.11
	  Hostname:    addons-153447
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 96e0a87a584543abac0cd1c84dc4aae2
	  System UUID:                96e0a87a-5845-43ab-ac0c-d1c84dc4aae2
	  Boot ID:                    f46eeac8-29cc-4f15-9c3f-b9f5c9897c18
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  default                     hello-world-app-55bf9c44b4-lvjlq             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  ingress-nginx               ingress-nginx-controller-5f85ff4588-f6frm    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m14s
	  kube-system                 amd-gpu-device-plugin-7r8d9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 coredns-7c65d6cfc9-mq69t                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m21s
	  kube-system                 etcd-addons-153447                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m27s
	  kube-system                 kube-apiserver-addons-153447                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 kube-controller-manager-addons-153447        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-proxy-zf92b                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 kube-scheduler-addons-153447                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 metrics-server-84c5f94fbc-gpnml              100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         5m17s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m17s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m33s (x8 over 5m33s)  kubelet          Node addons-153447 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m33s (x8 over 5m33s)  kubelet          Node addons-153447 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m33s (x7 over 5m33s)  kubelet          Node addons-153447 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m27s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m27s                  kubelet          Node addons-153447 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m27s                  kubelet          Node addons-153447 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m27s                  kubelet          Node addons-153447 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m26s                  kubelet          Node addons-153447 status is now: NodeReady
	  Normal  RegisteredNode           5m22s                  node-controller  Node addons-153447 event: Registered Node addons-153447 in Controller
	
	
	==> dmesg <==
	[  +0.075537] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.801267] systemd-fstab-generator[1324]: Ignoring "noauto" option for root device
	[  +1.185893] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.005668] kauditd_printk_skb: 125 callbacks suppressed
	[  +5.014799] kauditd_printk_skb: 113 callbacks suppressed
	[  +5.230337] kauditd_printk_skb: 77 callbacks suppressed
	[Dec 4 19:54] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.039477] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.511486] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.086826] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.229832] kauditd_printk_skb: 48 callbacks suppressed
	[  +5.758415] kauditd_printk_skb: 47 callbacks suppressed
	[  +7.098105] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.147270] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.296493] kauditd_printk_skb: 13 callbacks suppressed
	[Dec 4 19:55] kauditd_printk_skb: 6 callbacks suppressed
	[ +13.786172] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.905242] kauditd_printk_skb: 43 callbacks suppressed
	[  +5.667032] kauditd_printk_skb: 40 callbacks suppressed
	[  +7.404741] kauditd_printk_skb: 18 callbacks suppressed
	[Dec 4 19:56] kauditd_printk_skb: 19 callbacks suppressed
	[  +6.763557] kauditd_printk_skb: 7 callbacks suppressed
	[  +7.881895] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.238790] kauditd_printk_skb: 51 callbacks suppressed
	[  +6.382791] kauditd_printk_skb: 17 callbacks suppressed
	
	
	==> etcd [0968f9cd07b6cb4badcf25f2c35a7416e0d6ae2e8db19d3ccebfbb94d22edb41] <==
	{"level":"warn","ts":"2024-12-04T19:55:58.624220Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"440.175697ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2024-12-04T19:55:58.624235Z","caller":"traceutil/trace.go:171","msg":"trace[378325215] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1545; }","duration":"440.189256ms","start":"2024-12-04T19:55:58.184041Z","end":"2024-12-04T19:55:58.624230Z","steps":["trace[378325215] 'agreement among raft nodes before linearized reading'  (duration: 440.122944ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T19:55:58.624248Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-04T19:55:58.184002Z","time spent":"440.242257ms","remote":"127.0.0.1:57812","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":1,"response size":521,"request content":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" "}
	{"level":"warn","ts":"2024-12-04T19:55:58.624497Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"358.381134ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-04T19:55:58.624518Z","caller":"traceutil/trace.go:171","msg":"trace[893166183] range","detail":"{range_begin:/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account; range_end:; response_count:0; response_revision:1545; }","duration":"358.403732ms","start":"2024-12-04T19:55:58.266107Z","end":"2024-12-04T19:55:58.624511Z","steps":["trace[893166183] 'agreement among raft nodes before linearized reading'  (duration: 358.369968ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T19:55:58.624545Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-04T19:55:58.266069Z","time spent":"358.471066ms","remote":"127.0.0.1:57768","response type":"/etcdserverpb.KV/Range","request count":0,"request size":85,"response count":0,"response size":28,"request content":"key:\"/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account\" "}
	{"level":"warn","ts":"2024-12-04T19:55:58.624648Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"386.523255ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-04T19:55:58.624663Z","caller":"traceutil/trace.go:171","msg":"trace[345604867] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1545; }","duration":"386.538798ms","start":"2024-12-04T19:55:58.238120Z","end":"2024-12-04T19:55:58.624658Z","steps":["trace[345604867] 'agreement among raft nodes before linearized reading'  (duration: 386.514273ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T19:55:58.624676Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-04T19:55:58.238074Z","time spent":"386.600035ms","remote":"127.0.0.1:57584","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-12-04T19:55:58.624788Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"416.943353ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:553"}
	{"level":"info","ts":"2024-12-04T19:55:58.624802Z","caller":"traceutil/trace.go:171","msg":"trace[21098876] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1545; }","duration":"416.961539ms","start":"2024-12-04T19:55:58.207836Z","end":"2024-12-04T19:55:58.624797Z","steps":["trace[21098876] 'agreement among raft nodes before linearized reading'  (duration: 416.894676ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T19:55:58.624814Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-04T19:55:58.207787Z","time spent":"417.02409ms","remote":"127.0.0.1:57812","response type":"/etcdserverpb.KV/Range","request count":0,"request size":81,"response count":1,"response size":576,"request content":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" "}
	{"level":"info","ts":"2024-12-04T19:56:23.104048Z","caller":"traceutil/trace.go:171","msg":"trace[1313973066] linearizableReadLoop","detail":"{readStateIndex:1851; appliedIndex:1850; }","duration":"102.314883ms","start":"2024-12-04T19:56:23.001718Z","end":"2024-12-04T19:56:23.104033Z","steps":["trace[1313973066] 'read index received'  (duration: 102.194335ms)","trace[1313973066] 'applied index is now lower than readState.Index'  (duration: 119.944µs)"],"step_count":2}
	{"level":"warn","ts":"2024-12-04T19:56:23.104557Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.822597ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/addons-153447\" ","response":"range_response_count:1 size:895"}
	{"level":"info","ts":"2024-12-04T19:56:23.104588Z","caller":"traceutil/trace.go:171","msg":"trace[547629556] range","detail":"{range_begin:/registry/csinodes/addons-153447; range_end:; response_count:1; response_revision:1786; }","duration":"102.864494ms","start":"2024-12-04T19:56:23.001715Z","end":"2024-12-04T19:56:23.104579Z","steps":["trace[547629556] 'agreement among raft nodes before linearized reading'  (duration: 102.735206ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-04T19:56:23.313638Z","caller":"traceutil/trace.go:171","msg":"trace[636958666] transaction","detail":"{read_only:false; response_revision:1787; number_of_response:1; }","duration":"207.432244ms","start":"2024-12-04T19:56:23.106187Z","end":"2024-12-04T19:56:23.313619Z","steps":["trace[636958666] 'process raft request'  (duration: 201.205601ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-04T19:56:23.323930Z","caller":"traceutil/trace.go:171","msg":"trace[430165809] transaction","detail":"{read_only:false; response_revision:1788; number_of_response:1; }","duration":"215.568698ms","start":"2024-12-04T19:56:23.108353Z","end":"2024-12-04T19:56:23.323921Z","steps":["trace[430165809] 'process raft request'  (duration: 205.21506ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T19:56:23.327045Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"212.858924ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-04T19:56:23.327089Z","caller":"traceutil/trace.go:171","msg":"trace[1611073847] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1788; }","duration":"212.90997ms","start":"2024-12-04T19:56:23.114169Z","end":"2024-12-04T19:56:23.327079Z","steps":["trace[1611073847] 'agreement among raft nodes before linearized reading'  (duration: 212.802582ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-04T19:56:23.326919Z","caller":"traceutil/trace.go:171","msg":"trace[1636379888] linearizableReadLoop","detail":"{readStateIndex:1853; appliedIndex:1851; }","duration":"209.661667ms","start":"2024-12-04T19:56:23.114175Z","end":"2024-12-04T19:56:23.323837Z","steps":["trace[1636379888] 'read index received'  (duration: 193.225848ms)","trace[1636379888] 'applied index is now lower than readState.Index'  (duration: 16.434772ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-04T19:56:23.328493Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"202.923746ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/csi-hostpath-attacher-0\" ","response":"range_response_count:1 size:4153"}
	{"level":"info","ts":"2024-12-04T19:56:23.328528Z","caller":"traceutil/trace.go:171","msg":"trace[684763124] range","detail":"{range_begin:/registry/pods/kube-system/csi-hostpath-attacher-0; range_end:; response_count:1; response_revision:1788; }","duration":"202.97352ms","start":"2024-12-04T19:56:23.125542Z","end":"2024-12-04T19:56:23.328516Z","steps":["trace[684763124] 'agreement among raft nodes before linearized reading'  (duration: 202.843619ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T19:56:23.328739Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.816203ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/external-health-monitor-controller-runner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-04T19:56:23.328757Z","caller":"traceutil/trace.go:171","msg":"trace[1614675955] range","detail":"{range_begin:/registry/clusterroles/external-health-monitor-controller-runner; range_end:; response_count:0; response_revision:1788; }","duration":"149.837741ms","start":"2024-12-04T19:56:23.178913Z","end":"2024-12-04T19:56:23.328751Z","steps":["trace[1614675955] 'agreement among raft nodes before linearized reading'  (duration: 149.805155ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-04T19:57:04.123669Z","caller":"traceutil/trace.go:171","msg":"trace[629270860] transaction","detail":"{read_only:false; response_revision:1872; number_of_response:1; }","duration":"130.985911ms","start":"2024-12-04T19:57:03.992648Z","end":"2024-12-04T19:57:04.123634Z","steps":["trace[629270860] 'process raft request'  (duration: 130.56555ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:58:50 up 5 min,  0 users,  load average: 0.19, 0.82, 0.48
	Linux addons-153447 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ed3ce6a0cfea9424c54654b2d37df2f31443f49f9dffc15f58c253ddfe1c1ed9] <==
	I1204 19:55:24.408065       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1204 19:55:43.247635       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1204 19:55:43.256706       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1204 19:55:43.264429       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1204 19:55:57.061883       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.98.13.6"}
	E1204 19:55:58.626936       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1204 19:56:06.898018       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1204 19:56:20.960351       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1204 19:56:20.960435       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1204 19:56:20.981151       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1204 19:56:20.981267       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1204 19:56:21.009622       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1204 19:56:21.009724       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1204 19:56:21.048318       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1204 19:56:21.048767       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1204 19:56:21.118524       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1204 19:56:21.118619       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1204 19:56:22.049481       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1204 19:56:22.122720       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1204 19:56:22.130521       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I1204 19:56:22.651035       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1204 19:56:23.697731       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1204 19:56:26.037839       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1204 19:56:26.208364       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.14.133"}
	I1204 19:58:48.684446       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.155.118"}
	
	
	==> kube-controller-manager [03d71ccb7c47ff2ecafe51cb6ed263134035f4bb51a45a8f25b95e6a5d8bb317] <==
	W1204 19:57:03.472773       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 19:57:03.472829       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1204 19:57:09.057320       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-153447"
	W1204 19:57:22.501541       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 19:57:22.501757       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1204 19:57:31.154813       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 19:57:31.154926       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1204 19:57:32.988369       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 19:57:32.988486       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1204 19:57:37.517720       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 19:57:37.517778       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1204 19:58:11.410105       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 19:58:11.410254       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1204 19:58:19.880065       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 19:58:19.880135       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1204 19:58:31.521175       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 19:58:31.521262       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1204 19:58:34.517858       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 19:58:34.517923       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1204 19:58:47.980485       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 19:58:47.980675       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1204 19:58:48.509602       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="26.56867ms"
	I1204 19:58:48.526222       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="16.130849ms"
	I1204 19:58:48.544714       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="18.354865ms"
	I1204 19:58:48.544793       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="42.218µs"
	
	
	==> kube-proxy [9fb3e6fbdfcdc78ea16acfec66223a75386aa0b25179acdc030c8328f1ec1897] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1204 19:53:32.015228       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1204 19:53:32.046237       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.11"]
	E1204 19:53:32.046346       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1204 19:53:32.383994       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1204 19:53:32.384077       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1204 19:53:32.384108       1 server_linux.go:169] "Using iptables Proxier"
	I1204 19:53:32.424526       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1204 19:53:32.424833       1 server.go:483] "Version info" version="v1.31.2"
	I1204 19:53:32.424859       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 19:53:32.467227       1 config.go:199] "Starting service config controller"
	I1204 19:53:32.467259       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1204 19:53:32.467342       1 config.go:105] "Starting endpoint slice config controller"
	I1204 19:53:32.467348       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1204 19:53:32.467693       1 config.go:328] "Starting node config controller"
	I1204 19:53:32.467704       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1204 19:53:32.569386       1 shared_informer.go:320] Caches are synced for node config
	I1204 19:53:32.569453       1 shared_informer.go:320] Caches are synced for service config
	I1204 19:53:32.569499       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [58bddd2348673b2e584431b273d589726112993e08800620927b688f0af8bdb3] <==
	W1204 19:53:21.338704       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1204 19:53:21.338729       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 19:53:21.338788       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1204 19:53:21.338814       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1204 19:53:21.338864       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1204 19:53:21.338889       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 19:53:22.149716       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1204 19:53:22.149769       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 19:53:22.161177       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1204 19:53:22.161360       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 19:53:22.251420       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1204 19:53:22.251469       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1204 19:53:22.298007       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1204 19:53:22.298062       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 19:53:22.339430       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1204 19:53:22.339483       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 19:53:22.415479       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1204 19:53:22.415531       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 19:53:22.489679       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1204 19:53:22.489723       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 19:53:22.558149       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1204 19:53:22.558249       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 19:53:22.616848       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1204 19:53:22.616950       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1204 19:53:24.730858       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 04 19:57:34 addons-153447 kubelet[1201]: E1204 19:57:34.101189    1201 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733342254095909947,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595901,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 19:57:35 addons-153447 kubelet[1201]: I1204 19:57:35.874828    1201 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-7c65d6cfc9-mq69t" secret="" err="secret \"gcp-auth\" not found"
	Dec 04 19:57:44 addons-153447 kubelet[1201]: E1204 19:57:44.102985    1201 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733342264102755917,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595901,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 19:57:44 addons-153447 kubelet[1201]: E1204 19:57:44.103017    1201 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733342264102755917,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595901,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 19:57:54 addons-153447 kubelet[1201]: E1204 19:57:54.107318    1201 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733342274106826891,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595901,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 19:57:54 addons-153447 kubelet[1201]: E1204 19:57:54.107358    1201 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733342274106826891,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595901,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 19:58:04 addons-153447 kubelet[1201]: E1204 19:58:04.110537    1201 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733342284110094983,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595901,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 19:58:04 addons-153447 kubelet[1201]: E1204 19:58:04.110575    1201 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733342284110094983,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595901,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 19:58:14 addons-153447 kubelet[1201]: E1204 19:58:14.113501    1201 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733342294112904383,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595901,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 19:58:14 addons-153447 kubelet[1201]: E1204 19:58:14.114178    1201 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733342294112904383,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595901,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 19:58:23 addons-153447 kubelet[1201]: E1204 19:58:23.911234    1201 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 04 19:58:23 addons-153447 kubelet[1201]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 04 19:58:23 addons-153447 kubelet[1201]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 04 19:58:23 addons-153447 kubelet[1201]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 04 19:58:23 addons-153447 kubelet[1201]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 04 19:58:24 addons-153447 kubelet[1201]: E1204 19:58:24.116839    1201 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733342304116520068,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595901,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 19:58:24 addons-153447 kubelet[1201]: E1204 19:58:24.116862    1201 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733342304116520068,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595901,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 19:58:32 addons-153447 kubelet[1201]: I1204 19:58:32.874423    1201 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-7r8d9" secret="" err="secret \"gcp-auth\" not found"
	Dec 04 19:58:34 addons-153447 kubelet[1201]: E1204 19:58:34.119183    1201 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733342314118841176,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595901,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 19:58:34 addons-153447 kubelet[1201]: E1204 19:58:34.119221    1201 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733342314118841176,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595901,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 19:58:38 addons-153447 kubelet[1201]: I1204 19:58:38.874438    1201 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 04 19:58:44 addons-153447 kubelet[1201]: E1204 19:58:44.121939    1201 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733342324121336558,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595901,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 19:58:44 addons-153447 kubelet[1201]: E1204 19:58:44.122026    1201 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733342324121336558,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595901,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 19:58:48 addons-153447 kubelet[1201]: I1204 19:58:48.512435    1201 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx" podStartSLOduration=138.973076728 podStartE2EDuration="2m22.512406335s" podCreationTimestamp="2024-12-04 19:56:26 +0000 UTC" firstStartedPulling="2024-12-04 19:56:26.687613759 +0000 UTC m=+182.957496283" lastFinishedPulling="2024-12-04 19:56:30.226943363 +0000 UTC m=+186.496825890" observedRunningTime="2024-12-04 19:56:31.275929381 +0000 UTC m=+187.545811925" watchObservedRunningTime="2024-12-04 19:58:48.512406335 +0000 UTC m=+324.782288861"
	Dec 04 19:58:48 addons-153447 kubelet[1201]: I1204 19:58:48.545967    1201 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2mbc\" (UniqueName: \"kubernetes.io/projected/3d0d0101-4798-4b24-83fe-19eb2feea818-kube-api-access-t2mbc\") pod \"hello-world-app-55bf9c44b4-lvjlq\" (UID: \"3d0d0101-4798-4b24-83fe-19eb2feea818\") " pod="default/hello-world-app-55bf9c44b4-lvjlq"
	
	
	==> storage-provisioner [391f85cfe864475a0c2d61369bbd41290d22a8e75ac3329164db31b55ba11afa] <==
	I1204 19:53:35.729036       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1204 19:53:35.752925       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1204 19:53:35.753000       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1204 19:53:35.768392       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1204 19:53:35.768529       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-153447_886ca161-2173-49f8-a3fe-a86bb44a324f!
	I1204 19:53:35.769517       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6df663f4-aede-41c0-9e65-236b97d0f25b", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-153447_886ca161-2173-49f8-a3fe-a86bb44a324f became leader
	I1204 19:53:35.869508       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-153447_886ca161-2173-49f8-a3fe-a86bb44a324f!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-153447 -n addons-153447
helpers_test.go:261: (dbg) Run:  kubectl --context addons-153447 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-55bf9c44b4-lvjlq ingress-nginx-admission-create-x7j5f ingress-nginx-admission-patch-hbf4j
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-153447 describe pod hello-world-app-55bf9c44b4-lvjlq ingress-nginx-admission-create-x7j5f ingress-nginx-admission-patch-hbf4j
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-153447 describe pod hello-world-app-55bf9c44b4-lvjlq ingress-nginx-admission-create-x7j5f ingress-nginx-admission-patch-hbf4j: exit status 1 (65.407967ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-55bf9c44b4-lvjlq
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-153447/192.168.39.11
	Start Time:       Wed, 04 Dec 2024 19:58:48 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=55bf9c44b4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-55bf9c44b4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t2mbc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-t2mbc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-55bf9c44b4-lvjlq to addons-153447
	  Normal  Pulling    1s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-x7j5f" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-hbf4j" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-153447 describe pod hello-world-app-55bf9c44b4-lvjlq ingress-nginx-admission-create-x7j5f ingress-nginx-admission-patch-hbf4j: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-153447 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-153447 addons disable ingress-dns --alsologtostderr -v=1: (1.048129847s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-153447 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-153447 addons disable ingress --alsologtostderr -v=1: (7.65996167s)
--- FAIL: TestAddons/parallel/Ingress (153.87s)
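For context on the post-mortem above: the non-running pods are enumerated with a kubectl field selector (`status.phase!=Running`), and the two ingress-nginx admission pods listed there had already been cleaned up by their Jobs, which is why the follow-up `describe pod` call exits non-zero. A minimal client-go sketch of the same listing (hypothetical helper code under assumed defaults, not the harness's own helpers_test.go) might look like:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig (context selection omitted for brevity).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Same filter the harness passes to kubectl: every pod whose phase is not Running.
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}
```

Describing pods that no longer exist (here the completed admission Job pods) is what turns the otherwise informational `kubectl describe pod` step into an exit status 1.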

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (332.43s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 2.134807ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-gpnml" [3e5584b2-5c1f-4acb-93d3-614ecdb4794c] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004394648s
addons_test.go:402: (dbg) Run:  kubectl --context addons-153447 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-153447 top pods -n kube-system: exit status 1 (71.124584ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-7r8d9, age: 2m26.09280369s

                                                
                                                
** /stderr **
I1204 19:55:56.094596   17743 retry.go:31] will retry after 3.446613438s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-153447 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-153447 top pods -n kube-system: exit status 1 (66.614111ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-7r8d9, age: 2m29.607016532s

                                                
                                                
** /stderr **
I1204 19:55:59.608796   17743 retry.go:31] will retry after 5.97896665s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-153447 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-153447 top pods -n kube-system: exit status 1 (63.485498ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-7r8d9, age: 2m35.650169231s

                                                
                                                
** /stderr **
I1204 19:56:05.651949   17743 retry.go:31] will retry after 3.434519902s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-153447 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-153447 top pods -n kube-system: exit status 1 (122.81546ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-7r8d9, age: 2m39.20846415s

                                                
                                                
** /stderr **
I1204 19:56:09.210140   17743 retry.go:31] will retry after 7.020441518s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-153447 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-153447 top pods -n kube-system: exit status 1 (64.80584ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-7r8d9, age: 2m46.294414218s

                                                
                                                
** /stderr **
I1204 19:56:16.296148   17743 retry.go:31] will retry after 12.841477223s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-153447 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-153447 top pods -n kube-system: exit status 1 (82.063432ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-7r8d9, age: 2m59.217557512s

                                                
                                                
** /stderr **
I1204 19:56:29.220174   17743 retry.go:31] will retry after 19.475677349s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-153447 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-153447 top pods -n kube-system: exit status 1 (62.405974ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-7r8d9, age: 3m18.762124777s

                                                
                                                
** /stderr **
I1204 19:56:48.763955   17743 retry.go:31] will retry after 39.083549224s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-153447 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-153447 top pods -n kube-system: exit status 1 (66.554126ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-7r8d9, age: 3m57.916392374s

** /stderr **
I1204 19:57:27.918190   17743 retry.go:31] will retry after 51.484858923s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-153447 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-153447 top pods -n kube-system: exit status 1 (63.166963ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-7r8d9, age: 4m49.470987772s

** /stderr **
I1204 19:58:19.472674   17743 retry.go:31] will retry after 1m5.581433971s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-153447 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-153447 top pods -n kube-system: exit status 1 (64.371758ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-7r8d9, age: 5m55.118473689s

** /stderr **
I1204 19:59:25.120412   17743 retry.go:31] will retry after 52.226205806s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-153447 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-153447 top pods -n kube-system: exit status 1 (62.576237ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-7r8d9, age: 6m47.410501595s

** /stderr **
I1204 20:00:17.412420   17743 retry.go:31] will retry after 1m3.381818133s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-153447 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-153447 top pods -n kube-system: exit status 1 (65.935391ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-7r8d9, age: 7m50.865757559s

** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
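The sequence above shows the test's metrics probe backing off for roughly thirteen minutes before giving up. Below is a minimal, illustrative Go sketch of that retry pattern; it is not the actual addons_test.go/retry.go code, the profile name is taken from the log, and the backoff growth and overall budget are assumptions for illustration only.

	// metricscheck.go - hypothetical sketch of the retry loop seen in the log above.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(13 * time.Minute) // assumed overall budget
		wait := 5 * time.Second                      // assumed starting delay

		for {
			// Same command the test runs: kubectl top pods against the addons profile.
			out, err := exec.Command("kubectl", "--context", "addons-153447",
				"top", "pods", "-n", "kube-system").CombinedOutput()
			if err == nil {
				fmt.Printf("metrics available:\n%s", out)
				return
			}
			if time.Now().After(deadline) {
				fmt.Printf("failed checking metric server: %v\n%s", err, out)
				return
			}
			fmt.Printf("will retry after %s: %v\n", wait, err)
			time.Sleep(wait)
			wait += wait / 2 // grow the delay, loosely mirroring the increasing intervals above
		}
	}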
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-153447 -n addons-153447
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-153447 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-153447 logs -n 25: (1.234620077s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-833018                                                                     | download-only-833018 | jenkins | v1.34.0 | 04 Dec 24 19:52 UTC | 04 Dec 24 19:52 UTC |
	| delete  | -p download-only-079944                                                                     | download-only-079944 | jenkins | v1.34.0 | 04 Dec 24 19:52 UTC | 04 Dec 24 19:52 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-214166 | jenkins | v1.34.0 | 04 Dec 24 19:52 UTC |                     |
	|         | binary-mirror-214166                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:43213                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-214166                                                                     | binary-mirror-214166 | jenkins | v1.34.0 | 04 Dec 24 19:52 UTC | 04 Dec 24 19:52 UTC |
	| addons  | enable dashboard -p                                                                         | addons-153447        | jenkins | v1.34.0 | 04 Dec 24 19:52 UTC |                     |
	|         | addons-153447                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-153447        | jenkins | v1.34.0 | 04 Dec 24 19:52 UTC |                     |
	|         | addons-153447                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-153447 --wait=true                                                                | addons-153447        | jenkins | v1.34.0 | 04 Dec 24 19:52 UTC | 04 Dec 24 19:54 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-153447 addons disable                                                                | addons-153447        | jenkins | v1.34.0 | 04 Dec 24 19:54 UTC | 04 Dec 24 19:54 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-153447 addons disable                                                                | addons-153447        | jenkins | v1.34.0 | 04 Dec 24 19:55 UTC | 04 Dec 24 19:55 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-153447 addons disable                                                                | addons-153447        | jenkins | v1.34.0 | 04 Dec 24 19:55 UTC | 04 Dec 24 19:55 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-153447 ssh cat                                                                       | addons-153447        | jenkins | v1.34.0 | 04 Dec 24 19:55 UTC | 04 Dec 24 19:55 UTC |
	|         | /opt/local-path-provisioner/pvc-753cdf45-d6df-4271-9413-533dc1761312_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-153447 addons disable                                                                | addons-153447        | jenkins | v1.34.0 | 04 Dec 24 19:55 UTC | 04 Dec 24 19:56 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-153447 ip                                                                            | addons-153447        | jenkins | v1.34.0 | 04 Dec 24 19:55 UTC | 04 Dec 24 19:55 UTC |
	| addons  | addons-153447 addons disable                                                                | addons-153447        | jenkins | v1.34.0 | 04 Dec 24 19:55 UTC | 04 Dec 24 19:55 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-153447 addons                                                                        | addons-153447        | jenkins | v1.34.0 | 04 Dec 24 19:55 UTC | 04 Dec 24 19:55 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-153447 addons                                                                        | addons-153447        | jenkins | v1.34.0 | 04 Dec 24 19:55 UTC | 04 Dec 24 19:55 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-153447        | jenkins | v1.34.0 | 04 Dec 24 19:55 UTC | 04 Dec 24 19:55 UTC |
	|         | -p addons-153447                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-153447 addons disable                                                                | addons-153447        | jenkins | v1.34.0 | 04 Dec 24 19:56 UTC | 04 Dec 24 19:56 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-153447 addons                                                                        | addons-153447        | jenkins | v1.34.0 | 04 Dec 24 19:56 UTC | 04 Dec 24 19:56 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-153447 addons                                                                        | addons-153447        | jenkins | v1.34.0 | 04 Dec 24 19:56 UTC | 04 Dec 24 19:56 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-153447 addons                                                                        | addons-153447        | jenkins | v1.34.0 | 04 Dec 24 19:56 UTC | 04 Dec 24 19:56 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-153447 ssh curl -s                                                                   | addons-153447        | jenkins | v1.34.0 | 04 Dec 24 19:56 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-153447 ip                                                                            | addons-153447        | jenkins | v1.34.0 | 04 Dec 24 19:58 UTC | 04 Dec 24 19:58 UTC |
	| addons  | addons-153447 addons disable                                                                | addons-153447        | jenkins | v1.34.0 | 04 Dec 24 19:58 UTC | 04 Dec 24 19:58 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-153447 addons disable                                                                | addons-153447        | jenkins | v1.34.0 | 04 Dec 24 19:58 UTC | 04 Dec 24 19:58 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 19:52:46
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 19:52:46.271292   18382 out.go:345] Setting OutFile to fd 1 ...
	I1204 19:52:46.271438   18382 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 19:52:46.271448   18382 out.go:358] Setting ErrFile to fd 2...
	I1204 19:52:46.271453   18382 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 19:52:46.271635   18382 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19985-10581/.minikube/bin
	I1204 19:52:46.272228   18382 out.go:352] Setting JSON to false
	I1204 19:52:46.273037   18382 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2116,"bootTime":1733339850,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 19:52:46.273139   18382 start.go:139] virtualization: kvm guest
	I1204 19:52:46.275218   18382 out.go:177] * [addons-153447] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1204 19:52:46.276477   18382 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 19:52:46.276483   18382 notify.go:220] Checking for updates...
	I1204 19:52:46.277641   18382 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 19:52:46.278788   18382 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 19:52:46.279951   18382 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 19:52:46.281121   18382 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1204 19:52:46.282202   18382 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 19:52:46.283537   18382 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 19:52:46.316111   18382 out.go:177] * Using the kvm2 driver based on user configuration
	I1204 19:52:46.317187   18382 start.go:297] selected driver: kvm2
	I1204 19:52:46.317199   18382 start.go:901] validating driver "kvm2" against <nil>
	I1204 19:52:46.317209   18382 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 19:52:46.317876   18382 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 19:52:46.317947   18382 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19985-10581/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1204 19:52:46.332219   18382 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1204 19:52:46.332270   18382 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 19:52:46.332545   18382 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 19:52:46.332575   18382 cni.go:84] Creating CNI manager for ""
	I1204 19:52:46.332612   18382 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 19:52:46.332620   18382 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1204 19:52:46.332662   18382 start.go:340] cluster config:
	{Name:addons-153447 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-153447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 19:52:46.332753   18382 iso.go:125] acquiring lock: {Name:mk5fb0f3f6da76e6cd812291a551e1592ef2c232 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 19:52:46.334386   18382 out.go:177] * Starting "addons-153447" primary control-plane node in "addons-153447" cluster
	I1204 19:52:46.335735   18382 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 19:52:46.335771   18382 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1204 19:52:46.335780   18382 cache.go:56] Caching tarball of preloaded images
	I1204 19:52:46.335849   18382 preload.go:172] Found /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 19:52:46.335859   18382 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 19:52:46.336145   18382 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/config.json ...
	I1204 19:52:46.336164   18382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/config.json: {Name:mk74fe767c26e98e973ca64c19eab9a9a25d2dcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 19:52:46.336275   18382 start.go:360] acquireMachinesLock for addons-153447: {Name:mkf124e8b45170ae95981b24944344de6899c5b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 19:52:46.336317   18382 start.go:364] duration metric: took 30.06µs to acquireMachinesLock for "addons-153447"
	I1204 19:52:46.336334   18382 start.go:93] Provisioning new machine with config: &{Name:addons-153447 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-153447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 19:52:46.336383   18382 start.go:125] createHost starting for "" (driver="kvm2")
	I1204 19:52:46.338364   18382 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1204 19:52:46.338505   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:52:46.338546   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:52:46.352238   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40127
	I1204 19:52:46.352736   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:52:46.353273   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:52:46.353294   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:52:46.353664   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:52:46.353860   18382 main.go:141] libmachine: (addons-153447) Calling .GetMachineName
	I1204 19:52:46.354017   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:52:46.354151   18382 start.go:159] libmachine.API.Create for "addons-153447" (driver="kvm2")
	I1204 19:52:46.354222   18382 client.go:168] LocalClient.Create starting
	I1204 19:52:46.354258   18382 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem
	I1204 19:52:46.466800   18382 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem
	I1204 19:52:46.734150   18382 main.go:141] libmachine: Running pre-create checks...
	I1204 19:52:46.734181   18382 main.go:141] libmachine: (addons-153447) Calling .PreCreateCheck
	I1204 19:52:46.734684   18382 main.go:141] libmachine: (addons-153447) Calling .GetConfigRaw
	I1204 19:52:46.735098   18382 main.go:141] libmachine: Creating machine...
	I1204 19:52:46.735113   18382 main.go:141] libmachine: (addons-153447) Calling .Create
	I1204 19:52:46.735310   18382 main.go:141] libmachine: (addons-153447) Creating KVM machine...
	I1204 19:52:46.736450   18382 main.go:141] libmachine: (addons-153447) DBG | found existing default KVM network
	I1204 19:52:46.737145   18382 main.go:141] libmachine: (addons-153447) DBG | I1204 19:52:46.737011   18404 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211f0}
	I1204 19:52:46.737200   18382 main.go:141] libmachine: (addons-153447) DBG | created network xml: 
	I1204 19:52:46.737219   18382 main.go:141] libmachine: (addons-153447) DBG | <network>
	I1204 19:52:46.737230   18382 main.go:141] libmachine: (addons-153447) DBG |   <name>mk-addons-153447</name>
	I1204 19:52:46.737248   18382 main.go:141] libmachine: (addons-153447) DBG |   <dns enable='no'/>
	I1204 19:52:46.737260   18382 main.go:141] libmachine: (addons-153447) DBG |   
	I1204 19:52:46.737273   18382 main.go:141] libmachine: (addons-153447) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1204 19:52:46.737286   18382 main.go:141] libmachine: (addons-153447) DBG |     <dhcp>
	I1204 19:52:46.737298   18382 main.go:141] libmachine: (addons-153447) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1204 19:52:46.737309   18382 main.go:141] libmachine: (addons-153447) DBG |     </dhcp>
	I1204 19:52:46.737319   18382 main.go:141] libmachine: (addons-153447) DBG |   </ip>
	I1204 19:52:46.737328   18382 main.go:141] libmachine: (addons-153447) DBG |   
	I1204 19:52:46.737339   18382 main.go:141] libmachine: (addons-153447) DBG | </network>
	I1204 19:52:46.737352   18382 main.go:141] libmachine: (addons-153447) DBG | 
	I1204 19:52:46.742677   18382 main.go:141] libmachine: (addons-153447) DBG | trying to create private KVM network mk-addons-153447 192.168.39.0/24...
	I1204 19:52:46.805775   18382 main.go:141] libmachine: (addons-153447) DBG | private KVM network mk-addons-153447 192.168.39.0/24 created
	I1204 19:52:46.805812   18382 main.go:141] libmachine: (addons-153447) DBG | I1204 19:52:46.805758   18404 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 19:52:46.805838   18382 main.go:141] libmachine: (addons-153447) Setting up store path in /home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447 ...
	I1204 19:52:46.805858   18382 main.go:141] libmachine: (addons-153447) Building disk image from file:///home/jenkins/minikube-integration/19985-10581/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1204 19:52:46.805882   18382 main.go:141] libmachine: (addons-153447) Downloading /home/jenkins/minikube-integration/19985-10581/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19985-10581/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1204 19:52:47.068964   18382 main.go:141] libmachine: (addons-153447) DBG | I1204 19:52:47.068807   18404 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa...
	I1204 19:52:47.265987   18382 main.go:141] libmachine: (addons-153447) DBG | I1204 19:52:47.265811   18404 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/addons-153447.rawdisk...
	I1204 19:52:47.266025   18382 main.go:141] libmachine: (addons-153447) DBG | Writing magic tar header
	I1204 19:52:47.266045   18382 main.go:141] libmachine: (addons-153447) DBG | Writing SSH key tar header
	I1204 19:52:47.266056   18382 main.go:141] libmachine: (addons-153447) DBG | I1204 19:52:47.265968   18404 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447 ...
	I1204 19:52:47.266105   18382 main.go:141] libmachine: (addons-153447) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447
	I1204 19:52:47.266130   18382 main.go:141] libmachine: (addons-153447) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube/machines
	I1204 19:52:47.266144   18382 main.go:141] libmachine: (addons-153447) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447 (perms=drwx------)
	I1204 19:52:47.266164   18382 main.go:141] libmachine: (addons-153447) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube/machines (perms=drwxr-xr-x)
	I1204 19:52:47.266174   18382 main.go:141] libmachine: (addons-153447) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube (perms=drwxr-xr-x)
	I1204 19:52:47.266185   18382 main.go:141] libmachine: (addons-153447) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581 (perms=drwxrwxr-x)
	I1204 19:52:47.266213   18382 main.go:141] libmachine: (addons-153447) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1204 19:52:47.266238   18382 main.go:141] libmachine: (addons-153447) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1204 19:52:47.266249   18382 main.go:141] libmachine: (addons-153447) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 19:52:47.266260   18382 main.go:141] libmachine: (addons-153447) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581
	I1204 19:52:47.266270   18382 main.go:141] libmachine: (addons-153447) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1204 19:52:47.266280   18382 main.go:141] libmachine: (addons-153447) DBG | Checking permissions on dir: /home/jenkins
	I1204 19:52:47.266289   18382 main.go:141] libmachine: (addons-153447) DBG | Checking permissions on dir: /home
	I1204 19:52:47.266296   18382 main.go:141] libmachine: (addons-153447) DBG | Skipping /home - not owner
	I1204 19:52:47.266306   18382 main.go:141] libmachine: (addons-153447) Creating domain...
	I1204 19:52:47.267207   18382 main.go:141] libmachine: (addons-153447) define libvirt domain using xml: 
	I1204 19:52:47.267240   18382 main.go:141] libmachine: (addons-153447) <domain type='kvm'>
	I1204 19:52:47.267251   18382 main.go:141] libmachine: (addons-153447)   <name>addons-153447</name>
	I1204 19:52:47.267267   18382 main.go:141] libmachine: (addons-153447)   <memory unit='MiB'>4000</memory>
	I1204 19:52:47.267276   18382 main.go:141] libmachine: (addons-153447)   <vcpu>2</vcpu>
	I1204 19:52:47.267285   18382 main.go:141] libmachine: (addons-153447)   <features>
	I1204 19:52:47.267294   18382 main.go:141] libmachine: (addons-153447)     <acpi/>
	I1204 19:52:47.267303   18382 main.go:141] libmachine: (addons-153447)     <apic/>
	I1204 19:52:47.267311   18382 main.go:141] libmachine: (addons-153447)     <pae/>
	I1204 19:52:47.267316   18382 main.go:141] libmachine: (addons-153447)     
	I1204 19:52:47.267321   18382 main.go:141] libmachine: (addons-153447)   </features>
	I1204 19:52:47.267326   18382 main.go:141] libmachine: (addons-153447)   <cpu mode='host-passthrough'>
	I1204 19:52:47.267333   18382 main.go:141] libmachine: (addons-153447)   
	I1204 19:52:47.267339   18382 main.go:141] libmachine: (addons-153447)   </cpu>
	I1204 19:52:47.267346   18382 main.go:141] libmachine: (addons-153447)   <os>
	I1204 19:52:47.267351   18382 main.go:141] libmachine: (addons-153447)     <type>hvm</type>
	I1204 19:52:47.267396   18382 main.go:141] libmachine: (addons-153447)     <boot dev='cdrom'/>
	I1204 19:52:47.267420   18382 main.go:141] libmachine: (addons-153447)     <boot dev='hd'/>
	I1204 19:52:47.267430   18382 main.go:141] libmachine: (addons-153447)     <bootmenu enable='no'/>
	I1204 19:52:47.267439   18382 main.go:141] libmachine: (addons-153447)   </os>
	I1204 19:52:47.267447   18382 main.go:141] libmachine: (addons-153447)   <devices>
	I1204 19:52:47.267456   18382 main.go:141] libmachine: (addons-153447)     <disk type='file' device='cdrom'>
	I1204 19:52:47.267470   18382 main.go:141] libmachine: (addons-153447)       <source file='/home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/boot2docker.iso'/>
	I1204 19:52:47.267482   18382 main.go:141] libmachine: (addons-153447)       <target dev='hdc' bus='scsi'/>
	I1204 19:52:47.267545   18382 main.go:141] libmachine: (addons-153447)       <readonly/>
	I1204 19:52:47.267569   18382 main.go:141] libmachine: (addons-153447)     </disk>
	I1204 19:52:47.267576   18382 main.go:141] libmachine: (addons-153447)     <disk type='file' device='disk'>
	I1204 19:52:47.267583   18382 main.go:141] libmachine: (addons-153447)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1204 19:52:47.267593   18382 main.go:141] libmachine: (addons-153447)       <source file='/home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/addons-153447.rawdisk'/>
	I1204 19:52:47.267600   18382 main.go:141] libmachine: (addons-153447)       <target dev='hda' bus='virtio'/>
	I1204 19:52:47.267606   18382 main.go:141] libmachine: (addons-153447)     </disk>
	I1204 19:52:47.267613   18382 main.go:141] libmachine: (addons-153447)     <interface type='network'>
	I1204 19:52:47.267619   18382 main.go:141] libmachine: (addons-153447)       <source network='mk-addons-153447'/>
	I1204 19:52:47.267625   18382 main.go:141] libmachine: (addons-153447)       <model type='virtio'/>
	I1204 19:52:47.267630   18382 main.go:141] libmachine: (addons-153447)     </interface>
	I1204 19:52:47.267635   18382 main.go:141] libmachine: (addons-153447)     <interface type='network'>
	I1204 19:52:47.267641   18382 main.go:141] libmachine: (addons-153447)       <source network='default'/>
	I1204 19:52:47.267647   18382 main.go:141] libmachine: (addons-153447)       <model type='virtio'/>
	I1204 19:52:47.267660   18382 main.go:141] libmachine: (addons-153447)     </interface>
	I1204 19:52:47.267672   18382 main.go:141] libmachine: (addons-153447)     <serial type='pty'>
	I1204 19:52:47.267681   18382 main.go:141] libmachine: (addons-153447)       <target port='0'/>
	I1204 19:52:47.267690   18382 main.go:141] libmachine: (addons-153447)     </serial>
	I1204 19:52:47.267699   18382 main.go:141] libmachine: (addons-153447)     <console type='pty'>
	I1204 19:52:47.267714   18382 main.go:141] libmachine: (addons-153447)       <target type='serial' port='0'/>
	I1204 19:52:47.267727   18382 main.go:141] libmachine: (addons-153447)     </console>
	I1204 19:52:47.267738   18382 main.go:141] libmachine: (addons-153447)     <rng model='virtio'>
	I1204 19:52:47.267751   18382 main.go:141] libmachine: (addons-153447)       <backend model='random'>/dev/random</backend>
	I1204 19:52:47.267762   18382 main.go:141] libmachine: (addons-153447)     </rng>
	I1204 19:52:47.267773   18382 main.go:141] libmachine: (addons-153447)     
	I1204 19:52:47.267778   18382 main.go:141] libmachine: (addons-153447)     
	I1204 19:52:47.267786   18382 main.go:141] libmachine: (addons-153447)   </devices>
	I1204 19:52:47.267796   18382 main.go:141] libmachine: (addons-153447) </domain>
	I1204 19:52:47.267802   18382 main.go:141] libmachine: (addons-153447) 
	I1204 19:52:47.273713   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:67:c5:84 in network default
	I1204 19:52:47.274172   18382 main.go:141] libmachine: (addons-153447) Ensuring networks are active...
	I1204 19:52:47.274194   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:52:47.274801   18382 main.go:141] libmachine: (addons-153447) Ensuring network default is active
	I1204 19:52:47.275151   18382 main.go:141] libmachine: (addons-153447) Ensuring network mk-addons-153447 is active
	I1204 19:52:47.275788   18382 main.go:141] libmachine: (addons-153447) Getting domain xml...
	I1204 19:52:47.276511   18382 main.go:141] libmachine: (addons-153447) Creating domain...
	I1204 19:52:48.677064   18382 main.go:141] libmachine: (addons-153447) Waiting to get IP...
	I1204 19:52:48.677954   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:52:48.678355   18382 main.go:141] libmachine: (addons-153447) DBG | unable to find current IP address of domain addons-153447 in network mk-addons-153447
	I1204 19:52:48.678382   18382 main.go:141] libmachine: (addons-153447) DBG | I1204 19:52:48.678316   18404 retry.go:31] will retry after 220.610561ms: waiting for machine to come up
	I1204 19:52:48.900700   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:52:48.901137   18382 main.go:141] libmachine: (addons-153447) DBG | unable to find current IP address of domain addons-153447 in network mk-addons-153447
	I1204 19:52:48.901158   18382 main.go:141] libmachine: (addons-153447) DBG | I1204 19:52:48.901108   18404 retry.go:31] will retry after 253.032712ms: waiting for machine to come up
	I1204 19:52:49.155327   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:52:49.155667   18382 main.go:141] libmachine: (addons-153447) DBG | unable to find current IP address of domain addons-153447 in network mk-addons-153447
	I1204 19:52:49.155707   18382 main.go:141] libmachine: (addons-153447) DBG | I1204 19:52:49.155644   18404 retry.go:31] will retry after 305.740588ms: waiting for machine to come up
	I1204 19:52:49.463331   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:52:49.463877   18382 main.go:141] libmachine: (addons-153447) DBG | unable to find current IP address of domain addons-153447 in network mk-addons-153447
	I1204 19:52:49.463898   18382 main.go:141] libmachine: (addons-153447) DBG | I1204 19:52:49.463842   18404 retry.go:31] will retry after 387.143331ms: waiting for machine to come up
	I1204 19:52:49.852222   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:52:49.852653   18382 main.go:141] libmachine: (addons-153447) DBG | unable to find current IP address of domain addons-153447 in network mk-addons-153447
	I1204 19:52:49.852684   18382 main.go:141] libmachine: (addons-153447) DBG | I1204 19:52:49.852592   18404 retry.go:31] will retry after 582.426176ms: waiting for machine to come up
	I1204 19:52:50.436277   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:52:50.436736   18382 main.go:141] libmachine: (addons-153447) DBG | unable to find current IP address of domain addons-153447 in network mk-addons-153447
	I1204 19:52:50.436768   18382 main.go:141] libmachine: (addons-153447) DBG | I1204 19:52:50.436676   18404 retry.go:31] will retry after 748.274759ms: waiting for machine to come up
	I1204 19:52:51.186077   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:52:51.186537   18382 main.go:141] libmachine: (addons-153447) DBG | unable to find current IP address of domain addons-153447 in network mk-addons-153447
	I1204 19:52:51.186575   18382 main.go:141] libmachine: (addons-153447) DBG | I1204 19:52:51.186499   18404 retry.go:31] will retry after 956.999473ms: waiting for machine to come up
	I1204 19:52:52.145482   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:52:52.145876   18382 main.go:141] libmachine: (addons-153447) DBG | unable to find current IP address of domain addons-153447 in network mk-addons-153447
	I1204 19:52:52.145911   18382 main.go:141] libmachine: (addons-153447) DBG | I1204 19:52:52.145818   18404 retry.go:31] will retry after 1.355766127s: waiting for machine to come up
	I1204 19:52:53.502894   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:52:53.503400   18382 main.go:141] libmachine: (addons-153447) DBG | unable to find current IP address of domain addons-153447 in network mk-addons-153447
	I1204 19:52:53.503427   18382 main.go:141] libmachine: (addons-153447) DBG | I1204 19:52:53.503344   18404 retry.go:31] will retry after 1.611102605s: waiting for machine to come up
	I1204 19:52:55.117027   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:52:55.117459   18382 main.go:141] libmachine: (addons-153447) DBG | unable to find current IP address of domain addons-153447 in network mk-addons-153447
	I1204 19:52:55.117483   18382 main.go:141] libmachine: (addons-153447) DBG | I1204 19:52:55.117409   18404 retry.go:31] will retry after 2.220438115s: waiting for machine to come up
	I1204 19:52:57.339784   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:52:57.340272   18382 main.go:141] libmachine: (addons-153447) DBG | unable to find current IP address of domain addons-153447 in network mk-addons-153447
	I1204 19:52:57.340305   18382 main.go:141] libmachine: (addons-153447) DBG | I1204 19:52:57.340212   18404 retry.go:31] will retry after 2.81848192s: waiting for machine to come up
	I1204 19:53:00.159900   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:00.160301   18382 main.go:141] libmachine: (addons-153447) DBG | unable to find current IP address of domain addons-153447 in network mk-addons-153447
	I1204 19:53:00.160330   18382 main.go:141] libmachine: (addons-153447) DBG | I1204 19:53:00.160279   18404 retry.go:31] will retry after 3.554617985s: waiting for machine to come up
	I1204 19:53:03.717404   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:03.717809   18382 main.go:141] libmachine: (addons-153447) DBG | unable to find current IP address of domain addons-153447 in network mk-addons-153447
	I1204 19:53:03.717836   18382 main.go:141] libmachine: (addons-153447) DBG | I1204 19:53:03.717785   18404 retry.go:31] will retry after 3.395715903s: waiting for machine to come up
	I1204 19:53:07.114926   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:07.115414   18382 main.go:141] libmachine: (addons-153447) Found IP for machine: 192.168.39.11
	I1204 19:53:07.115434   18382 main.go:141] libmachine: (addons-153447) Reserving static IP address...
	I1204 19:53:07.115445   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has current primary IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:07.115776   18382 main.go:141] libmachine: (addons-153447) DBG | unable to find host DHCP lease matching {name: "addons-153447", mac: "52:54:00:39:ce:2c", ip: "192.168.39.11"} in network mk-addons-153447
	I1204 19:53:07.283246   18382 main.go:141] libmachine: (addons-153447) DBG | Getting to WaitForSSH function...
	I1204 19:53:07.283280   18382 main.go:141] libmachine: (addons-153447) Reserved static IP address: 192.168.39.11
	I1204 19:53:07.283294   18382 main.go:141] libmachine: (addons-153447) Waiting for SSH to be available...
	I1204 19:53:07.285798   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:07.286231   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:minikube Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:07.286261   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:07.286573   18382 main.go:141] libmachine: (addons-153447) DBG | Using SSH client type: external
	I1204 19:53:07.286588   18382 main.go:141] libmachine: (addons-153447) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa (-rw-------)
	I1204 19:53:07.286607   18382 main.go:141] libmachine: (addons-153447) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 19:53:07.286615   18382 main.go:141] libmachine: (addons-153447) DBG | About to run SSH command:
	I1204 19:53:07.286625   18382 main.go:141] libmachine: (addons-153447) DBG | exit 0
	I1204 19:53:07.419447   18382 main.go:141] libmachine: (addons-153447) DBG | SSH cmd err, output: <nil>: 
	I1204 19:53:07.419713   18382 main.go:141] libmachine: (addons-153447) KVM machine creation complete!
	I1204 19:53:07.420082   18382 main.go:141] libmachine: (addons-153447) Calling .GetConfigRaw
	I1204 19:53:07.426416   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:07.426639   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:07.426807   18382 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1204 19:53:07.426823   18382 main.go:141] libmachine: (addons-153447) Calling .GetState
	I1204 19:53:07.427988   18382 main.go:141] libmachine: Detecting operating system of created instance...
	I1204 19:53:07.428003   18382 main.go:141] libmachine: Waiting for SSH to be available...
	I1204 19:53:07.428011   18382 main.go:141] libmachine: Getting to WaitForSSH function...
	I1204 19:53:07.428019   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:07.430003   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:07.430401   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:07.430421   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:07.430570   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:07.430736   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:07.430884   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:07.431042   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:07.431228   18382 main.go:141] libmachine: Using SSH client type: native
	I1204 19:53:07.431445   18382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1204 19:53:07.431460   18382 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1204 19:53:07.538357   18382 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 19:53:07.538382   18382 main.go:141] libmachine: Detecting the provisioner...
	I1204 19:53:07.538392   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:07.541199   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:07.541527   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:07.541553   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:07.541698   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:07.541875   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:07.542062   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:07.542219   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:07.542373   18382 main.go:141] libmachine: Using SSH client type: native
	I1204 19:53:07.542566   18382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1204 19:53:07.542578   18382 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1204 19:53:07.647935   18382 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1204 19:53:07.648023   18382 main.go:141] libmachine: found compatible host: buildroot
	I1204 19:53:07.648034   18382 main.go:141] libmachine: Provisioning with buildroot...
	I1204 19:53:07.648041   18382 main.go:141] libmachine: (addons-153447) Calling .GetMachineName
	I1204 19:53:07.648289   18382 buildroot.go:166] provisioning hostname "addons-153447"
	I1204 19:53:07.648309   18382 main.go:141] libmachine: (addons-153447) Calling .GetMachineName
	I1204 19:53:07.648469   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:07.650978   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:07.651336   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:07.651367   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:07.651542   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:07.651720   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:07.651903   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:07.652047   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:07.652216   18382 main.go:141] libmachine: Using SSH client type: native
	I1204 19:53:07.652387   18382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1204 19:53:07.652404   18382 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-153447 && echo "addons-153447" | sudo tee /etc/hostname
	I1204 19:53:07.772832   18382 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-153447
	
	I1204 19:53:07.772857   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:07.775354   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:07.775668   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:07.775696   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:07.775879   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:07.776022   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:07.776187   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:07.776330   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:07.776488   18382 main.go:141] libmachine: Using SSH client type: native
	I1204 19:53:07.776659   18382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1204 19:53:07.776675   18382 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-153447' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-153447/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-153447' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 19:53:07.891048   18382 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 19:53:07.891083   18382 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 19:53:07.891137   18382 buildroot.go:174] setting up certificates
	I1204 19:53:07.891157   18382 provision.go:84] configureAuth start
	I1204 19:53:07.891174   18382 main.go:141] libmachine: (addons-153447) Calling .GetMachineName
	I1204 19:53:07.891447   18382 main.go:141] libmachine: (addons-153447) Calling .GetIP
	I1204 19:53:07.894138   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:07.894446   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:07.894471   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:07.894631   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:07.896867   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:07.897245   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:07.897272   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:07.897443   18382 provision.go:143] copyHostCerts
	I1204 19:53:07.897523   18382 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 19:53:07.897659   18382 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 19:53:07.897741   18382 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 19:53:07.897811   18382 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.addons-153447 san=[127.0.0.1 192.168.39.11 addons-153447 localhost minikube]
	I1204 19:53:08.021702   18382 provision.go:177] copyRemoteCerts
	I1204 19:53:08.021779   18382 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 19:53:08.021808   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:08.024316   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:08.024626   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:08.024652   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:08.024834   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:08.025007   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:08.025132   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:08.025246   18382 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa Username:docker}
	I1204 19:53:08.108792   18382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 19:53:08.131717   18382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1204 19:53:08.153700   18382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1204 19:53:08.174745   18382 provision.go:87] duration metric: took 283.573935ms to configureAuth
	I1204 19:53:08.174770   18382 buildroot.go:189] setting minikube options for container-runtime
	I1204 19:53:08.174929   18382 config.go:182] Loaded profile config "addons-153447": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 19:53:08.175010   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:08.177445   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:08.177751   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:08.177777   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:08.177907   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:08.178081   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:08.178215   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:08.178330   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:08.178454   18382 main.go:141] libmachine: Using SSH client type: native
	I1204 19:53:08.178690   18382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1204 19:53:08.178709   18382 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 19:53:08.408937   18382 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 19:53:08.408970   18382 main.go:141] libmachine: Checking connection to Docker...
	I1204 19:53:08.408992   18382 main.go:141] libmachine: (addons-153447) Calling .GetURL
	I1204 19:53:08.410371   18382 main.go:141] libmachine: (addons-153447) DBG | Using libvirt version 6000000
	I1204 19:53:08.412390   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:08.412691   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:08.412714   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:08.412839   18382 main.go:141] libmachine: Docker is up and running!
	I1204 19:53:08.412852   18382 main.go:141] libmachine: Reticulating splines...
	I1204 19:53:08.412921   18382 client.go:171] duration metric: took 22.058627861s to LocalClient.Create
	I1204 19:53:08.412958   18382 start.go:167] duration metric: took 22.058809655s to libmachine.API.Create "addons-153447"
	I1204 19:53:08.412977   18382 start.go:293] postStartSetup for "addons-153447" (driver="kvm2")
	I1204 19:53:08.412992   18382 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 19:53:08.413014   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:08.413282   18382 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 19:53:08.413305   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:08.415344   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:08.415731   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:08.415749   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:08.415900   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:08.416054   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:08.416206   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:08.416317   18382 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa Username:docker}
	I1204 19:53:08.498234   18382 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 19:53:08.502169   18382 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 19:53:08.502191   18382 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 19:53:08.502248   18382 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 19:53:08.502271   18382 start.go:296] duration metric: took 89.286654ms for postStartSetup
	I1204 19:53:08.502301   18382 main.go:141] libmachine: (addons-153447) Calling .GetConfigRaw
	I1204 19:53:08.502792   18382 main.go:141] libmachine: (addons-153447) Calling .GetIP
	I1204 19:53:08.505073   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:08.505451   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:08.505465   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:08.505680   18382 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/config.json ...
	I1204 19:53:08.505852   18382 start.go:128] duration metric: took 22.169460096s to createHost
	I1204 19:53:08.505873   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:08.507934   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:08.508266   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:08.508296   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:08.508425   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:08.508606   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:08.508720   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:08.508850   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:08.508964   18382 main.go:141] libmachine: Using SSH client type: native
	I1204 19:53:08.509103   18382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1204 19:53:08.509119   18382 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 19:53:08.615973   18382 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733341988.586881895
	
	I1204 19:53:08.615999   18382 fix.go:216] guest clock: 1733341988.586881895
	I1204 19:53:08.616008   18382 fix.go:229] Guest: 2024-12-04 19:53:08.586881895 +0000 UTC Remote: 2024-12-04 19:53:08.505863098 +0000 UTC m=+22.270733940 (delta=81.018797ms)
	I1204 19:53:08.616051   18382 fix.go:200] guest clock delta is within tolerance: 81.018797ms
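
The fix.go lines above read the guest clock with "date +%s.%N", compare it to the host clock, and accept the drift when it is within tolerance (81ms here). A small sketch of that comparison; the one-second tolerance used below is an assumption for illustration, not minikube's exact threshold.

	// clockdelta.go: parse a guest "date +%s.%N" reading and check its drift
	// against the local clock. Sketch; the tolerance value is an assumption.
	package main

	import (
		"fmt"
		"math"
		"strconv"
		"time"
	)

	func guestTime(raw string) (time.Time, error) {
		secs, err := strconv.ParseFloat(raw, 64)
		if err != nil {
			return time.Time{}, err
		}
		sec := int64(secs)
		nsec := int64((secs - float64(sec)) * 1e9)
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := guestTime("1733341988.586881895") // value from the log above
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		const tolerance = time.Second // assumed tolerance for this sketch
		if math.Abs(float64(delta)) <= float64(tolerance) {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance, clock sync would be needed\n", delta)
		}
	}
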
	I1204 19:53:08.616057   18382 start.go:83] releasing machines lock for "addons-153447", held for 22.279731412s
	I1204 19:53:08.616082   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:08.616317   18382 main.go:141] libmachine: (addons-153447) Calling .GetIP
	I1204 19:53:08.619068   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:08.619337   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:08.619361   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:08.619505   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:08.620009   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:08.620157   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:08.620261   18382 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 19:53:08.620319   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:08.620348   18382 ssh_runner.go:195] Run: cat /version.json
	I1204 19:53:08.620370   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:08.622829   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:08.622856   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:08.623140   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:08.623169   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:08.623201   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:08.623217   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:08.623261   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:08.623444   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:08.623516   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:08.623591   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:08.623660   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:08.623722   18382 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa Username:docker}
	I1204 19:53:08.623772   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:08.623874   18382 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa Username:docker}
	I1204 19:53:08.722699   18382 ssh_runner.go:195] Run: systemctl --version
	I1204 19:53:08.728694   18382 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 19:53:08.898480   18382 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 19:53:08.904661   18382 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 19:53:08.904726   18382 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 19:53:08.919600   18382 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 19:53:08.919628   18382 start.go:495] detecting cgroup driver to use...
	I1204 19:53:08.919688   18382 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 19:53:08.935972   18382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 19:53:08.949389   18382 docker.go:217] disabling cri-docker service (if available) ...
	I1204 19:53:08.949471   18382 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 19:53:08.963034   18382 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 19:53:08.975988   18382 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 19:53:09.088160   18382 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 19:53:09.221122   18382 docker.go:233] disabling docker service ...
	I1204 19:53:09.221201   18382 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 19:53:09.235168   18382 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 19:53:09.247641   18382 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 19:53:09.386188   18382 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 19:53:09.510658   18382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 19:53:09.524768   18382 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 19:53:09.542553   18382 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 19:53:09.542636   18382 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 19:53:09.553256   18382 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 19:53:09.553336   18382 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 19:53:09.563562   18382 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 19:53:09.573318   18382 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 19:53:09.583217   18382 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 19:53:09.593199   18382 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 19:53:09.603064   18382 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 19:53:09.619139   18382 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
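
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf: set the pause image to registry.k8s.io/pause:3.10, switch cgroup_manager to cgroupfs, re-add conmon_cgroup = "pod", and add net.ipv4.ip_unprivileged_port_start=0 under default_sysctls. A sketch that applies the same rewrites in-memory with Go regexps instead of sed; the file path and values come from the log, the rest is illustrative.

	// crioconf.go: apply the same edits the sed commands above make to
	// /etc/crio/crio.conf.d/02-crio.conf, in memory. Illustrative sketch only.
	package main

	import (
		"fmt"
		"regexp"
	)

	func rewriteCrioConf(conf string) string {
		pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)

		cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
		conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

		// Drop any existing conmon_cgroup line, then add one after cgroup_manager,
		// mirroring the two sed invocations in the log.
		drop := regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`)
		conf = drop.ReplaceAllString(conf, "")
		addConmon := regexp.MustCompile(`(?m)^cgroup_manager = .*$`)
		conf = addConmon.ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")

		// Allow pods to bind low ports, as the default_sysctls edit does.
		sysctls := regexp.MustCompile(`(?m)^default_sysctls *= *\[`)
		conf = sysctls.ReplaceAllString(conf, "$0\n  \"net.ipv4.ip_unprivileged_port_start=0\",")
		return conf
	}

	func main() {
		sample := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\ndefault_sysctls = [\n]\n"
		fmt.Print(rewriteCrioConf(sample))
	}
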
	I1204 19:53:09.629164   18382 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 19:53:09.637905   18382 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 19:53:09.637962   18382 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 19:53:09.649712   18382 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 19:53:09.658710   18382 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 19:53:09.769867   18382 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 19:53:09.855232   18382 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 19:53:09.855349   18382 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 19:53:09.860216   18382 start.go:563] Will wait 60s for crictl version
	I1204 19:53:09.860283   18382 ssh_runner.go:195] Run: which crictl
	I1204 19:53:09.863782   18382 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 19:53:09.901750   18382 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 19:53:09.901869   18382 ssh_runner.go:195] Run: crio --version
	I1204 19:53:09.928042   18382 ssh_runner.go:195] Run: crio --version
	I1204 19:53:09.956945   18382 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 19:53:09.958112   18382 main.go:141] libmachine: (addons-153447) Calling .GetIP
	I1204 19:53:09.961348   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:09.961736   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:09.961764   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:09.961944   18382 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1204 19:53:09.965808   18382 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
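
The two commands above first grep /etc/hosts for a host.minikube.internal entry and, if it is missing, rewrite the file by filtering out any stale entry and appending "192.168.39.1	host.minikube.internal". A sketch of that ensure-entry step written as if run directly on the guest; the helper name and simplified error handling are assumptions.

	// hostsentry.go: ensure /etc/hosts maps host.minikube.internal to the
	// gateway IP, mirroring the grep + rewrite commands in the log. Sketch only.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func ensureHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Drop any previous line for this host name, like the "grep -v" filter.
			if strings.HasSuffix(line, "\t"+name) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		// Values from the log; in minikube this runs over SSH with sudo.
		if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
			fmt.Println(err)
		}
	}
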
	I1204 19:53:09.978196   18382 kubeadm.go:883] updating cluster {Name:addons-153447 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-153447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 19:53:09.978342   18382 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 19:53:09.978391   18382 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 19:53:10.008735   18382 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1204 19:53:10.008809   18382 ssh_runner.go:195] Run: which lz4
	I1204 19:53:10.012532   18382 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1204 19:53:10.016331   18382 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1204 19:53:10.016358   18382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1204 19:53:11.140810   18382 crio.go:462] duration metric: took 1.128301132s to copy over tarball
	I1204 19:53:11.140879   18382 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1204 19:53:13.226453   18382 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.085537454s)
	I1204 19:53:13.226490   18382 crio.go:469] duration metric: took 2.085648381s to extract the tarball
	I1204 19:53:13.226502   18382 ssh_runner.go:146] rm: /preloaded.tar.lz4
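
The sequence above checks whether /preloaded.tar.lz4 already exists on the guest, copies the ~392 MB image preload over when it does not, extracts it into /var with an lz4-filtered tar (about 2s here), and finally removes the tarball. A sketch of that check-extract-clean sequence as if run directly on the guest; it assumes the lz4 binary is installed, as the logged tar flags do.

	// preload.go: extract the image preload tarball the way the logged commands
	// do (tar with an lz4 filter into /var), then remove it. Sketch only,
	// written as if executed directly on the guest.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	func main() {
		const tarball = "/preloaded.tar.lz4"
		if _, err := os.Stat(tarball); err != nil {
			fmt.Printf("preload not present (%v); it would be copied over first\n", err)
			return
		}
		start := time.Now()
		cmd := exec.Command("tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball)
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("extract failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("extracted in %s\n", time.Since(start))
		_ = os.Remove(tarball) // matches the final "rm: /preloaded.tar.lz4" step
	}
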
	I1204 19:53:13.262820   18382 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 19:53:13.303573   18382 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 19:53:13.303598   18382 cache_images.go:84] Images are preloaded, skipping loading
	I1204 19:53:13.303606   18382 kubeadm.go:934] updating node { 192.168.39.11 8443 v1.31.2 crio true true} ...
	I1204 19:53:13.303707   18382 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-153447 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-153447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 19:53:13.303781   18382 ssh_runner.go:195] Run: crio config
	I1204 19:53:13.346601   18382 cni.go:84] Creating CNI manager for ""
	I1204 19:53:13.346625   18382 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 19:53:13.346634   18382 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 19:53:13.346653   18382 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.11 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-153447 NodeName:addons-153447 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 19:53:13.346764   18382 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-153447"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.11"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.11"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1204 19:53:13.346826   18382 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 19:53:13.356610   18382 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 19:53:13.356688   18382 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 19:53:13.365746   18382 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1204 19:53:13.381580   18382 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 19:53:13.396858   18382 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
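
The multi-document YAML shown earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is what was just copied to /var/tmp/minikube/kubeadm.yaml.new. A small sketch that splits such a file into its documents and checks the two kubelet settings that matter for CRI-O (cgroupDriver and containerRuntimeEndpoint); it uses gopkg.in/yaml.v3 as an assumed dependency and is not part of minikube itself.

	// kubeadmcheck.go: split a multi-document kubeadm config like the one above
	// and verify the KubeletConfiguration points at CRI-O with the cgroupfs
	// driver. Sketch only; gopkg.in/yaml.v3 is an assumed dependency.
	package main

	import (
		"fmt"
		"os"
		"strings"

		"gopkg.in/yaml.v3"
	)

	type kubeletDoc struct {
		Kind                     string `yaml:"kind"`
		CgroupDriver             string `yaml:"cgroupDriver"`
		ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
	}

	func main() {
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new") // path from the log
		if err != nil {
			panic(err)
		}
		for _, doc := range strings.Split(string(data), "\n---\n") {
			var k kubeletDoc
			if yaml.Unmarshal([]byte(doc), &k) != nil || k.Kind != "KubeletConfiguration" {
				continue
			}
			fmt.Printf("cgroupDriver=%s runtimeEndpoint=%s\n", k.CgroupDriver, k.ContainerRuntimeEndpoint)
			if k.CgroupDriver != "cgroupfs" || !strings.Contains(k.ContainerRuntimeEndpoint, "crio.sock") {
				fmt.Println("unexpected kubelet runtime settings")
			}
		}
	}
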
	I1204 19:53:13.412317   18382 ssh_runner.go:195] Run: grep 192.168.39.11	control-plane.minikube.internal$ /etc/hosts
	I1204 19:53:13.415962   18382 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 19:53:13.427359   18382 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 19:53:13.560420   18382 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 19:53:13.578047   18382 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447 for IP: 192.168.39.11
	I1204 19:53:13.578076   18382 certs.go:194] generating shared ca certs ...
	I1204 19:53:13.578095   18382 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 19:53:13.578248   18382 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 19:53:13.621164   18382 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt ...
	I1204 19:53:13.621189   18382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt: {Name:mk5e28301d7845db54aad68aa44fc989b4fc862b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 19:53:13.621368   18382 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key ...
	I1204 19:53:13.621381   18382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key: {Name:mk3eccff5973b34611a3e58cc387103e6760de77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 19:53:13.621488   18382 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 19:53:13.824572   18382 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt ...
	I1204 19:53:13.824600   18382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt: {Name:mka90f4f3fae60930ae311fa0d6db47c930a21b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 19:53:13.824793   18382 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key ...
	I1204 19:53:13.824808   18382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key: {Name:mkba8e9a6093318744dc7550f69f125ae2b58894 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 19:53:13.824902   18382 certs.go:256] generating profile certs ...
	I1204 19:53:13.824956   18382 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.key
	I1204 19:53:13.824977   18382 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.crt with IP's: []
	I1204 19:53:14.024648   18382 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.crt ...
	I1204 19:53:14.024687   18382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.crt: {Name:mk4491ff1e36bc2732bcb103a335d60aef8bd189 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 19:53:14.024858   18382 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.key ...
	I1204 19:53:14.024869   18382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.key: {Name:mk76ec22598afcca1648bb9b1e52f4356aae8867 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 19:53:14.024939   18382 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/apiserver.key.d7f96934
	I1204 19:53:14.024956   18382 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/apiserver.crt.d7f96934 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.11]
	I1204 19:53:14.114481   18382 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/apiserver.crt.d7f96934 ...
	I1204 19:53:14.114518   18382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/apiserver.crt.d7f96934: {Name:mke7313468ca05545af3e6cd0fb64128caa62c5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 19:53:14.114693   18382 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/apiserver.key.d7f96934 ...
	I1204 19:53:14.114707   18382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/apiserver.key.d7f96934: {Name:mk55ad4aa8e3b4d1099e160b1f9c20f2efccb6ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 19:53:14.114783   18382 certs.go:381] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/apiserver.crt.d7f96934 -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/apiserver.crt
	I1204 19:53:14.114862   18382 certs.go:385] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/apiserver.key.d7f96934 -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/apiserver.key
	I1204 19:53:14.114915   18382 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/proxy-client.key
	I1204 19:53:14.114935   18382 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/proxy-client.crt with IP's: []
	I1204 19:53:14.222659   18382 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/proxy-client.crt ...
	I1204 19:53:14.222693   18382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/proxy-client.crt: {Name:mk6758753d68639fe71b0500dd58f1c7b5845b3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 19:53:14.222864   18382 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/proxy-client.key ...
	I1204 19:53:14.222876   18382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/proxy-client.key: {Name:mkfdb9d0829aab5b78a2c9145de4a14ea590c43e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 19:53:14.223067   18382 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 19:53:14.223107   18382 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 19:53:14.223142   18382 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 19:53:14.223169   18382 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 19:53:14.223806   18382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 19:53:14.251293   18382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 19:53:14.285706   18382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 19:53:14.313559   18382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 19:53:14.336263   18382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1204 19:53:14.358582   18382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1204 19:53:14.380032   18382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 19:53:14.401302   18382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1204 19:53:14.422894   18382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 19:53:14.445237   18382 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 19:53:14.460499   18382 ssh_runner.go:195] Run: openssl version
	I1204 19:53:14.465971   18382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 19:53:14.475424   18382 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 19:53:14.479312   18382 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 19:53:14.479383   18382 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 19:53:14.484736   18382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
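
The two commands above install minikubeCA.pem under /usr/share/ca-certificates and create the /etc/ssl/certs/b5213941.0 symlink, where the link name is the OpenSSL subject hash of the certificate (openssl x509 -hash -noout). A sketch reproducing that step: compute the hash with the openssl CLI and link <hash>.0 to the installed PEM; it assumes the openssl binary is available and that the process has the privileges sudo provides in the log.

	// cahash.go: compute the OpenSSL subject hash of a CA certificate and create
	// the /etc/ssl/certs/<hash>.0 symlink, as the two commands above do.
	// Sketch only; assumes the openssl binary and root privileges.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		const pem = "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941" in the log above
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		if _, err := os.Lstat(link); err == nil {
			fmt.Println(link, "already exists")
			return
		}
		if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
			panic(err)
		}
		fmt.Println("linked", link)
	}
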
	I1204 19:53:14.494033   18382 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 19:53:14.497876   18382 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1204 19:53:14.497938   18382 kubeadm.go:392] StartCluster: {Name:addons-153447 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-153447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 19:53:14.498029   18382 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 19:53:14.498096   18382 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 19:53:14.531864   18382 cri.go:89] found id: ""
	I1204 19:53:14.531935   18382 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 19:53:14.542046   18382 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 19:53:14.551315   18382 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 19:53:14.559730   18382 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 19:53:14.559749   18382 kubeadm.go:157] found existing configuration files:
	
	I1204 19:53:14.559796   18382 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 19:53:14.568044   18382 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 19:53:14.568099   18382 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 19:53:14.576333   18382 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 19:53:14.584291   18382 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 19:53:14.584335   18382 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 19:53:14.592736   18382 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 19:53:14.600541   18382 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 19:53:14.600599   18382 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 19:53:14.608905   18382 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 19:53:14.616745   18382 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 19:53:14.616787   18382 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
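
The lines above are the stale-config check that runs before kubeadm init: for each of admin.conf, kubelet.conf, controller-manager.conf, and scheduler.conf, grep for the expected control-plane endpoint and remove the file if the endpoint is absent (here the files simply do not exist yet, so there is nothing to clean up). A compact sketch of that loop with simplified error handling; it is an illustration, not minikube's exact code.

	// staleconf.go: remove kubeconfig files under /etc/kubernetes that do not
	// reference the expected control-plane endpoint, mirroring the grep/rm
	// sequence in the log. Sketch with simplified error handling.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
		for _, name := range files {
			path := filepath.Join("/etc/kubernetes", name)
			data, err := os.ReadFile(path)
			if err != nil {
				// Missing file: nothing to clean up, same as the "No such file" cases above.
				continue
			}
			if !strings.Contains(string(data), endpoint) {
				fmt.Println("removing stale", path)
				_ = os.Remove(path)
			}
		}
	}
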
	I1204 19:53:14.624914   18382 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 19:53:14.774309   18382 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 19:53:24.573120   18382 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1204 19:53:24.573216   18382 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 19:53:24.573336   18382 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 19:53:24.573466   18382 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 19:53:24.573597   18382 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1204 19:53:24.573688   18382 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 19:53:24.575161   18382 out.go:235]   - Generating certificates and keys ...
	I1204 19:53:24.575271   18382 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 19:53:24.575334   18382 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 19:53:24.575432   18382 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1204 19:53:24.575489   18382 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1204 19:53:24.575540   18382 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1204 19:53:24.575583   18382 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1204 19:53:24.575651   18382 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1204 19:53:24.575784   18382 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-153447 localhost] and IPs [192.168.39.11 127.0.0.1 ::1]
	I1204 19:53:24.575863   18382 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1204 19:53:24.576033   18382 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-153447 localhost] and IPs [192.168.39.11 127.0.0.1 ::1]
	I1204 19:53:24.576130   18382 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1204 19:53:24.576214   18382 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1204 19:53:24.576278   18382 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1204 19:53:24.576357   18382 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 19:53:24.576410   18382 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 19:53:24.576463   18382 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1204 19:53:24.576523   18382 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 19:53:24.576583   18382 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 19:53:24.576641   18382 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 19:53:24.576720   18382 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 19:53:24.576781   18382 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 19:53:24.577986   18382 out.go:235]   - Booting up control plane ...
	I1204 19:53:24.578076   18382 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 19:53:24.578147   18382 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 19:53:24.578210   18382 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 19:53:24.578299   18382 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 19:53:24.578374   18382 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 19:53:24.578425   18382 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 19:53:24.578535   18382 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1204 19:53:24.578628   18382 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1204 19:53:24.578679   18382 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.003127844s
	I1204 19:53:24.578741   18382 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1204 19:53:24.578798   18382 kubeadm.go:310] [api-check] The API server is healthy after 5.001668295s
	I1204 19:53:24.578886   18382 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1204 19:53:24.579012   18382 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1204 19:53:24.579105   18382 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1204 19:53:24.579292   18382 kubeadm.go:310] [mark-control-plane] Marking the node addons-153447 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1204 19:53:24.579355   18382 kubeadm.go:310] [bootstrap-token] Using token: 4bg971.gwggowzkc8ok3y10
	I1204 19:53:24.581425   18382 out.go:235]   - Configuring RBAC rules ...
	I1204 19:53:24.581515   18382 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1204 19:53:24.581585   18382 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1204 19:53:24.581705   18382 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1204 19:53:24.581826   18382 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1204 19:53:24.581942   18382 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1204 19:53:24.582045   18382 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1204 19:53:24.582147   18382 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1204 19:53:24.582186   18382 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1204 19:53:24.582248   18382 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1204 19:53:24.582259   18382 kubeadm.go:310] 
	I1204 19:53:24.582347   18382 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1204 19:53:24.582355   18382 kubeadm.go:310] 
	I1204 19:53:24.582463   18382 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1204 19:53:24.582471   18382 kubeadm.go:310] 
	I1204 19:53:24.582507   18382 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1204 19:53:24.582590   18382 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1204 19:53:24.582663   18382 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1204 19:53:24.582672   18382 kubeadm.go:310] 
	I1204 19:53:24.582745   18382 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1204 19:53:24.582754   18382 kubeadm.go:310] 
	I1204 19:53:24.582826   18382 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1204 19:53:24.582836   18382 kubeadm.go:310] 
	I1204 19:53:24.582887   18382 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1204 19:53:24.582963   18382 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1204 19:53:24.583047   18382 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1204 19:53:24.583055   18382 kubeadm.go:310] 
	I1204 19:53:24.583141   18382 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1204 19:53:24.583215   18382 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1204 19:53:24.583222   18382 kubeadm.go:310] 
	I1204 19:53:24.583294   18382 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4bg971.gwggowzkc8ok3y10 \
	I1204 19:53:24.583424   18382 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 \
	I1204 19:53:24.583448   18382 kubeadm.go:310] 	--control-plane 
	I1204 19:53:24.583458   18382 kubeadm.go:310] 
	I1204 19:53:24.583533   18382 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1204 19:53:24.583539   18382 kubeadm.go:310] 
	I1204 19:53:24.583611   18382 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4bg971.gwggowzkc8ok3y10 \
	I1204 19:53:24.583708   18382 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 
	I1204 19:53:24.583718   18382 cni.go:84] Creating CNI manager for ""
	I1204 19:53:24.583724   18382 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 19:53:24.585128   18382 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 19:53:24.586377   18382 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 19:53:24.597643   18382 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1204 19:53:24.615131   18382 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 19:53:24.615224   18382 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 19:53:24.615266   18382 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-153447 minikube.k8s.io/updated_at=2024_12_04T19_53_24_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59 minikube.k8s.io/name=addons-153447 minikube.k8s.io/primary=true
	I1204 19:53:24.641057   18382 ops.go:34] apiserver oom_adj: -16
	I1204 19:53:24.769456   18382 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 19:53:25.269645   18382 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 19:53:25.770431   18382 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 19:53:26.269624   18382 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 19:53:26.770299   18382 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 19:53:27.269580   18382 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 19:53:27.769760   18382 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 19:53:28.270087   18382 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 19:53:28.349514   18382 kubeadm.go:1113] duration metric: took 3.73435911s to wait for elevateKubeSystemPrivileges
	I1204 19:53:28.349546   18382 kubeadm.go:394] duration metric: took 13.851614256s to StartCluster
	I1204 19:53:28.349562   18382 settings.go:142] acquiring lock: {Name:mk51df5708ef0b8fe125ead566b8d3e857234e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 19:53:28.349670   18382 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 19:53:28.349994   18382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/kubeconfig: {Name:mk338cb7deb77a607d0c199d94a556bdfd19bef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 19:53:28.350170   18382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1204 19:53:28.350188   18382 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 19:53:28.350234   18382 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1204 19:53:28.350355   18382 addons.go:69] Setting yakd=true in profile "addons-153447"
	I1204 19:53:28.350364   18382 addons.go:69] Setting ingress=true in profile "addons-153447"
	I1204 19:53:28.350377   18382 addons.go:234] Setting addon yakd=true in "addons-153447"
	I1204 19:53:28.350381   18382 addons.go:234] Setting addon ingress=true in "addons-153447"
	I1204 19:53:28.350389   18382 addons.go:69] Setting registry=true in profile "addons-153447"
	I1204 19:53:28.350408   18382 config.go:182] Loaded profile config "addons-153447": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 19:53:28.350415   18382 host.go:66] Checking if "addons-153447" exists ...
	I1204 19:53:28.350423   18382 addons.go:69] Setting storage-provisioner=true in profile "addons-153447"
	I1204 19:53:28.350437   18382 addons.go:69] Setting ingress-dns=true in profile "addons-153447"
	I1204 19:53:28.350441   18382 addons.go:234] Setting addon storage-provisioner=true in "addons-153447"
	I1204 19:53:28.350415   18382 host.go:66] Checking if "addons-153447" exists ...
	I1204 19:53:28.350449   18382 addons.go:234] Setting addon ingress-dns=true in "addons-153447"
	I1204 19:53:28.350461   18382 host.go:66] Checking if "addons-153447" exists ...
	I1204 19:53:28.350464   18382 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-153447"
	I1204 19:53:28.350463   18382 addons.go:69] Setting metrics-server=true in profile "addons-153447"
	I1204 19:53:28.350488   18382 host.go:66] Checking if "addons-153447" exists ...
	I1204 19:53:28.350496   18382 addons.go:234] Setting addon metrics-server=true in "addons-153447"
	I1204 19:53:28.350518   18382 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-153447"
	I1204 19:53:28.350528   18382 host.go:66] Checking if "addons-153447" exists ...
	I1204 19:53:28.350544   18382 host.go:66] Checking if "addons-153447" exists ...
	I1204 19:53:28.350558   18382 addons.go:69] Setting default-storageclass=true in profile "addons-153447"
	I1204 19:53:28.350571   18382 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-153447"
	I1204 19:53:28.350866   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.350867   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.350889   18382 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-153447"
	I1204 19:53:28.350890   18382 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-153447"
	I1204 19:53:28.350896   18382 addons.go:69] Setting inspektor-gadget=true in profile "addons-153447"
	I1204 19:53:28.350902   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.350902   18382 addons.go:69] Setting gcp-auth=true in profile "addons-153447"
	I1204 19:53:28.350907   18382 addons.go:69] Setting volcano=true in profile "addons-153447"
	I1204 19:53:28.350913   18382 addons.go:234] Setting addon inspektor-gadget=true in "addons-153447"
	I1204 19:53:28.350912   18382 addons.go:69] Setting cloud-spanner=true in profile "addons-153447"
	I1204 19:53:28.350919   18382 addons.go:234] Setting addon volcano=true in "addons-153447"
	I1204 19:53:28.350920   18382 mustload.go:65] Loading cluster: addons-153447
	I1204 19:53:28.350925   18382 addons.go:234] Setting addon cloud-spanner=true in "addons-153447"
	I1204 19:53:28.350934   18382 host.go:66] Checking if "addons-153447" exists ...
	I1204 19:53:28.350939   18382 host.go:66] Checking if "addons-153447" exists ...
	I1204 19:53:28.350940   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.350944   18382 host.go:66] Checking if "addons-153447" exists ...
	I1204 19:53:28.351141   18382 config.go:182] Loaded profile config "addons-153447": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 19:53:28.351278   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.351315   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.351355   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.351355   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.351408   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.351409   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.351507   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.351537   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.351571   18382 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-153447"
	I1204 19:53:28.351603   18382 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-153447"
	I1204 19:53:28.351636   18382 host.go:66] Checking if "addons-153447" exists ...
	I1204 19:53:28.350426   18382 addons.go:234] Setting addon registry=true in "addons-153447"
	I1204 19:53:28.351820   18382 host.go:66] Checking if "addons-153447" exists ...
	I1204 19:53:28.350902   18382 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-153447"
	I1204 19:53:28.352023   18382 host.go:66] Checking if "addons-153447" exists ...
	I1204 19:53:28.352053   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.352084   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.352184   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.352206   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.350900   18382 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-153447"
	I1204 19:53:28.352614   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.352619   18382 addons.go:69] Setting volumesnapshots=true in profile "addons-153447"
	I1204 19:53:28.352635   18382 addons.go:234] Setting addon volumesnapshots=true in "addons-153447"
	I1204 19:53:28.352643   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.352659   18382 host.go:66] Checking if "addons-153447" exists ...
	I1204 19:53:28.350919   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.352744   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.352783   18382 out.go:177] * Verifying Kubernetes components...
	I1204 19:53:28.352940   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.352969   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.353372   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.353401   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.353515   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.353584   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.354199   18382 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 19:53:28.371893   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39545
	I1204 19:53:28.372142   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44757
	I1204 19:53:28.372373   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39623
	I1204 19:53:28.372491   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36529
	I1204 19:53:28.372569   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.373502   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.373596   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.373629   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.373650   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.373662   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.373734   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33645
	I1204 19:53:28.374331   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.374349   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.374487   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.374500   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.374561   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.374632   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.374811   18382 main.go:141] libmachine: (addons-153447) Calling .GetState
	I1204 19:53:28.374880   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.374934   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37845
	I1204 19:53:28.375112   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.375124   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.375608   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.375642   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.375758   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.375768   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.375820   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.376366   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.376377   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.376849   18382 host.go:66] Checking if "addons-153447" exists ...
	I1204 19:53:28.376851   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.377065   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.377086   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.379832   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.379873   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.379920   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.379951   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.380401   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.380436   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.382419   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.382454   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.383144   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.383181   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.383874   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.383956   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39701
	I1204 19:53:28.384491   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.384527   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.379836   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.384748   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.384749   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.385784   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.385801   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.386145   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.386636   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.386667   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.401284   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39771
	I1204 19:53:28.401844   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.402320   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.402340   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.402728   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.402932   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:28.421803   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44835
	I1204 19:53:28.422017   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33023
	I1204 19:53:28.422736   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.423288   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.423308   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.423687   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37883
	I1204 19:53:28.423813   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.423911   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42525
	I1204 19:53:28.424058   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43575
	I1204 19:53:28.424258   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.424366   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.424940   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.424959   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.425027   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.425178   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.425195   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.425265   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36015
	I1204 19:53:28.425416   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40895
	I1204 19:53:28.425548   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.425626   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.425824   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.425954   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.425962   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.426273   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42161
	I1204 19:53:28.426440   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.426455   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.426463   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.426477   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.426522   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.426862   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.426902   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.427081   18382 main.go:141] libmachine: (addons-153447) Calling .GetState
	I1204 19:53:28.427126   18382 main.go:141] libmachine: (addons-153447) Calling .GetState
	I1204 19:53:28.427168   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.427198   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.427239   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.427585   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.427612   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.427832   18382 main.go:141] libmachine: (addons-153447) Calling .GetState
	I1204 19:53:28.427841   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.427883   18382 main.go:141] libmachine: (addons-153447) Calling .GetState
	I1204 19:53:28.427928   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38211
	I1204 19:53:28.428083   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.428112   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.429008   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.429083   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.429099   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.429443   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.429459   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.429999   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.430013   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.430583   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.430605   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.430621   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.430635   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.430722   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:28.431169   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39967
	I1204 19:53:28.431707   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:28.432165   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.432347   18382 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-153447"
	I1204 19:53:28.432383   18382 host.go:66] Checking if "addons-153447" exists ...
	I1204 19:53:28.432597   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.432621   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.433015   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:28.433166   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.433186   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.433528   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.433620   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39717
	I1204 19:53:28.433763   18382 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1204 19:53:28.434053   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.434086   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.434605   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.435116   18382 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1204 19:53:28.435293   18382 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1204 19:53:28.435305   18382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1204 19:53:28.435324   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:28.435388   18382 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1204 19:53:28.436158   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.436176   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.436674   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.436706   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.437207   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.437227   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.437427   18382 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1204 19:53:28.437442   18382 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1204 19:53:28.437467   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:28.438788   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.438821   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.438829   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.438938   18382 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1204 19:53:28.439321   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:28.439341   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.440624   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:28.440803   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:28.440911   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:28.441023   18382 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa Username:docker}
	I1204 19:53:28.441399   18382 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1204 19:53:28.441887   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.442606   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:28.442628   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.442846   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:28.443041   18382 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1204 19:53:28.443068   18382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1204 19:53:28.443088   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:28.443049   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:28.443725   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.443839   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:28.444003   18382 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa Username:docker}
	I1204 19:53:28.444271   18382 main.go:141] libmachine: (addons-153447) Calling .GetState
	I1204 19:53:28.446859   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.447505   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:28.447536   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.447693   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44343
	I1204 19:53:28.447870   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:28.448051   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:28.448211   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:28.448352   18382 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa Username:docker}
	I1204 19:53:28.449016   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.449808   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.449825   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.450226   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.450379   18382 main.go:141] libmachine: (addons-153447) Calling .GetState
	I1204 19:53:28.452213   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:28.452547   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:28.452898   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:28.452909   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:28.454163   18382 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1204 19:53:28.454617   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:28.454648   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:28.454655   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:28.454667   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:28.454674   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:28.454906   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:28.454922   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	W1204 19:53:28.454997   18382 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1204 19:53:28.455328   18382 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1204 19:53:28.455348   18382 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1204 19:53:28.455393   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:28.456159   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46187
	I1204 19:53:28.456640   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.457083   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.457100   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.457413   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.457903   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.457935   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.459315   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.459799   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:28.459820   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.459989   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:28.460201   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:28.460352   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:28.460491   18382 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa Username:docker}
	I1204 19:53:28.466520   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34167
	I1204 19:53:28.467003   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.467534   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.467550   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.467975   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.468137   18382 main.go:141] libmachine: (addons-153447) Calling .GetState
	I1204 19:53:28.469373   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39091
	I1204 19:53:28.469698   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.470723   18382 addons.go:234] Setting addon default-storageclass=true in "addons-153447"
	I1204 19:53:28.470759   18382 host.go:66] Checking if "addons-153447" exists ...
	I1204 19:53:28.471140   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.471189   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.471494   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46801
	I1204 19:53:28.471967   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.472438   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.472464   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.472623   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42053
	I1204 19:53:28.472890   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.472906   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.472980   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.473240   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.473306   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.473442   18382 main.go:141] libmachine: (addons-153447) Calling .GetState
	I1204 19:53:28.473523   18382 main.go:141] libmachine: (addons-153447) Calling .GetState
	I1204 19:53:28.474507   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.474527   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.474927   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.475129   18382 main.go:141] libmachine: (addons-153447) Calling .GetState
	I1204 19:53:28.475186   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:28.476013   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:28.477612   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:28.477805   18382 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1204 19:53:28.477993   18382 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1204 19:53:28.478941   18382 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1204 19:53:28.478960   18382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1204 19:53:28.478978   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:28.479058   18382 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1204 19:53:28.479155   18382 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1204 19:53:28.479166   18382 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1204 19:53:28.479183   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:28.480755   18382 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1204 19:53:28.480773   18382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1204 19:53:28.480790   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:28.482139   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45775
	I1204 19:53:28.482394   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36691
	I1204 19:53:28.482799   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.482841   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.483016   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.483245   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.483268   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.483425   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.483438   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.483505   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:28.483523   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.483555   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.483623   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.483743   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:28.483817   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.483864   18382 main.go:141] libmachine: (addons-153447) Calling .GetState
	I1204 19:53:28.483907   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:28.484030   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:28.484141   18382 main.go:141] libmachine: (addons-153447) Calling .GetState
	I1204 19:53:28.484225   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:28.484245   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.484271   18382 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa Username:docker}
	I1204 19:53:28.484535   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:28.484782   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:28.484928   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:28.485059   18382 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa Username:docker}
	I1204 19:53:28.485927   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:28.486309   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.486331   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37351
	I1204 19:53:28.486793   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:28.486829   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.487088   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:28.487249   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:28.487316   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.487521   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:28.487538   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:28.487709   18382 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1204 19:53:28.487874   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42811
	I1204 19:53:28.487699   18382 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa Username:docker}
	I1204 19:53:28.488508   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.488664   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.488682   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.488988   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.488993   18382 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1204 19:53:28.489093   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.489109   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.489253   18382 main.go:141] libmachine: (addons-153447) Calling .GetState
	I1204 19:53:28.489539   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.489749   18382 main.go:141] libmachine: (addons-153447) Calling .GetState
	I1204 19:53:28.490026   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46401
	I1204 19:53:28.490098   18382 out.go:177]   - Using image docker.io/registry:2.8.3
	I1204 19:53:28.490419   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40031
	I1204 19:53:28.490669   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.491054   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:28.491211   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.491223   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.491290   18382 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1204 19:53:28.491355   18382 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1204 19:53:28.491464   18382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1204 19:53:28.491488   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:28.491408   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.492468   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.492661   18382 main.go:141] libmachine: (addons-153447) Calling .GetState
	I1204 19:53:28.492702   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.493403   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.493530   18382 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1204 19:53:28.493733   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.493780   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:28.494296   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.494332   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.494515   18382 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1204 19:53:28.495199   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:28.495676   18382 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 19:53:28.495677   18382 out.go:177]   - Using image docker.io/busybox:stable
	I1204 19:53:28.496447   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.496489   18382 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1204 19:53:28.497186   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:28.497209   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.497221   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:28.497269   18382 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1204 19:53:28.497808   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:28.497876   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39403
	I1204 19:53:28.497362   18382 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 19:53:28.498014   18382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 19:53:28.498028   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:28.497470   18382 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1204 19:53:28.498065   18382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1204 19:53:28.498076   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:28.498120   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:28.498151   18382 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1204 19:53:28.498166   18382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1204 19:53:28.498185   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:28.499117   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.499232   18382 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa Username:docker}
	I1204 19:53:28.499727   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.499747   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.500192   18382 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1204 19:53:28.500531   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.501188   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:28.501236   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:28.502177   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.502668   18382 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1204 19:53:28.503022   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.503537   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.503582   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:28.503621   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.503762   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:28.503976   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:28.504063   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:28.504081   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.504150   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:28.504286   18382 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa Username:docker}
	I1204 19:53:28.504313   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:28.504469   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:28.504581   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:28.504611   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:28.504631   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.504797   18382 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa Username:docker}
	I1204 19:53:28.504841   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:28.505047   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:28.505129   18382 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1204 19:53:28.505230   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:28.505412   18382 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa Username:docker}
	I1204 19:53:28.507494   18382 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1204 19:53:28.508625   18382 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1204 19:53:28.508646   18382 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1204 19:53:28.508664   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:28.511086   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.511454   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:28.511472   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.511658   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:28.511811   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:28.512002   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:28.512087   18382 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa Username:docker}
	I1204 19:53:28.517756   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35395
	I1204 19:53:28.518229   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.518712   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.518736   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.519078   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.519272   18382 main.go:141] libmachine: (addons-153447) Calling .GetState
	I1204 19:53:28.521101   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:28.521344   18382 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 19:53:28.521362   18382 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 19:53:28.521378   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:28.523994   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38315
	I1204 19:53:28.524368   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:28.524468   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.524901   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:28.524925   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.525068   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:28.525082   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:28.525339   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:28.525398   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:28.525502   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:28.525706   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:28.525745   18382 main.go:141] libmachine: (addons-153447) Calling .GetState
	I1204 19:53:28.525905   18382 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa Username:docker}
	I1204 19:53:28.527025   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:28.528589   18382 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1204 19:53:28.529637   18382 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1204 19:53:28.529654   18382 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1204 19:53:28.529672   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:28.532265   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.532655   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:28.532675   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:28.532827   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:28.532960   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:28.533069   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:28.533183   18382 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa Username:docker}
	I1204 19:53:28.791136   18382 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 19:53:28.791356   18382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1204 19:53:28.836263   18382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1204 19:53:28.877959   18382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1204 19:53:28.889693   18382 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1204 19:53:28.889730   18382 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1204 19:53:28.898447   18382 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1204 19:53:28.898474   18382 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1204 19:53:28.916551   18382 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1204 19:53:28.916580   18382 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1204 19:53:28.934017   18382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1204 19:53:28.937227   18382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1204 19:53:28.938167   18382 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1204 19:53:28.938183   18382 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1204 19:53:28.969198   18382 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1204 19:53:28.969226   18382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14451 bytes)
	I1204 19:53:28.974617   18382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1204 19:53:28.992706   18382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1204 19:53:29.024371   18382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 19:53:29.057413   18382 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1204 19:53:29.057443   18382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1204 19:53:29.067432   18382 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1204 19:53:29.067457   18382 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1204 19:53:29.079146   18382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 19:53:29.090706   18382 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1204 19:53:29.090733   18382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1204 19:53:29.112878   18382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1204 19:53:29.127933   18382 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1204 19:53:29.127960   18382 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1204 19:53:29.152692   18382 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1204 19:53:29.152720   18382 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1204 19:53:29.299978   18382 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1204 19:53:29.300004   18382 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1204 19:53:29.310519   18382 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1204 19:53:29.310539   18382 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1204 19:53:29.330201   18382 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1204 19:53:29.330229   18382 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1204 19:53:29.384443   18382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1204 19:53:29.442873   18382 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1204 19:53:29.442902   18382 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1204 19:53:29.485146   18382 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 19:53:29.485175   18382 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1204 19:53:29.542323   18382 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1204 19:53:29.542348   18382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1204 19:53:29.561483   18382 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1204 19:53:29.561509   18382 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1204 19:53:29.629813   18382 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1204 19:53:29.629837   18382 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1204 19:53:29.689896   18382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 19:53:29.741139   18382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1204 19:53:29.800463   18382 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1204 19:53:29.800499   18382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1204 19:53:29.938201   18382 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1204 19:53:29.938230   18382 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1204 19:53:30.097846   18382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1204 19:53:30.279056   18382 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1204 19:53:30.279080   18382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1204 19:53:30.466114   18382 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1204 19:53:30.466143   18382 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1204 19:53:30.570343   18382 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1204 19:53:30.570368   18382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1204 19:53:30.907114   18382 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1204 19:53:30.907158   18382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1204 19:53:31.063392   18382 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.271962694s)
	I1204 19:53:31.063429   18382 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1204 19:53:31.063419   18382 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.272247284s)
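The sed pipeline that just completed rewrites the coredns ConfigMap in place before replacing it through kubectl. Reconstructed from its two sed expressions, the injected Corefile fragment should look roughly like this (a sketch only; the lines marked "..." stand for whatever plugins the cluster's default Corefile already contained):

	        log
	        errors
	        ...
	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf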
	I1204 19:53:31.064120   18382 node_ready.go:35] waiting up to 6m0s for node "addons-153447" to be "Ready" ...
	I1204 19:53:31.070802   18382 node_ready.go:49] node "addons-153447" has status "Ready":"True"
	I1204 19:53:31.070825   18382 node_ready.go:38] duration metric: took 6.686231ms for node "addons-153447" to be "Ready" ...
	I1204 19:53:31.070834   18382 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 19:53:31.083328   18382 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-7r8d9" in "kube-system" namespace to be "Ready" ...
	I1204 19:53:31.225222   18382 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1204 19:53:31.225253   18382 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1204 19:53:31.434331   18382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1204 19:53:31.598569   18382 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-153447" context rescaled to 1 replicas
	I1204 19:53:31.929357   18382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.093052264s)
	I1204 19:53:31.929395   18382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.051400582s)
	I1204 19:53:31.929419   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:31.929433   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:31.929433   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:31.929448   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:31.929829   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:31.929868   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:31.929831   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:31.929890   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:31.929894   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:31.929903   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:31.929915   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:31.929929   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:31.929916   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:31.929991   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:31.930199   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:31.930211   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:31.931473   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:31.931481   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:31.931487   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:32.464063   18382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.530007938s)
	I1204 19:53:32.464101   18382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.526841327s)
	I1204 19:53:32.464114   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:32.464126   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:32.464135   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:32.464169   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:32.464504   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:32.464512   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:32.464517   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:32.464523   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:32.464530   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:32.464531   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:32.464539   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:32.464547   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:32.464533   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:32.464616   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:32.464963   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:32.464998   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:32.465005   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:32.465040   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:32.465057   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:33.107164   18382 pod_ready.go:103] pod "amd-gpu-device-plugin-7r8d9" in "kube-system" namespace has status "Ready":"False"
	I1204 19:53:34.519468   18382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.544815218s)
	I1204 19:53:34.519522   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:34.519535   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:34.519783   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:34.519832   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:34.519845   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:34.519855   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:34.519864   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:34.520136   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:34.520193   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:34.632436   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:34.632464   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:34.632830   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:34.632851   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:34.632852   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:35.146477   18382 pod_ready.go:103] pod "amd-gpu-device-plugin-7r8d9" in "kube-system" namespace has status "Ready":"False"
	I1204 19:53:35.440551   18382 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1204 19:53:35.440590   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:35.443839   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:35.444235   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:35.444264   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:35.444492   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:35.444694   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:35.444842   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:35.444964   18382 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa Username:docker}
	I1204 19:53:36.029037   18382 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1204 19:53:36.231598   18382 addons.go:234] Setting addon gcp-auth=true in "addons-153447"
	I1204 19:53:36.231654   18382 host.go:66] Checking if "addons-153447" exists ...
	I1204 19:53:36.232071   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:36.232129   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:36.247806   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36607
	I1204 19:53:36.248173   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:36.248640   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:36.248657   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:36.248932   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:36.249416   18382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 19:53:36.249451   18382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 19:53:36.264438   18382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35069
	I1204 19:53:36.264862   18382 main.go:141] libmachine: () Calling .GetVersion
	I1204 19:53:36.265398   18382 main.go:141] libmachine: Using API Version  1
	I1204 19:53:36.265427   18382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 19:53:36.265755   18382 main.go:141] libmachine: () Calling .GetMachineName
	I1204 19:53:36.265938   18382 main.go:141] libmachine: (addons-153447) Calling .GetState
	I1204 19:53:36.267605   18382 main.go:141] libmachine: (addons-153447) Calling .DriverName
	I1204 19:53:36.267848   18382 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1204 19:53:36.267871   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHHostname
	I1204 19:53:36.271328   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:36.271858   18382 main.go:141] libmachine: (addons-153447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:ce:2c", ip: ""} in network mk-addons-153447: {Iface:virbr1 ExpiryTime:2024-12-04 20:53:01 +0000 UTC Type:0 Mac:52:54:00:39:ce:2c Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-153447 Clientid:01:52:54:00:39:ce:2c}
	I1204 19:53:36.271887   18382 main.go:141] libmachine: (addons-153447) DBG | domain addons-153447 has defined IP address 192.168.39.11 and MAC address 52:54:00:39:ce:2c in network mk-addons-153447
	I1204 19:53:36.272083   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHPort
	I1204 19:53:36.272317   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHKeyPath
	I1204 19:53:36.272497   18382 main.go:141] libmachine: (addons-153447) Calling .GetSSHUsername
	I1204 19:53:36.272645   18382 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/addons-153447/id_rsa Username:docker}
	I1204 19:53:37.182661   18382 pod_ready.go:103] pod "amd-gpu-device-plugin-7r8d9" in "kube-system" namespace has status "Ready":"False"
	I1204 19:53:37.274470   18382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.281725927s)
	I1204 19:53:37.274525   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:37.274537   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:37.274533   18382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.25012456s)
	I1204 19:53:37.274572   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:37.274590   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:37.274590   18382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.195410937s)
	I1204 19:53:37.274620   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:37.274637   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:37.274670   18382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (8.161761122s)
	I1204 19:53:37.274701   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:37.274715   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:37.274731   18382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.890244642s)
	I1204 19:53:37.274755   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:37.274771   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:37.274833   18382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.584905888s)
	I1204 19:53:37.274855   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:37.274865   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:37.274951   18382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.533776535s)
	I1204 19:53:37.274971   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:37.274981   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:37.275105   18382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.177211394s)
	W1204 19:53:37.275135   18382 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1204 19:53:37.275179   18382 retry.go:31] will retry after 231.369537ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
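The failure above is an ordering race rather than a bad manifest: the VolumeSnapshotClass object in csi-hostpath-snapshotclass.yaml is applied in the same kubectl invocation that creates its CRD, and the CRD is not yet established when the custom resource is validated, hence "no matches for kind \"VolumeSnapshotClass\"" and the "ensure CRDs are installed first" hint. minikube simply retries after 231ms (and, at 19:53:37.507 below, re-applies the same manifest set with --force). Outside that retry loop, one way to serialize the two steps by hand would be roughly the following, using standard kubectl against the manifests listed above (an illustrative sketch, not part of this run):

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml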
	I1204 19:53:37.275541   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:37.275572   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:37.275579   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:37.275587   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:37.275593   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:37.275877   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:37.275903   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:37.275910   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:37.275927   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:37.275933   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:37.276910   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:37.276926   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:37.276935   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:37.276939   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:37.276945   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:37.276949   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:37.276956   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:37.276963   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:37.276970   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:37.276914   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:37.276983   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:37.276991   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:37.276999   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:37.277125   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:37.277167   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:37.277186   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:37.277193   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:37.277201   18382 addons.go:475] Verifying addon metrics-server=true in "addons-153447"
	I1204 19:53:37.277245   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:37.277266   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:37.277272   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:37.277432   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:37.277443   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:37.277456   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:37.277485   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:37.277493   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:37.277628   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:37.277654   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:37.277661   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:37.277667   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:37.277673   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:37.277719   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:37.277744   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:37.277753   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:37.277761   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:37.277767   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:37.277831   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:37.277900   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:37.277950   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:37.277956   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:37.277964   18382 addons.go:475] Verifying addon registry=true in "addons-153447"
	I1204 19:53:37.277975   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:37.277986   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:37.277994   18382 addons.go:475] Verifying addon ingress=true in "addons-153447"
	I1204 19:53:37.278219   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:37.278242   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:37.278249   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:37.279985   18382 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-153447 service yakd-dashboard -n yakd-dashboard
	
	I1204 19:53:37.279998   18382 out.go:177] * Verifying registry addon...
	I1204 19:53:37.280942   18382 out.go:177] * Verifying ingress addon...
	I1204 19:53:37.282404   18382 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1204 19:53:37.283342   18382 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1204 19:53:37.309800   18382 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1204 19:53:37.309821   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:37.316903   18382 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1204 19:53:37.316930   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
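The two kapi.go waiters above poll pod lists through the Kubernetes API until the selected pods report Ready. The equivalent manual checks, using the label selectors, namespaces, and context recorded in this run, would be roughly (for reference only):

	kubectl --context addons-153447 -n kube-system get pods -l kubernetes.io/minikube-addons=registry
	kubectl --context addons-153447 -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx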
	I1204 19:53:37.340004   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:37.340026   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:37.340316   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:37.340333   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:37.340366   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:37.507358   18382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1204 19:53:37.794641   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:37.794641   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:38.291294   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:38.291550   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:38.580607   18382 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.312733098s)
	I1204 19:53:38.580624   18382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.146246374s)
	I1204 19:53:38.580671   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:38.580687   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:38.580966   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:38.581016   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:38.581024   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:38.581036   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:38.581055   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:38.581295   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:38.581315   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:38.581328   18382 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-153447"
	I1204 19:53:38.582309   18382 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1204 19:53:38.583207   18382 out.go:177] * Verifying csi-hostpath-driver addon...
	I1204 19:53:38.584718   18382 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1204 19:53:38.585668   18382 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1204 19:53:38.585825   18382 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1204 19:53:38.585842   18382 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1204 19:53:38.590761   18382 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1204 19:53:38.590783   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:38.692452   18382 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1204 19:53:38.692483   18382 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1204 19:53:38.713516   18382 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1204 19:53:38.713543   18382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1204 19:53:38.732720   18382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1204 19:53:39.078583   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:39.078997   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:39.101007   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:39.287624   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:39.287894   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:39.505167   18382 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.997747672s)
	I1204 19:53:39.505215   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:39.505247   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:39.505513   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:39.505565   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:39.505580   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:39.505596   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:39.505609   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:39.505813   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:39.505829   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:39.590974   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:39.609504   18382 pod_ready.go:103] pod "amd-gpu-device-plugin-7r8d9" in "kube-system" namespace has status "Ready":"False"
	I1204 19:53:39.712750   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:39.712773   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:39.713043   18382 main.go:141] libmachine: (addons-153447) DBG | Closing plugin on server side
	I1204 19:53:39.713088   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:39.713110   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:39.713136   18382 main.go:141] libmachine: Making call to close driver server
	I1204 19:53:39.713145   18382 main.go:141] libmachine: (addons-153447) Calling .Close
	I1204 19:53:39.713390   18382 main.go:141] libmachine: Successfully made call to close driver server
	I1204 19:53:39.713407   18382 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 19:53:39.714283   18382 addons.go:475] Verifying addon gcp-auth=true in "addons-153447"
	I1204 19:53:39.715755   18382 out.go:177] * Verifying gcp-auth addon...
	I1204 19:53:39.717529   18382 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1204 19:53:39.725153   18382 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1204 19:53:39.725183   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
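When a selector stays Pending for a while, listing the matching pods is usually more informative than waiting; a sketch for the gcp-auth label (context name assumed to match the profile):

  kubectl --context addons-153447 get pods -A \
    -l kubernetes.io/minikube-addons=gcp-auth -o wide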
	I1204 19:53:39.792045   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:39.792342   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:40.093092   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:40.221453   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:40.289727   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:40.290792   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:40.597906   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:40.726081   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:40.787615   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:40.787866   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:41.091556   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:41.221152   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:41.287411   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:41.287990   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:41.592130   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:41.721074   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:41.786508   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:41.787790   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:42.090803   18382 pod_ready.go:103] pod "amd-gpu-device-plugin-7r8d9" in "kube-system" namespace has status "Ready":"False"
	I1204 19:53:42.091086   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:42.221132   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:42.286683   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:42.288900   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:42.595761   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:42.720481   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:42.787536   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:42.787823   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:43.092982   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:43.221683   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:43.287085   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:43.287643   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:43.591288   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:43.721509   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:43.787451   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:43.788093   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:44.099453   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:44.100509   18382 pod_ready.go:103] pod "amd-gpu-device-plugin-7r8d9" in "kube-system" namespace has status "Ready":"False"
	I1204 19:53:44.221826   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:44.285844   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:44.287028   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:44.599351   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:44.721065   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:44.788303   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:44.791732   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:45.092444   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:45.221951   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:45.288145   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:45.289113   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:45.589541   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:45.721219   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:45.786380   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:45.787955   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:46.093922   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:46.221524   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:46.286258   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:46.289315   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:46.590042   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:46.590231   18382 pod_ready.go:103] pod "amd-gpu-device-plugin-7r8d9" in "kube-system" namespace has status "Ready":"False"
	I1204 19:53:46.721376   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:46.787242   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:46.788037   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:47.090896   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:47.222253   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:47.286820   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:47.287353   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:47.590604   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:47.720855   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:47.785745   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:47.787650   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:48.091893   18382 pod_ready.go:93] pod "amd-gpu-device-plugin-7r8d9" in "kube-system" namespace has status "Ready":"True"
	I1204 19:53:48.091918   18382 pod_ready.go:82] duration metric: took 17.008558173s for pod "amd-gpu-device-plugin-7r8d9" in "kube-system" namespace to be "Ready" ...
	I1204 19:53:48.091931   18382 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mmw65" in "kube-system" namespace to be "Ready" ...
	I1204 19:53:48.092225   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:48.094112   18382 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-mmw65" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-mmw65" not found
	I1204 19:53:48.094137   18382 pod_ready.go:82] duration metric: took 2.198228ms for pod "coredns-7c65d6cfc9-mmw65" in "kube-system" namespace to be "Ready" ...
	E1204 19:53:48.094153   18382 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-mmw65" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-mmw65" not found
	I1204 19:53:48.094162   18382 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mq69t" in "kube-system" namespace to be "Ready" ...
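pod_ready.go checks a specific pod's Ready condition rather than a label selector. A hand-run version of the same check for the coredns pod named above (the pod name is generated per run, so substitute your own):

  kubectl --context addons-153447 -n kube-system get pod coredns-7c65d6cfc9-mq69t \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'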
	I1204 19:53:48.221897   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:48.286495   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:48.288634   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:48.591319   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:48.720855   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:48.785860   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:48.788376   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:49.160677   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:49.220967   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:49.285987   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:49.288111   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:49.590795   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:49.721810   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:49.785939   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:49.787561   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:50.091620   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:50.102768   18382 pod_ready.go:103] pod "coredns-7c65d6cfc9-mq69t" in "kube-system" namespace has status "Ready":"False"
	I1204 19:53:50.222827   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:50.285638   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:50.287912   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:50.590452   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:50.721499   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:50.788050   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:50.788663   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:51.090149   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:51.221187   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:51.286451   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:51.288997   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:51.590818   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:51.720673   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:51.785842   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:51.788614   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:52.091049   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:52.221907   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:52.285848   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:52.288067   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:52.590755   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:52.600325   18382 pod_ready.go:103] pod "coredns-7c65d6cfc9-mq69t" in "kube-system" namespace has status "Ready":"False"
	I1204 19:53:52.721141   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:52.785986   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:52.790868   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:53.090712   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:53.220484   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:53.287364   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:53.289743   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:53.591164   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:53.721130   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:53.787245   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:53.789514   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:54.090554   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:54.223115   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:54.289581   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:54.289738   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:54.590997   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:54.601032   18382 pod_ready.go:103] pod "coredns-7c65d6cfc9-mq69t" in "kube-system" namespace has status "Ready":"False"
	I1204 19:53:54.721208   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:54.787299   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:54.787356   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:55.090725   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:55.223010   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:55.288330   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:55.290916   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:55.591687   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:55.721597   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:55.787949   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:55.788445   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:56.089821   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:56.220633   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:56.333348   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:56.334066   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:56.590455   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:56.601421   18382 pod_ready.go:103] pod "coredns-7c65d6cfc9-mq69t" in "kube-system" namespace has status "Ready":"False"
	I1204 19:53:56.721524   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:56.787127   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:56.787683   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:57.090476   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:57.222842   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:57.286327   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:57.287623   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:57.589974   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:57.721708   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:57.785931   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:57.787427   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:58.090291   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:58.223710   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:58.285821   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:58.290468   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:58.592236   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:58.601633   18382 pod_ready.go:103] pod "coredns-7c65d6cfc9-mq69t" in "kube-system" namespace has status "Ready":"False"
	I1204 19:53:58.721346   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:58.786500   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:58.788053   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:59.090968   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:59.220884   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:59.286057   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:59.287538   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:53:59.590960   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:53:59.721710   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:53:59.785844   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:53:59.788673   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:00.090723   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:00.221780   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:00.287893   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:00.289481   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:00.590392   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:00.720361   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:00.786496   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:00.786816   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:01.090941   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:01.100053   18382 pod_ready.go:103] pod "coredns-7c65d6cfc9-mq69t" in "kube-system" namespace has status "Ready":"False"
	I1204 19:54:01.221130   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:01.287209   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:01.288152   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:01.590697   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:01.720602   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:01.787849   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:01.788102   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:02.092155   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:02.222260   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:02.286374   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:02.287723   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:02.592905   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:02.721156   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:02.786574   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:02.787525   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:03.090973   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:03.221261   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:03.286734   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:03.286856   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:03.590756   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:03.599636   18382 pod_ready.go:103] pod "coredns-7c65d6cfc9-mq69t" in "kube-system" namespace has status "Ready":"False"
	I1204 19:54:03.721758   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:03.785672   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:03.788676   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:04.092159   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:04.221037   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:04.286639   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:04.288086   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:04.591562   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:04.721488   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:04.787493   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:04.787804   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:05.092011   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:05.222016   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:05.286019   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:05.287689   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:05.590663   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:05.600741   18382 pod_ready.go:103] pod "coredns-7c65d6cfc9-mq69t" in "kube-system" namespace has status "Ready":"False"
	I1204 19:54:05.722028   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:05.786504   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:05.788246   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:06.090143   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:06.220977   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:06.286730   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:06.287080   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:06.970225   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:06.970378   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:06.971228   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:06.971320   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:07.091025   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:07.221228   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:07.287258   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:07.288203   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:07.591145   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:07.721741   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:07.786445   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:07.787160   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:08.092369   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:08.100015   18382 pod_ready.go:103] pod "coredns-7c65d6cfc9-mq69t" in "kube-system" namespace has status "Ready":"False"
	I1204 19:54:08.221217   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:08.286361   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:08.287695   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:08.590923   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:08.721431   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:08.787421   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:08.787702   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:09.090704   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:09.220799   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:09.287084   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:09.288233   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:09.590314   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:09.721414   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:09.786454   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:09.788167   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:10.090501   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:10.100077   18382 pod_ready.go:103] pod "coredns-7c65d6cfc9-mq69t" in "kube-system" namespace has status "Ready":"False"
	I1204 19:54:10.220898   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:10.286143   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:10.287835   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:10.590567   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:10.723754   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:10.827207   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:10.827613   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:11.092813   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:11.221861   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:11.289113   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 19:54:11.289274   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:11.591107   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:11.722174   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:11.786439   18382 kapi.go:107] duration metric: took 34.50403163s to wait for kubernetes.io/minikube-addons=registry ...
	I1204 19:54:11.787475   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:12.090154   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:12.221174   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:12.287982   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:12.590833   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:12.599432   18382 pod_ready.go:93] pod "coredns-7c65d6cfc9-mq69t" in "kube-system" namespace has status "Ready":"True"
	I1204 19:54:12.599452   18382 pod_ready.go:82] duration metric: took 24.505278556s for pod "coredns-7c65d6cfc9-mq69t" in "kube-system" namespace to be "Ready" ...
	I1204 19:54:12.599465   18382 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-153447" in "kube-system" namespace to be "Ready" ...
	I1204 19:54:12.607336   18382 pod_ready.go:93] pod "etcd-addons-153447" in "kube-system" namespace has status "Ready":"True"
	I1204 19:54:12.607356   18382 pod_ready.go:82] duration metric: took 7.883774ms for pod "etcd-addons-153447" in "kube-system" namespace to be "Ready" ...
	I1204 19:54:12.607364   18382 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-153447" in "kube-system" namespace to be "Ready" ...
	I1204 19:54:12.612511   18382 pod_ready.go:93] pod "kube-apiserver-addons-153447" in "kube-system" namespace has status "Ready":"True"
	I1204 19:54:12.612539   18382 pod_ready.go:82] duration metric: took 5.167723ms for pod "kube-apiserver-addons-153447" in "kube-system" namespace to be "Ready" ...
	I1204 19:54:12.612552   18382 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-153447" in "kube-system" namespace to be "Ready" ...
	I1204 19:54:12.617426   18382 pod_ready.go:93] pod "kube-controller-manager-addons-153447" in "kube-system" namespace has status "Ready":"True"
	I1204 19:54:12.617451   18382 pod_ready.go:82] duration metric: took 4.890876ms for pod "kube-controller-manager-addons-153447" in "kube-system" namespace to be "Ready" ...
	I1204 19:54:12.617465   18382 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zf92b" in "kube-system" namespace to be "Ready" ...
	I1204 19:54:12.621958   18382 pod_ready.go:93] pod "kube-proxy-zf92b" in "kube-system" namespace has status "Ready":"True"
	I1204 19:54:12.621983   18382 pod_ready.go:82] duration metric: took 4.508931ms for pod "kube-proxy-zf92b" in "kube-system" namespace to be "Ready" ...
	I1204 19:54:12.621994   18382 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-153447" in "kube-system" namespace to be "Ready" ...
	I1204 19:54:12.720692   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:12.787986   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:12.997495   18382 pod_ready.go:93] pod "kube-scheduler-addons-153447" in "kube-system" namespace has status "Ready":"True"
	I1204 19:54:12.997521   18382 pod_ready.go:82] duration metric: took 375.518192ms for pod "kube-scheduler-addons-153447" in "kube-system" namespace to be "Ready" ...
	I1204 19:54:12.997534   18382 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-jgz4f" in "kube-system" namespace to be "Ready" ...
	I1204 19:54:13.090950   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:13.221532   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:13.322733   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:13.398260   18382 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-jgz4f" in "kube-system" namespace has status "Ready":"True"
	I1204 19:54:13.398287   18382 pod_ready.go:82] duration metric: took 400.746258ms for pod "nvidia-device-plugin-daemonset-jgz4f" in "kube-system" namespace to be "Ready" ...
	I1204 19:54:13.398295   18382 pod_ready.go:39] duration metric: took 42.327451842s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 19:54:13.398311   18382 api_server.go:52] waiting for apiserver process to appear ...
	I1204 19:54:13.398368   18382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 19:54:13.414944   18382 api_server.go:72] duration metric: took 45.064720695s to wait for apiserver process to appear ...
	I1204 19:54:13.414974   18382 api_server.go:88] waiting for apiserver healthz status ...
	I1204 19:54:13.414997   18382 api_server.go:253] Checking apiserver healthz at https://192.168.39.11:8443/healthz ...
	I1204 19:54:13.418912   18382 api_server.go:279] https://192.168.39.11:8443/healthz returned 200:
	ok
	I1204 19:54:13.419830   18382 api_server.go:141] control plane version: v1.31.2
	I1204 19:54:13.419853   18382 api_server.go:131] duration metric: took 4.870261ms to wait for apiserver health ...
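The apiserver gate is two checks: the pgrep Run line above confirms the kube-apiserver process exists inside the VM, and the /healthz probe confirms it answers. The probe can be reproduced without handling client certificates by going through kubectl, assuming the context matches the profile name:

  kubectl --context addons-153447 get --raw='/healthz'
  # prints "ok", matching the 200 response logged above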
	I1204 19:54:13.419861   18382 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 19:54:13.590267   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:13.604583   18382 system_pods.go:59] 18 kube-system pods found
	I1204 19:54:13.604627   18382 system_pods.go:61] "amd-gpu-device-plugin-7r8d9" [fe74ca1b-56c6-4e61-8ec2-380d38f63b82] Running
	I1204 19:54:13.604635   18382 system_pods.go:61] "coredns-7c65d6cfc9-mq69t" [cc725230-25f4-41a8-8292-110a5d46949e] Running
	I1204 19:54:13.604647   18382 system_pods.go:61] "csi-hostpath-attacher-0" [f75aea48-e36c-4a2a-bce3-111bfd1969e5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1204 19:54:13.604656   18382 system_pods.go:61] "csi-hostpath-resizer-0" [d9731c2f-4a6a-4288-a027-a36c4c6d07e2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1204 19:54:13.604669   18382 system_pods.go:61] "csi-hostpathplugin-n2cqq" [83b3d723-9b62-4978-be37-b785e988c34a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1204 19:54:13.604677   18382 system_pods.go:61] "etcd-addons-153447" [1369d72d-8e1c-479c-88e3-c58557965f52] Running
	I1204 19:54:13.604688   18382 system_pods.go:61] "kube-apiserver-addons-153447" [03222aee-bd05-4835-9156-7cc8960c9f9e] Running
	I1204 19:54:13.604694   18382 system_pods.go:61] "kube-controller-manager-addons-153447" [1b967963-da0a-4811-be88-38bbbff51d02] Running
	I1204 19:54:13.604700   18382 system_pods.go:61] "kube-ingress-dns-minikube" [6fdb24ab-7096-4556-a232-4d26f7552507] Running
	I1204 19:54:13.604706   18382 system_pods.go:61] "kube-proxy-zf92b" [c194c0d0-590f-41dc-9ca2-83e611918692] Running
	I1204 19:54:13.604713   18382 system_pods.go:61] "kube-scheduler-addons-153447" [3b9516d1-0cbf-4986-b0e2-5dc037fd3bd3] Running
	I1204 19:54:13.604725   18382 system_pods.go:61] "metrics-server-84c5f94fbc-gpnml" [3e5584b2-5c1f-4acb-93d3-614ecdb4794c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 19:54:13.604731   18382 system_pods.go:61] "nvidia-device-plugin-daemonset-jgz4f" [eae62c73-3a4f-42eb-baac-f18cf9160aea] Running
	I1204 19:54:13.604737   18382 system_pods.go:61] "registry-66c9cd494c-z8xlj" [cf078efa-efba-4b9e-a26c-686f93cabca9] Running
	I1204 19:54:13.604742   18382 system_pods.go:61] "registry-proxy-7c5pj" [fed31c9e-468a-4b59-b8f2-1efd30fa0e42] Running
	I1204 19:54:13.604752   18382 system_pods.go:61] "snapshot-controller-56fcc65765-hqgzv" [b68cbc05-af0c-4b3f-906c-57a6bfa5d95a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1204 19:54:13.604761   18382 system_pods.go:61] "snapshot-controller-56fcc65765-vdkgn" [d9ee7ee2-7750-4e15-9453-11fabb300d00] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1204 19:54:13.604768   18382 system_pods.go:61] "storage-provisioner" [fa71a22c-f55d-460d-b2cc-7aa569c3badc] Running
	I1204 19:54:13.604776   18382 system_pods.go:74] duration metric: took 184.908661ms to wait for pod list to return data ...
	I1204 19:54:13.604789   18382 default_sa.go:34] waiting for default service account to be created ...
	I1204 19:54:13.721528   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:13.787884   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:13.796889   18382 default_sa.go:45] found service account: "default"
	I1204 19:54:13.796910   18382 default_sa.go:55] duration metric: took 192.114242ms for default service account to be created ...
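The default_sa step only has to observe that the "default" ServiceAccount exists in the default namespace; the same observation by hand:

  kubectl --context addons-153447 -n default get serviceaccount default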
	I1204 19:54:13.796921   18382 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 19:54:14.003711   18382 system_pods.go:86] 18 kube-system pods found
	I1204 19:54:14.003737   18382 system_pods.go:89] "amd-gpu-device-plugin-7r8d9" [fe74ca1b-56c6-4e61-8ec2-380d38f63b82] Running
	I1204 19:54:14.003744   18382 system_pods.go:89] "coredns-7c65d6cfc9-mq69t" [cc725230-25f4-41a8-8292-110a5d46949e] Running
	I1204 19:54:14.003750   18382 system_pods.go:89] "csi-hostpath-attacher-0" [f75aea48-e36c-4a2a-bce3-111bfd1969e5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1204 19:54:14.003759   18382 system_pods.go:89] "csi-hostpath-resizer-0" [d9731c2f-4a6a-4288-a027-a36c4c6d07e2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1204 19:54:14.003766   18382 system_pods.go:89] "csi-hostpathplugin-n2cqq" [83b3d723-9b62-4978-be37-b785e988c34a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1204 19:54:14.003770   18382 system_pods.go:89] "etcd-addons-153447" [1369d72d-8e1c-479c-88e3-c58557965f52] Running
	I1204 19:54:14.003774   18382 system_pods.go:89] "kube-apiserver-addons-153447" [03222aee-bd05-4835-9156-7cc8960c9f9e] Running
	I1204 19:54:14.003778   18382 system_pods.go:89] "kube-controller-manager-addons-153447" [1b967963-da0a-4811-be88-38bbbff51d02] Running
	I1204 19:54:14.003782   18382 system_pods.go:89] "kube-ingress-dns-minikube" [6fdb24ab-7096-4556-a232-4d26f7552507] Running
	I1204 19:54:14.003785   18382 system_pods.go:89] "kube-proxy-zf92b" [c194c0d0-590f-41dc-9ca2-83e611918692] Running
	I1204 19:54:14.003791   18382 system_pods.go:89] "kube-scheduler-addons-153447" [3b9516d1-0cbf-4986-b0e2-5dc037fd3bd3] Running
	I1204 19:54:14.003797   18382 system_pods.go:89] "metrics-server-84c5f94fbc-gpnml" [3e5584b2-5c1f-4acb-93d3-614ecdb4794c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 19:54:14.003805   18382 system_pods.go:89] "nvidia-device-plugin-daemonset-jgz4f" [eae62c73-3a4f-42eb-baac-f18cf9160aea] Running
	I1204 19:54:14.003809   18382 system_pods.go:89] "registry-66c9cd494c-z8xlj" [cf078efa-efba-4b9e-a26c-686f93cabca9] Running
	I1204 19:54:14.003812   18382 system_pods.go:89] "registry-proxy-7c5pj" [fed31c9e-468a-4b59-b8f2-1efd30fa0e42] Running
	I1204 19:54:14.003819   18382 system_pods.go:89] "snapshot-controller-56fcc65765-hqgzv" [b68cbc05-af0c-4b3f-906c-57a6bfa5d95a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1204 19:54:14.003827   18382 system_pods.go:89] "snapshot-controller-56fcc65765-vdkgn" [d9ee7ee2-7750-4e15-9453-11fabb300d00] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1204 19:54:14.003834   18382 system_pods.go:89] "storage-provisioner" [fa71a22c-f55d-460d-b2cc-7aa569c3badc] Running
	I1204 19:54:14.003841   18382 system_pods.go:126] duration metric: took 206.914922ms to wait for k8s-apps to be running ...
	I1204 19:54:14.003848   18382 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 19:54:14.003884   18382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 19:54:14.022199   18382 system_svc.go:56] duration metric: took 18.341786ms WaitForService to wait for kubelet
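WaitForService is a plain systemd query executed over SSH inside the node. Assuming the minikube binary and the addons-153447 profile are available locally, roughly the same probe is:

  minikube -p addons-153447 ssh -- sudo systemctl is-active kubelet
  # prints "active" with exit code 0 while kubelet is running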
	I1204 19:54:14.022228   18382 kubeadm.go:582] duration metric: took 45.672008618s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 19:54:14.022251   18382 node_conditions.go:102] verifying NodePressure condition ...
	I1204 19:54:14.090694   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:14.198199   18382 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 19:54:14.198238   18382 node_conditions.go:123] node cpu capacity is 2
	I1204 19:54:14.198255   18382 node_conditions.go:105] duration metric: took 175.998137ms to run NodePressure ...
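The NodePressure step reads capacity and conditions from the node object; the ephemeral-storage and cpu figures above come straight from its status. A quick way to see the same data (node name assumed to equal the profile name, as suggested by the control-plane pod names above):

  kubectl --context addons-153447 get node addons-153447 \
    -o jsonpath='{.status.capacity}{"\n"}'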
	I1204 19:54:14.198271   18382 start.go:241] waiting for startup goroutines ...
	I1204 19:54:14.220588   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:14.287177   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:14.590647   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:14.720613   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:14.789593   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:15.092345   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:15.221835   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:15.287287   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:15.591243   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:15.723305   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:15.788538   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:16.091230   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:16.221036   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:16.289108   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:16.590617   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:16.720820   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:16.787472   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:17.092155   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:17.221205   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:17.287524   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:17.883687   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:17.884015   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:17.884126   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:18.090709   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:18.220786   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:18.287148   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:18.589936   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:18.721519   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:18.788626   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:19.089905   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:19.221394   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:19.287939   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:19.591521   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:19.721398   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:19.788262   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:20.090970   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:20.221222   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:20.290582   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:20.735591   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:20.736049   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:20.788025   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:21.091226   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:21.221530   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:21.288372   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:21.592293   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:21.721663   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:21.787926   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:22.091721   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:22.220975   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:22.288290   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:22.590866   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:22.721283   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:22.787943   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:23.090919   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:23.221227   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:23.287682   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:23.590113   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:23.721640   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:23.787018   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:24.091082   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:24.221493   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:24.290503   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:24.594763   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:24.720821   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:25.019311   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:25.119869   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:25.221136   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:25.287622   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:25.590446   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:25.721853   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:25.787486   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:26.090758   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:26.221861   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:26.287684   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:26.590291   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:26.721748   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:26.787550   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:27.090421   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:27.221268   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:27.287936   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:27.590342   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:27.728534   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:27.789092   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:28.090731   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:28.221357   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:28.287631   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:28.591237   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:28.721136   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:28.788255   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:29.094816   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:29.221723   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:29.324337   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:29.591209   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:29.731131   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:29.791102   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:30.090959   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:30.221022   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:30.287712   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:30.589650   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:30.721231   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:31.166633   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:31.174196   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:31.222657   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:31.287639   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:31.590185   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:31.721715   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:31.787945   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:32.090596   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:32.221007   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:32.297668   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:32.590251   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:32.722285   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:32.823389   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:33.090520   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:33.221464   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:33.288488   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:33.590506   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:33.721362   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:33.788081   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:34.090772   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:34.221190   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:34.324201   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:34.591100   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:34.720756   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:34.787553   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:35.090130   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:35.221979   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:35.323481   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:35.592963   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:35.722628   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:35.787040   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:36.090803   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:36.221681   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:36.287211   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:36.815923   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:36.816167   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:36.816679   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:37.091275   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:37.221213   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:37.324534   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:37.591362   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:37.720595   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:37.788149   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:38.090170   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:38.221938   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:38.288471   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:38.591305   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:38.721753   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:38.787820   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:39.091497   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:39.221652   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:39.287640   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:39.592701   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:39.721926   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:39.788428   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:40.090820   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:40.224114   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:40.327922   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:40.590780   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:40.720751   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:40.787444   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:41.090433   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:41.221742   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:41.290084   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:41.591323   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 19:54:41.720682   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:41.788828   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:42.090543   18382 kapi.go:107] duration metric: took 1m3.504873508s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1204 19:54:42.221084   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:42.287969   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:42.721269   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:42.823472   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:43.221085   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:43.324103   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:43.722482   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:43.788780   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:44.222393   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:44.288866   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:44.721251   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:44.788474   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:45.221991   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:45.727116   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:45.850385   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:45.850861   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:46.220497   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:46.287521   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:46.721018   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:46.787618   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:47.220859   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:47.287828   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:47.733565   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:47.788501   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:48.220897   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:48.287242   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:48.722270   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:48.788731   18382 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 19:54:49.222517   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:49.323716   18382 kapi.go:107] duration metric: took 1m12.04037086s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1204 19:54:49.721726   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:50.222093   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:50.721787   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:51.221050   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:51.720902   18382 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 19:54:52.220885   18382 kapi.go:107] duration metric: took 1m12.503353084s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1204 19:54:52.222316   18382 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-153447 cluster.
	I1204 19:54:52.223620   18382 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1204 19:54:52.225054   18382 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1204 19:54:52.226490   18382 out.go:177] * Enabled addons: amd-gpu-device-plugin, cloud-spanner, nvidia-device-plugin, ingress-dns, storage-provisioner-rancher, metrics-server, inspektor-gadget, storage-provisioner, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1204 19:54:52.227818   18382 addons.go:510] duration metric: took 1m23.877592502s for enable addons: enabled=[amd-gpu-device-plugin cloud-spanner nvidia-device-plugin ingress-dns storage-provisioner-rancher metrics-server inspektor-gadget storage-provisioner yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1204 19:54:52.227857   18382 start.go:246] waiting for cluster config update ...
	I1204 19:54:52.227875   18382 start.go:255] writing updated cluster config ...
	I1204 19:54:52.228096   18382 ssh_runner.go:195] Run: rm -f paused
	I1204 19:54:52.279506   18382 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1204 19:54:52.281406   18382 out.go:177] * Done! kubectl is now configured to use "addons-153447" cluster and "default" namespace by default
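
	The gcp-auth messages above mention opting a pod out of credential mounting by adding a label with the `gcp-auth-skip-secret` key. A minimal sketch of such a pod manifest is shown below; only the label key comes from the log output, while the pod name, image, and the "true" label value are assumptions for illustration, not part of this test run.

	# Hypothetical pod manifest illustrating the opt-out label mentioned in the
	# gcp-auth output above. Only the label key is taken from the log; the pod
	# name, container image, and label value are placeholders/assumptions.
	apiVersion: v1
	kind: Pod
	metadata:
	  name: example-no-gcp-auth        # placeholder name
	  labels:
	    gcp-auth-skip-secret: "true"   # assumed value; key quoted from the log above
	spec:
	  containers:
	  - name: app
	    image: nginx                   # placeholder image

	With a label like this present, the gcp-auth webhook would skip mounting the GCP credentials into that pod, per the message above; pods created before the addon was enabled would need to be recreated (or the addon re-enabled with --refresh) to pick up credentials.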
	
	
	==> CRI-O <==
	Dec 04 20:01:21 addons-153447 crio[661]: time="2024-12-04 20:01:21.587718211Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733342481587676349,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604507,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fcd0720d-4c87-4690-aae1-ab38fd67b03b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 20:01:21 addons-153447 crio[661]: time="2024-12-04 20:01:21.588694859Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1b022afd-5654-455a-8e8f-6d3d9c664318 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:01:21 addons-153447 crio[661]: time="2024-12-04 20:01:21.588772484Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1b022afd-5654-455a-8e8f-6d3d9c664318 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:01:21 addons-153447 crio[661]: time="2024-12-04 20:01:21.589178409Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fbabb07ce1d0b7e18b6244da0a8e0013ef96b8d5aed01abcd8ca87e07d3698b3,PodSandboxId:30bdd32f37de5455cb0a1c18404884c985bce088540d20a9664d56520e3c1f60,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1733342331125858103,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-lvjlq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d0d0101-4798-4b24-83fe-19eb2feea818,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99a998b51456d7df3e26e7d14db3da9e2b5347d3c277997fcfa50ef64778435b,PodSandboxId:98d88213686a06ac2448aeba289440f615d9bc446c550084aab46c7a56baa1ac,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733342190242336220,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 13f1323b-f52e-49ea-b039-e6312cb1e3a8,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98a6653243f4cdaeec8f1241f1fe88776c16b26e8ac7966525a5e802eb791e5e,PodSandboxId:494d9fa01ba2798ca935194f6e3350b33fef5fcce2098096b9a761f8aa986dd1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733342095451808238,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d848bd2e-9b52-4694-a
820-ad62fd4c3be4,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c07dfedc1ada68a174170207a2615403bbe1965fd4ea4a26877db9309fbad342,PodSandboxId:55adec096f97817324630133ab6b3de471d86a056a46d86b0cd183e7248fd92f,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733342054722938154,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-gpnml,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 3e5584b2-5c1f-4acb-93d3-614ecdb4794c,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0581d9d4c53cf5e2358826e93c8571bd24369b7acefc41a1dbc647af6449002c,PodSandboxId:6a51138af00ec9bf05a041cb4e45f73e3608d408e9bb25194998411429ad39b7,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1733342026991084708,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-7r8d9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe74ca1b-56c6-4e61-8ec2-380d38f63b82,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:391f85cfe864475a0c2d61369bbd41290d22a8e75ac3329164db31b55ba11afa,PodSandboxId:2d3fcf3574200b72fe25f470035d3a5e11ce339cdd3ed8f358080bf9e4e3f674,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733342015186733392,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa71a22c-f55d-460d-b2cc-7aa569c3badc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbdb1435874e254c8867445c82450eee98f51936cf174414d00ed45f26e69f33,PodSandboxId:78dcffa5b119fa11384ea06fe64458a2dd5b36bb628879adb691da688c1307b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733342012665469779,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mq69t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc725230-25f4-41a8-8292-110a5d46949e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fb3e6fbdfcdc78ea16acfec66223a75386aa0b25179acdc030c8328f1ec1897,PodSandboxId:66fd6ef2f8a7f7f8a4086dafd7f8c722b57e4b398e65aa6f6bc3a541c30bf483,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733342010451233797,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zf92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c194c0d0-590f-41dc-9ca2-83e611918692,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0968f9cd07b6cb4badcf25f2c35a7416e0d6ae2e8db19d3ccebfbb94d22edb41,PodSandboxId:a82f6435852f1e06d5eeda9a3f88697f48a23b96ae5a93954debe25bcda06fd7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26
915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733341998587434186,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-153447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8091e734d2e8f14a2640ddb84c9423f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03d71ccb7c47ff2ecafe51cb6ed263134035f4bb51a45a8f25b95e6a5d8bb317,PodSandboxId:52a32abb1b2e493157f0f6e2b1384c70732c58f062ff051cf344dfc6e6b17344,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f854
5ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733341998561399000,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-153447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dac8d3f79be5cd845ac5403f9a23e9c,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58bddd2348673b2e584431b273d589726112993e08800620927b688f0af8bdb3,PodSandboxId:51e0888cf7dc0904f031aaa3cc6adb99f0614c63c90e72d086bb3f2dd20ffc55,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530
fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733341998586009484,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-153447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83421709f81f03fcb932ca7e3849403e,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed3ce6a0cfea9424c54654b2d37df2f31443f49f9dffc15f58c253ddfe1c1ed9,PodSandboxId:b645c9e24232854ffab6ceefc3c13bf33a41e59df36ccf0880e8a69b9e4d091d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a491
73,State:CONTAINER_RUNNING,CreatedAt:1733341998558003629,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-153447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e82a001b021d128f7692896be19270c,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1b022afd-5654-455a-8e8f-6d3d9c664318 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:01:21 addons-153447 crio[661]: time="2024-12-04 20:01:21.624073186Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=18b03ac4-2437-40eb-a867-c4529b9a1f6b name=/runtime.v1.RuntimeService/Version
	Dec 04 20:01:21 addons-153447 crio[661]: time="2024-12-04 20:01:21.624142497Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=18b03ac4-2437-40eb-a867-c4529b9a1f6b name=/runtime.v1.RuntimeService/Version
	Dec 04 20:01:21 addons-153447 crio[661]: time="2024-12-04 20:01:21.625257837Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7c74d997-c1ba-4c15-a1f0-e1900716044a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 20:01:21 addons-153447 crio[661]: time="2024-12-04 20:01:21.626430293Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733342481626410205,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604507,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7c74d997-c1ba-4c15-a1f0-e1900716044a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 20:01:21 addons-153447 crio[661]: time="2024-12-04 20:01:21.626892521Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=88d3da10-54e7-47f4-a2c3-8b0f4fce695e name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:01:21 addons-153447 crio[661]: time="2024-12-04 20:01:21.626942340Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=88d3da10-54e7-47f4-a2c3-8b0f4fce695e name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:01:21 addons-153447 crio[661]: time="2024-12-04 20:01:21.627222646Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fbabb07ce1d0b7e18b6244da0a8e0013ef96b8d5aed01abcd8ca87e07d3698b3,PodSandboxId:30bdd32f37de5455cb0a1c18404884c985bce088540d20a9664d56520e3c1f60,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1733342331125858103,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-lvjlq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d0d0101-4798-4b24-83fe-19eb2feea818,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99a998b51456d7df3e26e7d14db3da9e2b5347d3c277997fcfa50ef64778435b,PodSandboxId:98d88213686a06ac2448aeba289440f615d9bc446c550084aab46c7a56baa1ac,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733342190242336220,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 13f1323b-f52e-49ea-b039-e6312cb1e3a8,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98a6653243f4cdaeec8f1241f1fe88776c16b26e8ac7966525a5e802eb791e5e,PodSandboxId:494d9fa01ba2798ca935194f6e3350b33fef5fcce2098096b9a761f8aa986dd1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733342095451808238,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d848bd2e-9b52-4694-a
820-ad62fd4c3be4,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c07dfedc1ada68a174170207a2615403bbe1965fd4ea4a26877db9309fbad342,PodSandboxId:55adec096f97817324630133ab6b3de471d86a056a46d86b0cd183e7248fd92f,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733342054722938154,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-gpnml,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 3e5584b2-5c1f-4acb-93d3-614ecdb4794c,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0581d9d4c53cf5e2358826e93c8571bd24369b7acefc41a1dbc647af6449002c,PodSandboxId:6a51138af00ec9bf05a041cb4e45f73e3608d408e9bb25194998411429ad39b7,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1733342026991084708,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-7r8d9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe74ca1b-56c6-4e61-8ec2-380d38f63b82,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:391f85cfe864475a0c2d61369bbd41290d22a8e75ac3329164db31b55ba11afa,PodSandboxId:2d3fcf3574200b72fe25f470035d3a5e11ce339cdd3ed8f358080bf9e4e3f674,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733342015186733392,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa71a22c-f55d-460d-b2cc-7aa569c3badc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbdb1435874e254c8867445c82450eee98f51936cf174414d00ed45f26e69f33,PodSandboxId:78dcffa5b119fa11384ea06fe64458a2dd5b36bb628879adb691da688c1307b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733342012665469779,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mq69t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc725230-25f4-41a8-8292-110a5d46949e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fb3e6fbdfcdc78ea16acfec66223a75386aa0b25179acdc030c8328f1ec1897,PodSandboxId:66fd6ef2f8a7f7f8a4086dafd7f8c722b57e4b398e65aa6f6bc3a541c30bf483,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733342010451233797,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zf92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c194c0d0-590f-41dc-9ca2-83e611918692,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0968f9cd07b6cb4badcf25f2c35a7416e0d6ae2e8db19d3ccebfbb94d22edb41,PodSandboxId:a82f6435852f1e06d5eeda9a3f88697f48a23b96ae5a93954debe25bcda06fd7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26
915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733341998587434186,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-153447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8091e734d2e8f14a2640ddb84c9423f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03d71ccb7c47ff2ecafe51cb6ed263134035f4bb51a45a8f25b95e6a5d8bb317,PodSandboxId:52a32abb1b2e493157f0f6e2b1384c70732c58f062ff051cf344dfc6e6b17344,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f854
5ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733341998561399000,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-153447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dac8d3f79be5cd845ac5403f9a23e9c,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58bddd2348673b2e584431b273d589726112993e08800620927b688f0af8bdb3,PodSandboxId:51e0888cf7dc0904f031aaa3cc6adb99f0614c63c90e72d086bb3f2dd20ffc55,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530
fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733341998586009484,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-153447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83421709f81f03fcb932ca7e3849403e,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed3ce6a0cfea9424c54654b2d37df2f31443f49f9dffc15f58c253ddfe1c1ed9,PodSandboxId:b645c9e24232854ffab6ceefc3c13bf33a41e59df36ccf0880e8a69b9e4d091d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a491
73,State:CONTAINER_RUNNING,CreatedAt:1733341998558003629,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-153447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e82a001b021d128f7692896be19270c,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=88d3da10-54e7-47f4-a2c3-8b0f4fce695e name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:01:21 addons-153447 crio[661]: time="2024-12-04 20:01:21.661216640Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=453bebaf-789a-417c-9065-e279ff79b4ce name=/runtime.v1.RuntimeService/Version
	Dec 04 20:01:21 addons-153447 crio[661]: time="2024-12-04 20:01:21.661331441Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=453bebaf-789a-417c-9065-e279ff79b4ce name=/runtime.v1.RuntimeService/Version
	Dec 04 20:01:21 addons-153447 crio[661]: time="2024-12-04 20:01:21.662437781Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=218245f2-029c-40c4-8854-065c89288d89 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 20:01:21 addons-153447 crio[661]: time="2024-12-04 20:01:21.663779829Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733342481663758132,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604507,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=218245f2-029c-40c4-8854-065c89288d89 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 20:01:21 addons-153447 crio[661]: time="2024-12-04 20:01:21.664228836Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=83a56b35-6e94-4658-aa5e-b0601d24ee44 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:01:21 addons-153447 crio[661]: time="2024-12-04 20:01:21.664332639Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=83a56b35-6e94-4658-aa5e-b0601d24ee44 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:01:21 addons-153447 crio[661]: time="2024-12-04 20:01:21.664774603Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fbabb07ce1d0b7e18b6244da0a8e0013ef96b8d5aed01abcd8ca87e07d3698b3,PodSandboxId:30bdd32f37de5455cb0a1c18404884c985bce088540d20a9664d56520e3c1f60,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1733342331125858103,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-lvjlq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d0d0101-4798-4b24-83fe-19eb2feea818,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99a998b51456d7df3e26e7d14db3da9e2b5347d3c277997fcfa50ef64778435b,PodSandboxId:98d88213686a06ac2448aeba289440f615d9bc446c550084aab46c7a56baa1ac,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733342190242336220,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 13f1323b-f52e-49ea-b039-e6312cb1e3a8,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98a6653243f4cdaeec8f1241f1fe88776c16b26e8ac7966525a5e802eb791e5e,PodSandboxId:494d9fa01ba2798ca935194f6e3350b33fef5fcce2098096b9a761f8aa986dd1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733342095451808238,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d848bd2e-9b52-4694-a
820-ad62fd4c3be4,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c07dfedc1ada68a174170207a2615403bbe1965fd4ea4a26877db9309fbad342,PodSandboxId:55adec096f97817324630133ab6b3de471d86a056a46d86b0cd183e7248fd92f,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733342054722938154,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-gpnml,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 3e5584b2-5c1f-4acb-93d3-614ecdb4794c,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0581d9d4c53cf5e2358826e93c8571bd24369b7acefc41a1dbc647af6449002c,PodSandboxId:6a51138af00ec9bf05a041cb4e45f73e3608d408e9bb25194998411429ad39b7,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1733342026991084708,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-7r8d9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe74ca1b-56c6-4e61-8ec2-380d38f63b82,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:391f85cfe864475a0c2d61369bbd41290d22a8e75ac3329164db31b55ba11afa,PodSandboxId:2d3fcf3574200b72fe25f470035d3a5e11ce339cdd3ed8f358080bf9e4e3f674,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733342015186733392,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa71a22c-f55d-460d-b2cc-7aa569c3badc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbdb1435874e254c8867445c82450eee98f51936cf174414d00ed45f26e69f33,PodSandboxId:78dcffa5b119fa11384ea06fe64458a2dd5b36bb628879adb691da688c1307b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733342012665469779,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mq69t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc725230-25f4-41a8-8292-110a5d46949e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fb3e6fbdfcdc78ea16acfec66223a75386aa0b25179acdc030c8328f1ec1897,PodSandboxId:66fd6ef2f8a7f7f8a4086dafd7f8c722b57e4b398e65aa6f6bc3a541c30bf483,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733342010451233797,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zf92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c194c0d0-590f-41dc-9ca2-83e611918692,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0968f9cd07b6cb4badcf25f2c35a7416e0d6ae2e8db19d3ccebfbb94d22edb41,PodSandboxId:a82f6435852f1e06d5eeda9a3f88697f48a23b96ae5a93954debe25bcda06fd7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26
915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733341998587434186,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-153447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8091e734d2e8f14a2640ddb84c9423f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03d71ccb7c47ff2ecafe51cb6ed263134035f4bb51a45a8f25b95e6a5d8bb317,PodSandboxId:52a32abb1b2e493157f0f6e2b1384c70732c58f062ff051cf344dfc6e6b17344,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f854
5ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733341998561399000,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-153447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dac8d3f79be5cd845ac5403f9a23e9c,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58bddd2348673b2e584431b273d589726112993e08800620927b688f0af8bdb3,PodSandboxId:51e0888cf7dc0904f031aaa3cc6adb99f0614c63c90e72d086bb3f2dd20ffc55,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530
fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733341998586009484,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-153447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83421709f81f03fcb932ca7e3849403e,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed3ce6a0cfea9424c54654b2d37df2f31443f49f9dffc15f58c253ddfe1c1ed9,PodSandboxId:b645c9e24232854ffab6ceefc3c13bf33a41e59df36ccf0880e8a69b9e4d091d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a491
73,State:CONTAINER_RUNNING,CreatedAt:1733341998558003629,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-153447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e82a001b021d128f7692896be19270c,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=83a56b35-6e94-4658-aa5e-b0601d24ee44 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:01:21 addons-153447 crio[661]: time="2024-12-04 20:01:21.696036156Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bed60f96-1007-4e48-840d-477f654ee120 name=/runtime.v1.RuntimeService/Version
	Dec 04 20:01:21 addons-153447 crio[661]: time="2024-12-04 20:01:21.696128527Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bed60f96-1007-4e48-840d-477f654ee120 name=/runtime.v1.RuntimeService/Version
	Dec 04 20:01:21 addons-153447 crio[661]: time="2024-12-04 20:01:21.697178915Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f102ac29-97ef-4868-a6ba-734b74745923 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 20:01:21 addons-153447 crio[661]: time="2024-12-04 20:01:21.698658398Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733342481698627494,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604507,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f102ac29-97ef-4868-a6ba-734b74745923 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 20:01:21 addons-153447 crio[661]: time="2024-12-04 20:01:21.699320693Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aad4a4ed-9107-4fa2-bfb4-128e1efe58d1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:01:21 addons-153447 crio[661]: time="2024-12-04 20:01:21.699378586Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aad4a4ed-9107-4fa2-bfb4-128e1efe58d1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:01:21 addons-153447 crio[661]: time="2024-12-04 20:01:21.699688223Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fbabb07ce1d0b7e18b6244da0a8e0013ef96b8d5aed01abcd8ca87e07d3698b3,PodSandboxId:30bdd32f37de5455cb0a1c18404884c985bce088540d20a9664d56520e3c1f60,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1733342331125858103,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-lvjlq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d0d0101-4798-4b24-83fe-19eb2feea818,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99a998b51456d7df3e26e7d14db3da9e2b5347d3c277997fcfa50ef64778435b,PodSandboxId:98d88213686a06ac2448aeba289440f615d9bc446c550084aab46c7a56baa1ac,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733342190242336220,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 13f1323b-f52e-49ea-b039-e6312cb1e3a8,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98a6653243f4cdaeec8f1241f1fe88776c16b26e8ac7966525a5e802eb791e5e,PodSandboxId:494d9fa01ba2798ca935194f6e3350b33fef5fcce2098096b9a761f8aa986dd1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733342095451808238,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d848bd2e-9b52-4694-a
820-ad62fd4c3be4,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c07dfedc1ada68a174170207a2615403bbe1965fd4ea4a26877db9309fbad342,PodSandboxId:55adec096f97817324630133ab6b3de471d86a056a46d86b0cd183e7248fd92f,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733342054722938154,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-gpnml,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 3e5584b2-5c1f-4acb-93d3-614ecdb4794c,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0581d9d4c53cf5e2358826e93c8571bd24369b7acefc41a1dbc647af6449002c,PodSandboxId:6a51138af00ec9bf05a041cb4e45f73e3608d408e9bb25194998411429ad39b7,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1733342026991084708,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-7r8d9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe74ca1b-56c6-4e61-8ec2-380d38f63b82,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:391f85cfe864475a0c2d61369bbd41290d22a8e75ac3329164db31b55ba11afa,PodSandboxId:2d3fcf3574200b72fe25f470035d3a5e11ce339cdd3ed8f358080bf9e4e3f674,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733342015186733392,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa71a22c-f55d-460d-b2cc-7aa569c3badc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbdb1435874e254c8867445c82450eee98f51936cf174414d00ed45f26e69f33,PodSandboxId:78dcffa5b119fa11384ea06fe64458a2dd5b36bb628879adb691da688c1307b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733342012665469779,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mq69t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc725230-25f4-41a8-8292-110a5d46949e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fb3e6fbdfcdc78ea16acfec66223a75386aa0b25179acdc030c8328f1ec1897,PodSandboxId:66fd6ef2f8a7f7f8a4086dafd7f8c722b57e4b398e65aa6f6bc3a541c30bf483,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733342010451233797,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zf92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c194c0d0-590f-41dc-9ca2-83e611918692,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0968f9cd07b6cb4badcf25f2c35a7416e0d6ae2e8db19d3ccebfbb94d22edb41,PodSandboxId:a82f6435852f1e06d5eeda9a3f88697f48a23b96ae5a93954debe25bcda06fd7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26
915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733341998587434186,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-153447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8091e734d2e8f14a2640ddb84c9423f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03d71ccb7c47ff2ecafe51cb6ed263134035f4bb51a45a8f25b95e6a5d8bb317,PodSandboxId:52a32abb1b2e493157f0f6e2b1384c70732c58f062ff051cf344dfc6e6b17344,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f854
5ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733341998561399000,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-153447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dac8d3f79be5cd845ac5403f9a23e9c,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58bddd2348673b2e584431b273d589726112993e08800620927b688f0af8bdb3,PodSandboxId:51e0888cf7dc0904f031aaa3cc6adb99f0614c63c90e72d086bb3f2dd20ffc55,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530
fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733341998586009484,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-153447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83421709f81f03fcb932ca7e3849403e,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed3ce6a0cfea9424c54654b2d37df2f31443f49f9dffc15f58c253ddfe1c1ed9,PodSandboxId:b645c9e24232854ffab6ceefc3c13bf33a41e59df36ccf0880e8a69b9e4d091d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a491
73,State:CONTAINER_RUNNING,CreatedAt:1733341998558003629,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-153447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e82a001b021d128f7692896be19270c,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aad4a4ed-9107-4fa2-bfb4-128e1efe58d1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fbabb07ce1d0b       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   30bdd32f37de5       hello-world-app-55bf9c44b4-lvjlq
	99a998b51456d       docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4                         4 minutes ago       Running             nginx                     0                   98d88213686a0       nginx
	98a6653243f4c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   494d9fa01ba27       busybox
	c07dfedc1ada6       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   7 minutes ago       Running             metrics-server            0                   55adec096f978       metrics-server-84c5f94fbc-gpnml
	0581d9d4c53cf       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                7 minutes ago       Running             amd-gpu-device-plugin     0                   6a51138af00ec       amd-gpu-device-plugin-7r8d9
	391f85cfe8644       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   2d3fcf3574200       storage-provisioner
	fbdb1435874e2       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        7 minutes ago       Running             coredns                   0                   78dcffa5b119f       coredns-7c65d6cfc9-mq69t
	9fb3e6fbdfcdc       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                        7 minutes ago       Running             kube-proxy                0                   66fd6ef2f8a7f       kube-proxy-zf92b
	0968f9cd07b6c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        8 minutes ago       Running             etcd                      0                   a82f6435852f1       etcd-addons-153447
	58bddd2348673       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                        8 minutes ago       Running             kube-scheduler            0                   51e0888cf7dc0       kube-scheduler-addons-153447
	03d71ccb7c47f       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                        8 minutes ago       Running             kube-controller-manager   0                   52a32abb1b2e4       kube-controller-manager-addons-153447
	ed3ce6a0cfea9       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                        8 minutes ago       Running             kube-apiserver            0                   b645c9e242328       kube-apiserver-addons-153447
	
	
	==> coredns [fbdb1435874e254c8867445c82450eee98f51936cf174414d00ed45f26e69f33] <==
	[INFO] 10.244.0.22:33220 - 27365 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000102968s
	[INFO] 10.244.0.22:57104 - 64960 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000252838s
	[INFO] 10.244.0.22:33220 - 11872 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000124388s
	[INFO] 10.244.0.22:57104 - 43234 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000203342s
	[INFO] 10.244.0.22:33220 - 28506 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000096022s
	[INFO] 10.244.0.22:57104 - 53301 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000214576s
	[INFO] 10.244.0.22:33220 - 51641 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000147234s
	[INFO] 10.244.0.22:57104 - 46640 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000218535s
	[INFO] 10.244.0.22:57104 - 43593 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000099578s
	[INFO] 10.244.0.22:33220 - 58451 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000566023s
	[INFO] 10.244.0.22:57104 - 13293 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000079027s
	[INFO] 10.244.0.22:36874 - 52160 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000238375s
	[INFO] 10.244.0.22:54659 - 8822 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000219917s
	[INFO] 10.244.0.22:54659 - 9527 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00010181s
	[INFO] 10.244.0.22:36874 - 46258 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000246426s
	[INFO] 10.244.0.22:36874 - 38638 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000082373s
	[INFO] 10.244.0.22:54659 - 2003 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000167658s
	[INFO] 10.244.0.22:54659 - 48845 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000143589s
	[INFO] 10.244.0.22:36874 - 37951 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000049156s
	[INFO] 10.244.0.22:36874 - 60283 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000041381s
	[INFO] 10.244.0.22:54659 - 18049 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000212774s
	[INFO] 10.244.0.22:54659 - 18248 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000034065s
	[INFO] 10.244.0.22:36874 - 54926 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00018802s
	[INFO] 10.244.0.22:36874 - 60235 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000110881s
	[INFO] 10.244.0.22:54659 - 56185 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000284134s
	
	
	==> describe nodes <==
	Name:               addons-153447
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-153447
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59
	                    minikube.k8s.io/name=addons-153447
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_04T19_53_24_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-153447
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 19:53:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-153447
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 20:01:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 19:59:01 +0000   Wed, 04 Dec 2024 19:53:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 19:59:01 +0000   Wed, 04 Dec 2024 19:53:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 19:59:01 +0000   Wed, 04 Dec 2024 19:53:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 19:59:01 +0000   Wed, 04 Dec 2024 19:53:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.11
	  Hostname:    addons-153447
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 96e0a87a584543abac0cd1c84dc4aae2
	  System UUID:                96e0a87a-5845-43ab-ac0c-d1c84dc4aae2
	  Boot ID:                    f46eeac8-29cc-4f15-9c3f-b9f5c9897c18
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m29s
	  default                     hello-world-app-55bf9c44b4-lvjlq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 amd-gpu-device-plugin-7r8d9              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m51s
	  kube-system                 coredns-7c65d6cfc9-mq69t                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m52s
	  kube-system                 etcd-addons-153447                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m58s
	  kube-system                 kube-apiserver-addons-153447             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m58s
	  kube-system                 kube-controller-manager-addons-153447    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m58s
	  kube-system                 kube-proxy-zf92b                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m53s
	  kube-system                 kube-scheduler-addons-153447             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m58s
	  kube-system                 metrics-server-84c5f94fbc-gpnml          100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         7m48s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 7m49s                kube-proxy       
	  Normal  NodeHasSufficientMemory  8m4s (x8 over 8m4s)  kubelet          Node addons-153447 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m4s (x8 over 8m4s)  kubelet          Node addons-153447 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m4s (x7 over 8m4s)  kubelet          Node addons-153447 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m58s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m58s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m58s                kubelet          Node addons-153447 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m58s                kubelet          Node addons-153447 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m58s                kubelet          Node addons-153447 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m57s                kubelet          Node addons-153447 status is now: NodeReady
	  Normal  RegisteredNode           7m53s                node-controller  Node addons-153447 event: Registered Node addons-153447 in Controller
	
	
	==> dmesg <==
	[  +4.801267] systemd-fstab-generator[1324]: Ignoring "noauto" option for root device
	[  +1.185893] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.005668] kauditd_printk_skb: 125 callbacks suppressed
	[  +5.014799] kauditd_printk_skb: 113 callbacks suppressed
	[  +5.230337] kauditd_printk_skb: 77 callbacks suppressed
	[Dec 4 19:54] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.039477] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.511486] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.086826] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.229832] kauditd_printk_skb: 48 callbacks suppressed
	[  +5.758415] kauditd_printk_skb: 47 callbacks suppressed
	[  +7.098105] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.147270] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.296493] kauditd_printk_skb: 13 callbacks suppressed
	[Dec 4 19:55] kauditd_printk_skb: 6 callbacks suppressed
	[ +13.786172] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.905242] kauditd_printk_skb: 43 callbacks suppressed
	[  +5.667032] kauditd_printk_skb: 40 callbacks suppressed
	[  +7.404741] kauditd_printk_skb: 18 callbacks suppressed
	[Dec 4 19:56] kauditd_printk_skb: 19 callbacks suppressed
	[  +6.763557] kauditd_printk_skb: 7 callbacks suppressed
	[  +7.881895] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.238790] kauditd_printk_skb: 51 callbacks suppressed
	[  +6.382791] kauditd_printk_skb: 17 callbacks suppressed
	[Dec 4 19:58] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [0968f9cd07b6cb4badcf25f2c35a7416e0d6ae2e8db19d3ccebfbb94d22edb41] <==
	{"level":"warn","ts":"2024-12-04T19:55:58.624220Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"440.175697ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2024-12-04T19:55:58.624235Z","caller":"traceutil/trace.go:171","msg":"trace[378325215] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1545; }","duration":"440.189256ms","start":"2024-12-04T19:55:58.184041Z","end":"2024-12-04T19:55:58.624230Z","steps":["trace[378325215] 'agreement among raft nodes before linearized reading'  (duration: 440.122944ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T19:55:58.624248Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-04T19:55:58.184002Z","time spent":"440.242257ms","remote":"127.0.0.1:57812","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":1,"response size":521,"request content":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" "}
	{"level":"warn","ts":"2024-12-04T19:55:58.624497Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"358.381134ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-04T19:55:58.624518Z","caller":"traceutil/trace.go:171","msg":"trace[893166183] range","detail":"{range_begin:/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account; range_end:; response_count:0; response_revision:1545; }","duration":"358.403732ms","start":"2024-12-04T19:55:58.266107Z","end":"2024-12-04T19:55:58.624511Z","steps":["trace[893166183] 'agreement among raft nodes before linearized reading'  (duration: 358.369968ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T19:55:58.624545Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-04T19:55:58.266069Z","time spent":"358.471066ms","remote":"127.0.0.1:57768","response type":"/etcdserverpb.KV/Range","request count":0,"request size":85,"response count":0,"response size":28,"request content":"key:\"/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account\" "}
	{"level":"warn","ts":"2024-12-04T19:55:58.624648Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"386.523255ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-04T19:55:58.624663Z","caller":"traceutil/trace.go:171","msg":"trace[345604867] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1545; }","duration":"386.538798ms","start":"2024-12-04T19:55:58.238120Z","end":"2024-12-04T19:55:58.624658Z","steps":["trace[345604867] 'agreement among raft nodes before linearized reading'  (duration: 386.514273ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T19:55:58.624676Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-04T19:55:58.238074Z","time spent":"386.600035ms","remote":"127.0.0.1:57584","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-12-04T19:55:58.624788Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"416.943353ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:553"}
	{"level":"info","ts":"2024-12-04T19:55:58.624802Z","caller":"traceutil/trace.go:171","msg":"trace[21098876] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1545; }","duration":"416.961539ms","start":"2024-12-04T19:55:58.207836Z","end":"2024-12-04T19:55:58.624797Z","steps":["trace[21098876] 'agreement among raft nodes before linearized reading'  (duration: 416.894676ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T19:55:58.624814Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-04T19:55:58.207787Z","time spent":"417.02409ms","remote":"127.0.0.1:57812","response type":"/etcdserverpb.KV/Range","request count":0,"request size":81,"response count":1,"response size":576,"request content":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" "}
	{"level":"info","ts":"2024-12-04T19:56:23.104048Z","caller":"traceutil/trace.go:171","msg":"trace[1313973066] linearizableReadLoop","detail":"{readStateIndex:1851; appliedIndex:1850; }","duration":"102.314883ms","start":"2024-12-04T19:56:23.001718Z","end":"2024-12-04T19:56:23.104033Z","steps":["trace[1313973066] 'read index received'  (duration: 102.194335ms)","trace[1313973066] 'applied index is now lower than readState.Index'  (duration: 119.944µs)"],"step_count":2}
	{"level":"warn","ts":"2024-12-04T19:56:23.104557Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.822597ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/addons-153447\" ","response":"range_response_count:1 size:895"}
	{"level":"info","ts":"2024-12-04T19:56:23.104588Z","caller":"traceutil/trace.go:171","msg":"trace[547629556] range","detail":"{range_begin:/registry/csinodes/addons-153447; range_end:; response_count:1; response_revision:1786; }","duration":"102.864494ms","start":"2024-12-04T19:56:23.001715Z","end":"2024-12-04T19:56:23.104579Z","steps":["trace[547629556] 'agreement among raft nodes before linearized reading'  (duration: 102.735206ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-04T19:56:23.313638Z","caller":"traceutil/trace.go:171","msg":"trace[636958666] transaction","detail":"{read_only:false; response_revision:1787; number_of_response:1; }","duration":"207.432244ms","start":"2024-12-04T19:56:23.106187Z","end":"2024-12-04T19:56:23.313619Z","steps":["trace[636958666] 'process raft request'  (duration: 201.205601ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-04T19:56:23.323930Z","caller":"traceutil/trace.go:171","msg":"trace[430165809] transaction","detail":"{read_only:false; response_revision:1788; number_of_response:1; }","duration":"215.568698ms","start":"2024-12-04T19:56:23.108353Z","end":"2024-12-04T19:56:23.323921Z","steps":["trace[430165809] 'process raft request'  (duration: 205.21506ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T19:56:23.327045Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"212.858924ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-04T19:56:23.327089Z","caller":"traceutil/trace.go:171","msg":"trace[1611073847] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1788; }","duration":"212.90997ms","start":"2024-12-04T19:56:23.114169Z","end":"2024-12-04T19:56:23.327079Z","steps":["trace[1611073847] 'agreement among raft nodes before linearized reading'  (duration: 212.802582ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-04T19:56:23.326919Z","caller":"traceutil/trace.go:171","msg":"trace[1636379888] linearizableReadLoop","detail":"{readStateIndex:1853; appliedIndex:1851; }","duration":"209.661667ms","start":"2024-12-04T19:56:23.114175Z","end":"2024-12-04T19:56:23.323837Z","steps":["trace[1636379888] 'read index received'  (duration: 193.225848ms)","trace[1636379888] 'applied index is now lower than readState.Index'  (duration: 16.434772ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-04T19:56:23.328493Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"202.923746ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/csi-hostpath-attacher-0\" ","response":"range_response_count:1 size:4153"}
	{"level":"info","ts":"2024-12-04T19:56:23.328528Z","caller":"traceutil/trace.go:171","msg":"trace[684763124] range","detail":"{range_begin:/registry/pods/kube-system/csi-hostpath-attacher-0; range_end:; response_count:1; response_revision:1788; }","duration":"202.97352ms","start":"2024-12-04T19:56:23.125542Z","end":"2024-12-04T19:56:23.328516Z","steps":["trace[684763124] 'agreement among raft nodes before linearized reading'  (duration: 202.843619ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T19:56:23.328739Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.816203ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/external-health-monitor-controller-runner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-04T19:56:23.328757Z","caller":"traceutil/trace.go:171","msg":"trace[1614675955] range","detail":"{range_begin:/registry/clusterroles/external-health-monitor-controller-runner; range_end:; response_count:0; response_revision:1788; }","duration":"149.837741ms","start":"2024-12-04T19:56:23.178913Z","end":"2024-12-04T19:56:23.328751Z","steps":["trace[1614675955] 'agreement among raft nodes before linearized reading'  (duration: 149.805155ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-04T19:57:04.123669Z","caller":"traceutil/trace.go:171","msg":"trace[629270860] transaction","detail":"{read_only:false; response_revision:1872; number_of_response:1; }","duration":"130.985911ms","start":"2024-12-04T19:57:03.992648Z","end":"2024-12-04T19:57:04.123634Z","steps":["trace[629270860] 'process raft request'  (duration: 130.56555ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:01:22 up 8 min,  0 users,  load average: 0.15, 0.58, 0.44
	Linux addons-153447 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ed3ce6a0cfea9424c54654b2d37df2f31443f49f9dffc15f58c253ddfe1c1ed9] <==
	E1204 19:55:43.247635       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1204 19:55:43.256706       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1204 19:55:43.264429       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1204 19:55:57.061883       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.98.13.6"}
	E1204 19:55:58.626936       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1204 19:56:06.898018       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1204 19:56:20.960351       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1204 19:56:20.960435       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1204 19:56:20.981151       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1204 19:56:20.981267       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1204 19:56:21.009622       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1204 19:56:21.009724       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1204 19:56:21.048318       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1204 19:56:21.048767       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1204 19:56:21.118524       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1204 19:56:21.118619       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1204 19:56:22.049481       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1204 19:56:22.122720       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1204 19:56:22.130521       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I1204 19:56:22.651035       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1204 19:56:23.697731       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1204 19:56:26.037839       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1204 19:56:26.208364       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.14.133"}
	I1204 19:58:48.684446       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.155.118"}
	E1204 19:58:51.959989       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	
	
	==> kube-controller-manager [03d71ccb7c47ff2ecafe51cb6ed263134035f4bb51a45a8f25b95e6a5d8bb317] <==
	E1204 19:59:21.470168       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1204 19:59:24.577664       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 19:59:24.577718       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1204 19:59:33.885517       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 19:59:33.885552       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1204 19:59:53.121753       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 19:59:53.121932       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1204 20:00:03.708048       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 20:00:03.708160       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1204 20:00:24.548256       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 20:00:24.548502       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1204 20:00:25.712466       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 20:00:25.712546       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1204 20:00:25.839466       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 20:00:25.839579       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1204 20:00:42.869101       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 20:00:42.869201       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1204 20:00:57.473795       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 20:00:57.473994       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1204 20:00:58.125178       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 20:00:58.125243       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1204 20:01:07.868193       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 20:01:07.868429       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1204 20:01:20.705605       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 20:01:20.705872       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [9fb3e6fbdfcdc78ea16acfec66223a75386aa0b25179acdc030c8328f1ec1897] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1204 19:53:32.015228       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1204 19:53:32.046237       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.11"]
	E1204 19:53:32.046346       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1204 19:53:32.383994       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1204 19:53:32.384077       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1204 19:53:32.384108       1 server_linux.go:169] "Using iptables Proxier"
	I1204 19:53:32.424526       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1204 19:53:32.424833       1 server.go:483] "Version info" version="v1.31.2"
	I1204 19:53:32.424859       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 19:53:32.467227       1 config.go:199] "Starting service config controller"
	I1204 19:53:32.467259       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1204 19:53:32.467342       1 config.go:105] "Starting endpoint slice config controller"
	I1204 19:53:32.467348       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1204 19:53:32.467693       1 config.go:328] "Starting node config controller"
	I1204 19:53:32.467704       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1204 19:53:32.569386       1 shared_informer.go:320] Caches are synced for node config
	I1204 19:53:32.569453       1 shared_informer.go:320] Caches are synced for service config
	I1204 19:53:32.569499       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [58bddd2348673b2e584431b273d589726112993e08800620927b688f0af8bdb3] <==
	W1204 19:53:21.338704       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1204 19:53:21.338729       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 19:53:21.338788       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1204 19:53:21.338814       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1204 19:53:21.338864       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1204 19:53:21.338889       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 19:53:22.149716       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1204 19:53:22.149769       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 19:53:22.161177       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1204 19:53:22.161360       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 19:53:22.251420       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1204 19:53:22.251469       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1204 19:53:22.298007       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1204 19:53:22.298062       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 19:53:22.339430       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1204 19:53:22.339483       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 19:53:22.415479       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1204 19:53:22.415531       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 19:53:22.489679       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1204 19:53:22.489723       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 19:53:22.558149       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1204 19:53:22.558249       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 19:53:22.616848       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1204 19:53:22.616950       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1204 19:53:24.730858       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 04 20:00:04 addons-153447 kubelet[1201]: E1204 20:00:04.151952    1201 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733342404148546032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604507,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:00:04 addons-153447 kubelet[1201]: E1204 20:00:04.152218    1201 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733342404148546032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604507,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:00:10 addons-153447 kubelet[1201]: I1204 20:00:10.873713    1201 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-7c65d6cfc9-mq69t" secret="" err="secret \"gcp-auth\" not found"
	Dec 04 20:00:14 addons-153447 kubelet[1201]: E1204 20:00:14.155456    1201 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733342414154831457,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604507,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:00:14 addons-153447 kubelet[1201]: E1204 20:00:14.155553    1201 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733342414154831457,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604507,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:00:23 addons-153447 kubelet[1201]: E1204 20:00:23.911147    1201 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 04 20:00:23 addons-153447 kubelet[1201]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 04 20:00:23 addons-153447 kubelet[1201]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 04 20:00:23 addons-153447 kubelet[1201]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 04 20:00:23 addons-153447 kubelet[1201]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 04 20:00:24 addons-153447 kubelet[1201]: E1204 20:00:24.158093    1201 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733342424157442414,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604507,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:00:24 addons-153447 kubelet[1201]: E1204 20:00:24.158254    1201 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733342424157442414,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604507,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:00:34 addons-153447 kubelet[1201]: E1204 20:00:34.161534    1201 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733342434161021175,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604507,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:00:34 addons-153447 kubelet[1201]: E1204 20:00:34.161576    1201 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733342434161021175,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604507,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:00:44 addons-153447 kubelet[1201]: E1204 20:00:44.165461    1201 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733342444164860716,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604507,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:00:44 addons-153447 kubelet[1201]: E1204 20:00:44.165868    1201 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733342444164860716,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604507,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:00:54 addons-153447 kubelet[1201]: E1204 20:00:54.168937    1201 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733342454168530949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604507,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:00:54 addons-153447 kubelet[1201]: E1204 20:00:54.169226    1201 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733342454168530949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604507,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:00:57 addons-153447 kubelet[1201]: I1204 20:00:57.874127    1201 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 04 20:01:03 addons-153447 kubelet[1201]: I1204 20:01:03.874837    1201 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-7r8d9" secret="" err="secret \"gcp-auth\" not found"
	Dec 04 20:01:04 addons-153447 kubelet[1201]: E1204 20:01:04.172066    1201 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733342464171701064,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604507,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:01:04 addons-153447 kubelet[1201]: E1204 20:01:04.172317    1201 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733342464171701064,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604507,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:01:11 addons-153447 kubelet[1201]: I1204 20:01:11.873679    1201 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-7c65d6cfc9-mq69t" secret="" err="secret \"gcp-auth\" not found"
	Dec 04 20:01:14 addons-153447 kubelet[1201]: E1204 20:01:14.175027    1201 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733342474174555849,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604507,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:01:14 addons-153447 kubelet[1201]: E1204 20:01:14.175069    1201 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733342474174555849,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604507,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [391f85cfe864475a0c2d61369bbd41290d22a8e75ac3329164db31b55ba11afa] <==
	I1204 19:53:35.729036       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1204 19:53:35.752925       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1204 19:53:35.753000       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1204 19:53:35.768392       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1204 19:53:35.768529       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-153447_886ca161-2173-49f8-a3fe-a86bb44a324f!
	I1204 19:53:35.769517       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6df663f4-aede-41c0-9e65-236b97d0f25b", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-153447_886ca161-2173-49f8-a3fe-a86bb44a324f became leader
	I1204 19:53:35.869508       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-153447_886ca161-2173-49f8-a3fe-a86bb44a324f!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-153447 -n addons-153447
helpers_test.go:261: (dbg) Run:  kubectl --context addons-153447 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-153447 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (332.43s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.44s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-153447
addons_test.go:170: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-153447: exit status 82 (2m0.470171522s)

                                                
                                                
-- stdout --
	* Stopping node "addons-153447"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:172: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-153447" : exit status 82
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-153447
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-153447: exit status 11 (21.684162295s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.11:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-153447" : exit status 11
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-153447
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-153447: exit status 11 (6.143493763s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.11:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-153447" : exit status 11
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-153447
addons_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-153447: exit status 11 (6.143582075s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.11:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:185: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-153447" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.44s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 node stop m02 -v=7 --alsologtostderr
E1204 20:12:46.770535   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/functional-763517/client.crt: no such file or directory" logger="UnhandledError"
E1204 20:13:07.252647   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/functional-763517/client.crt: no such file or directory" logger="UnhandledError"
E1204 20:13:48.214419   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/functional-763517/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-739930 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.452058609s)

                                                
                                                
-- stdout --
	* Stopping node "ha-739930-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 20:12:45.459788   31967 out.go:345] Setting OutFile to fd 1 ...
	I1204 20:12:45.459925   31967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 20:12:45.459935   31967 out.go:358] Setting ErrFile to fd 2...
	I1204 20:12:45.459942   31967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 20:12:45.460115   31967 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19985-10581/.minikube/bin
	I1204 20:12:45.460371   31967 mustload.go:65] Loading cluster: ha-739930
	I1204 20:12:45.460768   31967 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:12:45.460789   31967 stop.go:39] StopHost: ha-739930-m02
	I1204 20:12:45.461199   31967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:12:45.461247   31967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:12:45.475775   31967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37059
	I1204 20:12:45.476279   31967 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:12:45.476869   31967 main.go:141] libmachine: Using API Version  1
	I1204 20:12:45.476896   31967 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:12:45.477237   31967 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:12:45.479215   31967 out.go:177] * Stopping node "ha-739930-m02"  ...
	I1204 20:12:45.480534   31967 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1204 20:12:45.480566   31967 main.go:141] libmachine: (ha-739930-m02) Calling .DriverName
	I1204 20:12:45.480772   31967 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1204 20:12:45.480798   31967 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHHostname
	I1204 20:12:45.483145   31967 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:12:45.483551   31967 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:12:45.483574   31967 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:12:45.483704   31967 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHPort
	I1204 20:12:45.483866   31967 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:12:45.484001   31967 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHUsername
	I1204 20:12:45.484149   31967 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02/id_rsa Username:docker}
	I1204 20:12:45.566494   31967 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1204 20:12:45.619559   31967 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1204 20:12:45.672891   31967 main.go:141] libmachine: Stopping "ha-739930-m02"...
	I1204 20:12:45.672920   31967 main.go:141] libmachine: (ha-739930-m02) Calling .GetState
	I1204 20:12:45.674318   31967 main.go:141] libmachine: (ha-739930-m02) Calling .Stop
	I1204 20:12:45.677896   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 0/120
	I1204 20:12:46.679111   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 1/120
	I1204 20:12:47.680588   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 2/120
	I1204 20:12:48.681796   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 3/120
	I1204 20:12:49.683206   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 4/120
	I1204 20:12:50.684509   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 5/120
	I1204 20:12:51.685564   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 6/120
	I1204 20:12:52.686862   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 7/120
	I1204 20:12:53.688085   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 8/120
	I1204 20:12:54.689475   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 9/120
	I1204 20:12:55.691615   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 10/120
	I1204 20:12:56.693821   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 11/120
	I1204 20:12:57.695261   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 12/120
	I1204 20:12:58.696899   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 13/120
	I1204 20:12:59.698101   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 14/120
	I1204 20:13:00.699817   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 15/120
	I1204 20:13:01.701882   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 16/120
	I1204 20:13:02.703080   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 17/120
	I1204 20:13:03.704450   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 18/120
	I1204 20:13:04.705570   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 19/120
	I1204 20:13:05.707697   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 20/120
	I1204 20:13:06.709739   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 21/120
	I1204 20:13:07.711146   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 22/120
	I1204 20:13:08.713544   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 23/120
	I1204 20:13:09.714943   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 24/120
	I1204 20:13:10.716321   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 25/120
	I1204 20:13:11.717734   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 26/120
	I1204 20:13:12.720111   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 27/120
	I1204 20:13:13.721836   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 28/120
	I1204 20:13:14.722929   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 29/120
	I1204 20:13:15.724761   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 30/120
	I1204 20:13:16.725992   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 31/120
	I1204 20:13:17.727607   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 32/120
	I1204 20:13:18.729015   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 33/120
	I1204 20:13:19.731627   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 34/120
	I1204 20:13:20.733240   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 35/120
	I1204 20:13:21.734739   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 36/120
	I1204 20:13:22.736866   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 37/120
	I1204 20:13:23.738262   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 38/120
	I1204 20:13:24.740333   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 39/120
	I1204 20:13:25.742306   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 40/120
	I1204 20:13:26.744383   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 41/120
	I1204 20:13:27.745878   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 42/120
	I1204 20:13:28.747114   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 43/120
	I1204 20:13:29.748540   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 44/120
	I1204 20:13:30.750460   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 45/120
	I1204 20:13:31.751817   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 46/120
	I1204 20:13:32.753146   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 47/120
	I1204 20:13:33.754347   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 48/120
	I1204 20:13:34.755734   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 49/120
	I1204 20:13:35.757771   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 50/120
	I1204 20:13:36.759759   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 51/120
	I1204 20:13:37.761831   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 52/120
	I1204 20:13:38.763325   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 53/120
	I1204 20:13:39.765251   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 54/120
	I1204 20:13:40.766522   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 55/120
	I1204 20:13:41.768305   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 56/120
	I1204 20:13:42.770026   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 57/120
	I1204 20:13:43.771485   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 58/120
	I1204 20:13:44.772738   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 59/120
	I1204 20:13:45.774265   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 60/120
	I1204 20:13:46.775661   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 61/120
	I1204 20:13:47.776883   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 62/120
	I1204 20:13:48.778403   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 63/120
	I1204 20:13:49.780415   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 64/120
	I1204 20:13:50.781803   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 65/120
	I1204 20:13:51.783030   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 66/120
	I1204 20:13:52.784264   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 67/120
	I1204 20:13:53.786270   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 68/120
	I1204 20:13:54.787527   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 69/120
	I1204 20:13:55.789369   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 70/120
	I1204 20:13:56.790703   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 71/120
	I1204 20:13:57.791989   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 72/120
	I1204 20:13:58.793909   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 73/120
	I1204 20:13:59.795347   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 74/120
	I1204 20:14:00.796700   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 75/120
	I1204 20:14:01.798029   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 76/120
	I1204 20:14:02.799258   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 77/120
	I1204 20:14:03.800518   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 78/120
	I1204 20:14:04.801772   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 79/120
	I1204 20:14:05.803401   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 80/120
	I1204 20:14:06.804726   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 81/120
	I1204 20:14:07.806543   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 82/120
	I1204 20:14:08.808056   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 83/120
	I1204 20:14:09.809311   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 84/120
	I1204 20:14:10.810696   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 85/120
	I1204 20:14:11.812156   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 86/120
	I1204 20:14:12.813765   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 87/120
	I1204 20:14:13.815059   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 88/120
	I1204 20:14:14.817040   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 89/120
	I1204 20:14:15.818778   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 90/120
	I1204 20:14:16.820130   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 91/120
	I1204 20:14:17.821520   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 92/120
	I1204 20:14:18.823189   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 93/120
	I1204 20:14:19.824483   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 94/120
	I1204 20:14:20.826428   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 95/120
	I1204 20:14:21.827790   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 96/120
	I1204 20:14:22.829890   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 97/120
	I1204 20:14:23.831462   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 98/120
	I1204 20:14:24.832620   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 99/120
	I1204 20:14:25.834485   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 100/120
	I1204 20:14:26.835747   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 101/120
	I1204 20:14:27.837707   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 102/120
	I1204 20:14:28.839723   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 103/120
	I1204 20:14:29.841046   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 104/120
	I1204 20:14:30.842388   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 105/120
	I1204 20:14:31.843721   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 106/120
	I1204 20:14:32.845886   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 107/120
	I1204 20:14:33.847122   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 108/120
	I1204 20:14:34.848530   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 109/120
	I1204 20:14:35.850593   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 110/120
	I1204 20:14:36.851873   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 111/120
	I1204 20:14:37.853273   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 112/120
	I1204 20:14:38.854610   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 113/120
	I1204 20:14:39.855956   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 114/120
	I1204 20:14:40.857766   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 115/120
	I1204 20:14:41.859174   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 116/120
	I1204 20:14:42.861026   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 117/120
	I1204 20:14:43.862348   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 118/120
	I1204 20:14:44.863776   31967 main.go:141] libmachine: (ha-739930-m02) Waiting for machine to stop 119/120
	I1204 20:14:45.864588   31967 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1204 20:14:45.864782   31967 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-739930 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 status -v=7 --alsologtostderr
E1204 20:14:52.903590   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:371: (dbg) Done: out/minikube-linux-amd64 -p ha-739930 status -v=7 --alsologtostderr: (18.636244761s)
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-739930 status -v=7 --alsologtostderr": 
ha_test.go:380: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-739930 status -v=7 --alsologtostderr": 
ha_test.go:383: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-739930 status -v=7 --alsologtostderr": 
ha_test.go:386: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-739930 status -v=7 --alsologtostderr": 
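For reference, a minimal reproduction sketch (not part of the captured log): these are the same commands the test invoked above, re-issued by hand against an existing ha-739930 profile. The binary path, profile name, and flags are copied verbatim from the log; the exit-code echo is an added illustration, and whether the stop times out again will depend on the local VM state.

	out/minikube-linux-amd64 -p ha-739930 node stop m02 -v=7 --alsologtostderr
	echo "node stop exited with: $?"   # the run above observed exit status 30 here
	out/minikube-linux-amd64 -p ha-739930 status -v=7 --alsologtostderr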
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-739930 -n ha-739930
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-739930 logs -n 25: (1.347093031s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-739930 cp ha-739930-m03:/home/docker/cp-test.txt                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1344431772/001/cp-test_ha-739930-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-739930 cp ha-739930-m03:/home/docker/cp-test.txt                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930:/home/docker/cp-test_ha-739930-m03_ha-739930.txt                       |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n ha-739930 sudo cat                                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | /home/docker/cp-test_ha-739930-m03_ha-739930.txt                                 |           |         |         |                     |                     |
	| cp      | ha-739930 cp ha-739930-m03:/home/docker/cp-test.txt                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m02:/home/docker/cp-test_ha-739930-m03_ha-739930-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n ha-739930-m02 sudo cat                                          | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | /home/docker/cp-test_ha-739930-m03_ha-739930-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-739930 cp ha-739930-m03:/home/docker/cp-test.txt                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m04:/home/docker/cp-test_ha-739930-m03_ha-739930-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n ha-739930-m04 sudo cat                                          | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | /home/docker/cp-test_ha-739930-m03_ha-739930-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-739930 cp testdata/cp-test.txt                                                | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-739930 cp ha-739930-m04:/home/docker/cp-test.txt                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1344431772/001/cp-test_ha-739930-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-739930 cp ha-739930-m04:/home/docker/cp-test.txt                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930:/home/docker/cp-test_ha-739930-m04_ha-739930.txt                       |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n ha-739930 sudo cat                                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | /home/docker/cp-test_ha-739930-m04_ha-739930.txt                                 |           |         |         |                     |                     |
	| cp      | ha-739930 cp ha-739930-m04:/home/docker/cp-test.txt                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m02:/home/docker/cp-test_ha-739930-m04_ha-739930-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n ha-739930-m02 sudo cat                                          | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | /home/docker/cp-test_ha-739930-m04_ha-739930-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-739930 cp ha-739930-m04:/home/docker/cp-test.txt                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m03:/home/docker/cp-test_ha-739930-m04_ha-739930-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n ha-739930-m03 sudo cat                                          | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | /home/docker/cp-test_ha-739930-m04_ha-739930-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-739930 node stop m02 -v=7                                                     | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 20:08:11
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 20:08:11.939431   27912 out.go:345] Setting OutFile to fd 1 ...
	I1204 20:08:11.939545   27912 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 20:08:11.939555   27912 out.go:358] Setting ErrFile to fd 2...
	I1204 20:08:11.939562   27912 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 20:08:11.939744   27912 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19985-10581/.minikube/bin
	I1204 20:08:11.940314   27912 out.go:352] Setting JSON to false
	I1204 20:08:11.941189   27912 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3042,"bootTime":1733339850,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 20:08:11.941293   27912 start.go:139] virtualization: kvm guest
	I1204 20:08:11.944336   27912 out.go:177] * [ha-739930] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1204 20:08:11.945852   27912 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 20:08:11.945847   27912 notify.go:220] Checking for updates...
	I1204 20:08:11.948662   27912 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 20:08:11.950105   27912 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 20:08:11.951395   27912 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 20:08:11.952616   27912 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1204 20:08:11.953838   27912 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 20:08:11.955060   27912 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 20:08:11.990494   27912 out.go:177] * Using the kvm2 driver based on user configuration
	I1204 20:08:11.991825   27912 start.go:297] selected driver: kvm2
	I1204 20:08:11.991844   27912 start.go:901] validating driver "kvm2" against <nil>
	I1204 20:08:11.991856   27912 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 20:08:11.992661   27912 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 20:08:11.992744   27912 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19985-10581/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1204 20:08:12.008005   27912 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1204 20:08:12.008178   27912 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 20:08:12.008532   27912 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 20:08:12.008571   27912 cni.go:84] Creating CNI manager for ""
	I1204 20:08:12.008627   27912 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1204 20:08:12.008639   27912 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1204 20:08:12.008710   27912 start.go:340] cluster config:
	{Name:ha-739930 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1204 20:08:12.008840   27912 iso.go:125] acquiring lock: {Name:mk5fb0f3f6da76e6cd812291a551e1592ef2c232 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 20:08:12.010621   27912 out.go:177] * Starting "ha-739930" primary control-plane node in "ha-739930" cluster
	I1204 20:08:12.011905   27912 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 20:08:12.011946   27912 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1204 20:08:12.011958   27912 cache.go:56] Caching tarball of preloaded images
	I1204 20:08:12.012045   27912 preload.go:172] Found /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 20:08:12.012061   27912 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 20:08:12.012439   27912 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/config.json ...
	I1204 20:08:12.012463   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/config.json: {Name:mk7402f769abcec1c18cda99e23fa60ffac7b3dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:08:12.012602   27912 start.go:360] acquireMachinesLock for ha-739930: {Name:mkf124e8b45170ae95981b24944344de6899c5b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 20:08:12.012630   27912 start.go:364] duration metric: took 16.073µs to acquireMachinesLock for "ha-739930"
	I1204 20:08:12.012648   27912 start.go:93] Provisioning new machine with config: &{Name:ha-739930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 20:08:12.012705   27912 start.go:125] createHost starting for "" (driver="kvm2")
	I1204 20:08:12.014265   27912 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 20:08:12.014396   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:08:12.014435   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:08:12.028697   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39229
	I1204 20:08:12.029103   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:08:12.029651   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:08:12.029671   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:08:12.029950   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:08:12.030110   27912 main.go:141] libmachine: (ha-739930) Calling .GetMachineName
	I1204 20:08:12.030242   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:08:12.030391   27912 start.go:159] libmachine.API.Create for "ha-739930" (driver="kvm2")
	I1204 20:08:12.030413   27912 client.go:168] LocalClient.Create starting
	I1204 20:08:12.030437   27912 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem
	I1204 20:08:12.030469   27912 main.go:141] libmachine: Decoding PEM data...
	I1204 20:08:12.030485   27912 main.go:141] libmachine: Parsing certificate...
	I1204 20:08:12.030532   27912 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem
	I1204 20:08:12.030550   27912 main.go:141] libmachine: Decoding PEM data...
	I1204 20:08:12.030563   27912 main.go:141] libmachine: Parsing certificate...
	I1204 20:08:12.030580   27912 main.go:141] libmachine: Running pre-create checks...
	I1204 20:08:12.030594   27912 main.go:141] libmachine: (ha-739930) Calling .PreCreateCheck
	I1204 20:08:12.030896   27912 main.go:141] libmachine: (ha-739930) Calling .GetConfigRaw
	I1204 20:08:12.031303   27912 main.go:141] libmachine: Creating machine...
	I1204 20:08:12.031315   27912 main.go:141] libmachine: (ha-739930) Calling .Create
	I1204 20:08:12.031447   27912 main.go:141] libmachine: (ha-739930) Creating KVM machine...
	I1204 20:08:12.032790   27912 main.go:141] libmachine: (ha-739930) DBG | found existing default KVM network
	I1204 20:08:12.033408   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:12.033271   27935 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b70}
	I1204 20:08:12.033431   27912 main.go:141] libmachine: (ha-739930) DBG | created network xml: 
	I1204 20:08:12.033442   27912 main.go:141] libmachine: (ha-739930) DBG | <network>
	I1204 20:08:12.033450   27912 main.go:141] libmachine: (ha-739930) DBG |   <name>mk-ha-739930</name>
	I1204 20:08:12.033465   27912 main.go:141] libmachine: (ha-739930) DBG |   <dns enable='no'/>
	I1204 20:08:12.033475   27912 main.go:141] libmachine: (ha-739930) DBG |   
	I1204 20:08:12.033484   27912 main.go:141] libmachine: (ha-739930) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1204 20:08:12.033497   27912 main.go:141] libmachine: (ha-739930) DBG |     <dhcp>
	I1204 20:08:12.033526   27912 main.go:141] libmachine: (ha-739930) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1204 20:08:12.033560   27912 main.go:141] libmachine: (ha-739930) DBG |     </dhcp>
	I1204 20:08:12.033571   27912 main.go:141] libmachine: (ha-739930) DBG |   </ip>
	I1204 20:08:12.033582   27912 main.go:141] libmachine: (ha-739930) DBG |   
	I1204 20:08:12.033602   27912 main.go:141] libmachine: (ha-739930) DBG | </network>
	I1204 20:08:12.033619   27912 main.go:141] libmachine: (ha-739930) DBG | 
	I1204 20:08:12.038715   27912 main.go:141] libmachine: (ha-739930) DBG | trying to create private KVM network mk-ha-739930 192.168.39.0/24...
	I1204 20:08:12.104228   27912 main.go:141] libmachine: (ha-739930) Setting up store path in /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930 ...
	I1204 20:08:12.104263   27912 main.go:141] libmachine: (ha-739930) Building disk image from file:///home/jenkins/minikube-integration/19985-10581/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1204 20:08:12.104273   27912 main.go:141] libmachine: (ha-739930) DBG | private KVM network mk-ha-739930 192.168.39.0/24 created
	I1204 20:08:12.104290   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:12.104148   27935 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 20:08:12.104318   27912 main.go:141] libmachine: (ha-739930) Downloading /home/jenkins/minikube-integration/19985-10581/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19985-10581/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1204 20:08:12.357869   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:12.357760   27935 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa...
	I1204 20:08:12.476934   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:12.476798   27935 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/ha-739930.rawdisk...
	I1204 20:08:12.476961   27912 main.go:141] libmachine: (ha-739930) DBG | Writing magic tar header
	I1204 20:08:12.476973   27912 main.go:141] libmachine: (ha-739930) DBG | Writing SSH key tar header
	I1204 20:08:12.476980   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:12.476911   27935 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930 ...
	I1204 20:08:12.476989   27912 main.go:141] libmachine: (ha-739930) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930
	I1204 20:08:12.477071   27912 main.go:141] libmachine: (ha-739930) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube/machines
	I1204 20:08:12.477126   27912 main.go:141] libmachine: (ha-739930) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930 (perms=drwx------)
	I1204 20:08:12.477140   27912 main.go:141] libmachine: (ha-739930) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 20:08:12.477159   27912 main.go:141] libmachine: (ha-739930) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581
	I1204 20:08:12.477173   27912 main.go:141] libmachine: (ha-739930) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1204 20:08:12.477183   27912 main.go:141] libmachine: (ha-739930) DBG | Checking permissions on dir: /home/jenkins
	I1204 20:08:12.477188   27912 main.go:141] libmachine: (ha-739930) DBG | Checking permissions on dir: /home
	I1204 20:08:12.477199   27912 main.go:141] libmachine: (ha-739930) DBG | Skipping /home - not owner
	I1204 20:08:12.477241   27912 main.go:141] libmachine: (ha-739930) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube/machines (perms=drwxr-xr-x)
	I1204 20:08:12.477265   27912 main.go:141] libmachine: (ha-739930) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube (perms=drwxr-xr-x)
	I1204 20:08:12.477280   27912 main.go:141] libmachine: (ha-739930) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581 (perms=drwxrwxr-x)
	I1204 20:08:12.477294   27912 main.go:141] libmachine: (ha-739930) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1204 20:08:12.477311   27912 main.go:141] libmachine: (ha-739930) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1204 20:08:12.477322   27912 main.go:141] libmachine: (ha-739930) Creating domain...
	I1204 20:08:12.478077   27912 main.go:141] libmachine: (ha-739930) define libvirt domain using xml: 
	I1204 20:08:12.478098   27912 main.go:141] libmachine: (ha-739930) <domain type='kvm'>
	I1204 20:08:12.478108   27912 main.go:141] libmachine: (ha-739930)   <name>ha-739930</name>
	I1204 20:08:12.478120   27912 main.go:141] libmachine: (ha-739930)   <memory unit='MiB'>2200</memory>
	I1204 20:08:12.478128   27912 main.go:141] libmachine: (ha-739930)   <vcpu>2</vcpu>
	I1204 20:08:12.478137   27912 main.go:141] libmachine: (ha-739930)   <features>
	I1204 20:08:12.478144   27912 main.go:141] libmachine: (ha-739930)     <acpi/>
	I1204 20:08:12.478153   27912 main.go:141] libmachine: (ha-739930)     <apic/>
	I1204 20:08:12.478159   27912 main.go:141] libmachine: (ha-739930)     <pae/>
	I1204 20:08:12.478166   27912 main.go:141] libmachine: (ha-739930)     
	I1204 20:08:12.478176   27912 main.go:141] libmachine: (ha-739930)   </features>
	I1204 20:08:12.478183   27912 main.go:141] libmachine: (ha-739930)   <cpu mode='host-passthrough'>
	I1204 20:08:12.478254   27912 main.go:141] libmachine: (ha-739930)   
	I1204 20:08:12.478278   27912 main.go:141] libmachine: (ha-739930)   </cpu>
	I1204 20:08:12.478290   27912 main.go:141] libmachine: (ha-739930)   <os>
	I1204 20:08:12.478313   27912 main.go:141] libmachine: (ha-739930)     <type>hvm</type>
	I1204 20:08:12.478326   27912 main.go:141] libmachine: (ha-739930)     <boot dev='cdrom'/>
	I1204 20:08:12.478335   27912 main.go:141] libmachine: (ha-739930)     <boot dev='hd'/>
	I1204 20:08:12.478344   27912 main.go:141] libmachine: (ha-739930)     <bootmenu enable='no'/>
	I1204 20:08:12.478354   27912 main.go:141] libmachine: (ha-739930)   </os>
	I1204 20:08:12.478361   27912 main.go:141] libmachine: (ha-739930)   <devices>
	I1204 20:08:12.478371   27912 main.go:141] libmachine: (ha-739930)     <disk type='file' device='cdrom'>
	I1204 20:08:12.478384   27912 main.go:141] libmachine: (ha-739930)       <source file='/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/boot2docker.iso'/>
	I1204 20:08:12.478394   27912 main.go:141] libmachine: (ha-739930)       <target dev='hdc' bus='scsi'/>
	I1204 20:08:12.478401   27912 main.go:141] libmachine: (ha-739930)       <readonly/>
	I1204 20:08:12.478416   27912 main.go:141] libmachine: (ha-739930)     </disk>
	I1204 20:08:12.478430   27912 main.go:141] libmachine: (ha-739930)     <disk type='file' device='disk'>
	I1204 20:08:12.478442   27912 main.go:141] libmachine: (ha-739930)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1204 20:08:12.478457   27912 main.go:141] libmachine: (ha-739930)       <source file='/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/ha-739930.rawdisk'/>
	I1204 20:08:12.478467   27912 main.go:141] libmachine: (ha-739930)       <target dev='hda' bus='virtio'/>
	I1204 20:08:12.478475   27912 main.go:141] libmachine: (ha-739930)     </disk>
	I1204 20:08:12.478490   27912 main.go:141] libmachine: (ha-739930)     <interface type='network'>
	I1204 20:08:12.478503   27912 main.go:141] libmachine: (ha-739930)       <source network='mk-ha-739930'/>
	I1204 20:08:12.478512   27912 main.go:141] libmachine: (ha-739930)       <model type='virtio'/>
	I1204 20:08:12.478520   27912 main.go:141] libmachine: (ha-739930)     </interface>
	I1204 20:08:12.478530   27912 main.go:141] libmachine: (ha-739930)     <interface type='network'>
	I1204 20:08:12.478542   27912 main.go:141] libmachine: (ha-739930)       <source network='default'/>
	I1204 20:08:12.478552   27912 main.go:141] libmachine: (ha-739930)       <model type='virtio'/>
	I1204 20:08:12.478599   27912 main.go:141] libmachine: (ha-739930)     </interface>
	I1204 20:08:12.478617   27912 main.go:141] libmachine: (ha-739930)     <serial type='pty'>
	I1204 20:08:12.478622   27912 main.go:141] libmachine: (ha-739930)       <target port='0'/>
	I1204 20:08:12.478628   27912 main.go:141] libmachine: (ha-739930)     </serial>
	I1204 20:08:12.478636   27912 main.go:141] libmachine: (ha-739930)     <console type='pty'>
	I1204 20:08:12.478641   27912 main.go:141] libmachine: (ha-739930)       <target type='serial' port='0'/>
	I1204 20:08:12.478650   27912 main.go:141] libmachine: (ha-739930)     </console>
	I1204 20:08:12.478654   27912 main.go:141] libmachine: (ha-739930)     <rng model='virtio'>
	I1204 20:08:12.478660   27912 main.go:141] libmachine: (ha-739930)       <backend model='random'>/dev/random</backend>
	I1204 20:08:12.478666   27912 main.go:141] libmachine: (ha-739930)     </rng>
	I1204 20:08:12.478671   27912 main.go:141] libmachine: (ha-739930)     
	I1204 20:08:12.478674   27912 main.go:141] libmachine: (ha-739930)     
	I1204 20:08:12.478679   27912 main.go:141] libmachine: (ha-739930)   </devices>
	I1204 20:08:12.478685   27912 main.go:141] libmachine: (ha-739930) </domain>
	I1204 20:08:12.478691   27912 main.go:141] libmachine: (ha-739930) 
	I1204 20:08:12.482962   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:1f:34:29 in network default
	I1204 20:08:12.483451   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:12.483468   27912 main.go:141] libmachine: (ha-739930) Ensuring networks are active...
	I1204 20:08:12.484073   27912 main.go:141] libmachine: (ha-739930) Ensuring network default is active
	I1204 20:08:12.484443   27912 main.go:141] libmachine: (ha-739930) Ensuring network mk-ha-739930 is active
	I1204 20:08:12.485051   27912 main.go:141] libmachine: (ha-739930) Getting domain xml...
	I1204 20:08:12.485709   27912 main.go:141] libmachine: (ha-739930) Creating domain...
	I1204 20:08:13.663232   27912 main.go:141] libmachine: (ha-739930) Waiting to get IP...
	I1204 20:08:13.663928   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:13.664244   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:13.664289   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:13.664239   27935 retry.go:31] will retry after 311.107761ms: waiting for machine to come up
	I1204 20:08:13.976518   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:13.976875   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:13.976897   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:13.976832   27935 retry.go:31] will retry after 302.848525ms: waiting for machine to come up
	I1204 20:08:14.281431   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:14.281818   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:14.281846   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:14.281773   27935 retry.go:31] will retry after 460.768304ms: waiting for machine to come up
	I1204 20:08:14.744364   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:14.744813   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:14.744835   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:14.744754   27935 retry.go:31] will retry after 399.590847ms: waiting for machine to come up
	I1204 20:08:15.146387   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:15.146887   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:15.146911   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:15.146850   27935 retry.go:31] will retry after 733.547268ms: waiting for machine to come up
	I1204 20:08:15.882052   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:15.882481   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:15.882509   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:15.882450   27935 retry.go:31] will retry after 598.816129ms: waiting for machine to come up
	I1204 20:08:16.483323   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:16.483724   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:16.483766   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:16.483669   27935 retry.go:31] will retry after 816.886511ms: waiting for machine to come up
	I1204 20:08:17.302385   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:17.302850   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:17.303157   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:17.303086   27935 retry.go:31] will retry after 1.092347228s: waiting for machine to come up
	I1204 20:08:18.397513   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:18.397955   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:18.397979   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:18.397908   27935 retry.go:31] will retry after 1.349280463s: waiting for machine to come up
	I1204 20:08:19.748591   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:19.749086   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:19.749107   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:19.749051   27935 retry.go:31] will retry after 1.929176971s: waiting for machine to come up
	I1204 20:08:21.681322   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:21.681787   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:21.681821   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:21.681719   27935 retry.go:31] will retry after 2.034104658s: waiting for machine to come up
	I1204 20:08:23.717496   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:23.717880   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:23.717910   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:23.717836   27935 retry.go:31] will retry after 2.982891394s: waiting for machine to come up
	I1204 20:08:26.703937   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:26.704406   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:26.704442   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:26.704358   27935 retry.go:31] will retry after 2.968408416s: waiting for machine to come up
	I1204 20:08:29.675768   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:29.676304   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:29.676332   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:29.676260   27935 retry.go:31] will retry after 5.520024319s: waiting for machine to come up
	I1204 20:08:35.199569   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.200041   27912 main.go:141] libmachine: (ha-739930) Found IP for machine: 192.168.39.183
	I1204 20:08:35.200065   27912 main.go:141] libmachine: (ha-739930) Reserving static IP address...
	I1204 20:08:35.200092   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has current primary IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.200437   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find host DHCP lease matching {name: "ha-739930", mac: "52:54:00:b9:91:f7", ip: "192.168.39.183"} in network mk-ha-739930
	I1204 20:08:35.268817   27912 main.go:141] libmachine: (ha-739930) Reserved static IP address: 192.168.39.183
	I1204 20:08:35.268847   27912 main.go:141] libmachine: (ha-739930) Waiting for SSH to be available...
	I1204 20:08:35.268856   27912 main.go:141] libmachine: (ha-739930) DBG | Getting to WaitForSSH function...
	I1204 20:08:35.271480   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.271869   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:35.271895   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.271987   27912 main.go:141] libmachine: (ha-739930) DBG | Using SSH client type: external
	I1204 20:08:35.272004   27912 main.go:141] libmachine: (ha-739930) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa (-rw-------)
	I1204 20:08:35.272069   27912 main.go:141] libmachine: (ha-739930) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.183 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 20:08:35.272087   27912 main.go:141] libmachine: (ha-739930) DBG | About to run SSH command:
	I1204 20:08:35.272103   27912 main.go:141] libmachine: (ha-739930) DBG | exit 0
	I1204 20:08:35.395351   27912 main.go:141] libmachine: (ha-739930) DBG | SSH cmd err, output: <nil>: 
	I1204 20:08:35.395650   27912 main.go:141] libmachine: (ha-739930) KVM machine creation complete!
	I1204 20:08:35.395986   27912 main.go:141] libmachine: (ha-739930) Calling .GetConfigRaw
	I1204 20:08:35.396534   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:08:35.396731   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:08:35.396857   27912 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1204 20:08:35.396871   27912 main.go:141] libmachine: (ha-739930) Calling .GetState
	I1204 20:08:35.398039   27912 main.go:141] libmachine: Detecting operating system of created instance...
	I1204 20:08:35.398051   27912 main.go:141] libmachine: Waiting for SSH to be available...
	I1204 20:08:35.398055   27912 main.go:141] libmachine: Getting to WaitForSSH function...
	I1204 20:08:35.398060   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:35.400170   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.400525   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:35.400571   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.400650   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:35.400812   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:35.400979   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:35.401117   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:35.401289   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:08:35.401492   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1204 20:08:35.401507   27912 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1204 20:08:35.502303   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 20:08:35.502340   27912 main.go:141] libmachine: Detecting the provisioner...
	I1204 20:08:35.502352   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:35.504752   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.505142   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:35.505165   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.505360   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:35.505545   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:35.505676   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:35.505789   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:35.505915   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:08:35.506073   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1204 20:08:35.506082   27912 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1204 20:08:35.608173   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1204 20:08:35.608233   27912 main.go:141] libmachine: found compatible host: buildroot
	I1204 20:08:35.608240   27912 main.go:141] libmachine: Provisioning with buildroot...
	I1204 20:08:35.608247   27912 main.go:141] libmachine: (ha-739930) Calling .GetMachineName
	I1204 20:08:35.608464   27912 buildroot.go:166] provisioning hostname "ha-739930"
	I1204 20:08:35.608480   27912 main.go:141] libmachine: (ha-739930) Calling .GetMachineName
	I1204 20:08:35.608679   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:35.611354   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.611746   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:35.611772   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.611904   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:35.612062   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:35.612200   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:35.612312   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:35.612460   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:08:35.612630   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1204 20:08:35.612642   27912 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-739930 && echo "ha-739930" | sudo tee /etc/hostname
	I1204 20:08:35.730422   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-739930
	
	I1204 20:08:35.730456   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:35.732817   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.733139   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:35.733168   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.733310   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:35.733480   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:35.733651   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:35.733802   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:35.733983   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:08:35.734154   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1204 20:08:35.734171   27912 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-739930' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-739930/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-739930' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 20:08:35.843780   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 20:08:35.843821   27912 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 20:08:35.843865   27912 buildroot.go:174] setting up certificates
	I1204 20:08:35.843880   27912 provision.go:84] configureAuth start
	I1204 20:08:35.843894   27912 main.go:141] libmachine: (ha-739930) Calling .GetMachineName
	I1204 20:08:35.844232   27912 main.go:141] libmachine: (ha-739930) Calling .GetIP
	I1204 20:08:35.847046   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.847366   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:35.847411   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.847570   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:35.849830   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.850112   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:35.850131   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.850320   27912 provision.go:143] copyHostCerts
	I1204 20:08:35.850348   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 20:08:35.850382   27912 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 20:08:35.850391   27912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 20:08:35.850460   27912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 20:08:35.850567   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 20:08:35.850595   27912 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 20:08:35.850604   27912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 20:08:35.850645   27912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 20:08:35.850723   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 20:08:35.850741   27912 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 20:08:35.850748   27912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 20:08:35.850772   27912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 20:08:35.850823   27912 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.ha-739930 san=[127.0.0.1 192.168.39.183 ha-739930 localhost minikube]
	I1204 20:08:35.983720   27912 provision.go:177] copyRemoteCerts
	I1204 20:08:35.983786   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 20:08:35.983810   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:35.986241   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.986583   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:35.986614   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.986772   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:35.986960   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:35.987093   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:35.987240   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:08:36.068879   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1204 20:08:36.068950   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1204 20:08:36.091202   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1204 20:08:36.091259   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 20:08:36.112918   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1204 20:08:36.112998   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 20:08:36.134856   27912 provision.go:87] duration metric: took 290.963844ms to configureAuth
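The server certificate generated at provision.go:117 above carries both IP and DNS SANs (127.0.0.1, 192.168.39.183, ha-739930, localhost, minikube) and is signed by the CA key pair under .minikube/certs. A minimal, illustrative Go sketch of producing such a CA-signed certificate with the standard crypto/x509 package follows; this is not minikube's actual code, and the throwaway in-memory CA is an assumption made only to keep the example self-contained.

// certsketch.go: illustrative only - generate a CA and a server certificate with
// the IP and DNS SANs seen in the provision.go:117 log line above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (minikube instead reuses ca.pem / ca-key.pem from .minikube/certs).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-739930"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.183")},
		DNSNames:     []string{"ha-739930", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}

The resulting server.pem / server-key.pem pair is what the copyRemoteCerts step above copies to /etc/docker on the guest.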
	I1204 20:08:36.134887   27912 buildroot.go:189] setting minikube options for container-runtime
	I1204 20:08:36.135063   27912 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:08:36.135153   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:36.137760   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.138113   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:36.138138   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.138342   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:36.138505   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:36.138658   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:36.138779   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:36.138924   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:08:36.139114   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1204 20:08:36.139131   27912 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 20:08:36.346218   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 20:08:36.346255   27912 main.go:141] libmachine: Checking connection to Docker...
	I1204 20:08:36.346283   27912 main.go:141] libmachine: (ha-739930) Calling .GetURL
	I1204 20:08:36.347448   27912 main.go:141] libmachine: (ha-739930) DBG | Using libvirt version 6000000
	I1204 20:08:36.349418   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.349723   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:36.349742   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.349920   27912 main.go:141] libmachine: Docker is up and running!
	I1204 20:08:36.349936   27912 main.go:141] libmachine: Reticulating splines...
	I1204 20:08:36.349943   27912 client.go:171] duration metric: took 24.3195237s to LocalClient.Create
	I1204 20:08:36.349963   27912 start.go:167] duration metric: took 24.319574814s to libmachine.API.Create "ha-739930"
	I1204 20:08:36.349976   27912 start.go:293] postStartSetup for "ha-739930" (driver="kvm2")
	I1204 20:08:36.349991   27912 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 20:08:36.350013   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:08:36.350205   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 20:08:36.350228   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:36.351979   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.352286   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:36.352313   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.352437   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:36.352594   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:36.352706   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:36.352816   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:08:36.432460   27912 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 20:08:36.436012   27912 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 20:08:36.436028   27912 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 20:08:36.436089   27912 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 20:08:36.436188   27912 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 20:08:36.436201   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> /etc/ssl/certs/177432.pem
	I1204 20:08:36.436304   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 20:08:36.444678   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 20:08:36.467397   27912 start.go:296] duration metric: took 117.407014ms for postStartSetup
	I1204 20:08:36.467437   27912 main.go:141] libmachine: (ha-739930) Calling .GetConfigRaw
	I1204 20:08:36.467977   27912 main.go:141] libmachine: (ha-739930) Calling .GetIP
	I1204 20:08:36.470186   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.470558   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:36.470586   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.470798   27912 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/config.json ...
	I1204 20:08:36.470974   27912 start.go:128] duration metric: took 24.458260215s to createHost
	I1204 20:08:36.470996   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:36.472973   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.473263   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:36.473284   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.473418   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:36.473574   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:36.473716   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:36.473887   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:36.474035   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:08:36.474202   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1204 20:08:36.474217   27912 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 20:08:36.575008   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733342916.551867748
	
	I1204 20:08:36.575023   27912 fix.go:216] guest clock: 1733342916.551867748
	I1204 20:08:36.575030   27912 fix.go:229] Guest: 2024-12-04 20:08:36.551867748 +0000 UTC Remote: 2024-12-04 20:08:36.470986638 +0000 UTC m=+24.568358011 (delta=80.88111ms)
	I1204 20:08:36.575056   27912 fix.go:200] guest clock delta is within tolerance: 80.88111ms
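The fix.go lines above compare the guest's `date +%s.%N` output with the host clock and accept the 80.88ms delta as within tolerance. A rough sketch of that comparison is shown below, using an illustrative 2-second tolerance; the real threshold is defined inside minikube and is not visible in this log.

// clocksketch.go: illustrative guest-vs-host clock drift check.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Output of `date +%s.%N` captured over SSH (value taken from the log above).
	guestOut := "1733342916.551867748"
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	host := time.Now() // minikube records the host time around the SSH call
	delta := guest.Sub(host)

	const tolerance = 2 * time.Second // illustrative; not minikube's exact value
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}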
	I1204 20:08:36.575080   27912 start.go:83] releasing machines lock for "ha-739930", held for 24.56242194s
	I1204 20:08:36.575103   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:08:36.575310   27912 main.go:141] libmachine: (ha-739930) Calling .GetIP
	I1204 20:08:36.577787   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.578087   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:36.578125   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.578233   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:08:36.578645   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:08:36.578807   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:08:36.578883   27912 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 20:08:36.578924   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:36.579001   27912 ssh_runner.go:195] Run: cat /version.json
	I1204 20:08:36.579018   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:36.581456   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.581787   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:36.581809   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.581864   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.581930   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:36.582100   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:36.582239   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:36.582276   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:36.582299   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.582396   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:08:36.582566   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:36.582713   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:36.582863   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:36.582989   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:08:36.675618   27912 ssh_runner.go:195] Run: systemctl --version
	I1204 20:08:36.681185   27912 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 20:08:36.833908   27912 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 20:08:36.839964   27912 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 20:08:36.840024   27912 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 20:08:36.855758   27912 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 20:08:36.855780   27912 start.go:495] detecting cgroup driver to use...
	I1204 20:08:36.855848   27912 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 20:08:36.870692   27912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 20:08:36.883541   27912 docker.go:217] disabling cri-docker service (if available) ...
	I1204 20:08:36.883596   27912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 20:08:36.896118   27912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 20:08:36.908920   27912 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 20:08:37.025056   27912 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 20:08:37.187310   27912 docker.go:233] disabling docker service ...
	I1204 20:08:37.187365   27912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 20:08:37.200934   27912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 20:08:37.212871   27912 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 20:08:37.332646   27912 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 20:08:37.440309   27912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 20:08:37.453353   27912 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 20:08:37.470970   27912 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 20:08:37.471030   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:08:37.480927   27912 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 20:08:37.481009   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:08:37.491149   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:08:37.500802   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:08:37.510374   27912 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 20:08:37.520079   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:08:37.529955   27912 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:08:37.545993   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:08:37.555622   27912 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 20:08:37.564180   27912 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 20:08:37.564228   27912 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 20:08:37.576296   27912 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 20:08:37.585144   27912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 20:08:37.693931   27912 ssh_runner.go:195] Run: sudo systemctl restart crio
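The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf with sed (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged-port sysctl) and then reloads systemd and restarts crio. The same kind of in-place substitution can be sketched in Go as below; this is only an illustration of the two simplest edits, not the tool's implementation, and it must run with permission to write the drop-in file.

// criosketch.go: illustrative in-place rewrite of a CRI-O drop-in,
// mirroring the sed-based edits in the log above.
package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf" // path from the log; run as root
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	// pause_image = "registry.k8s.io/pause:3.10"
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	// cgroup_manager = "cgroupfs"
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, data, 0o644); err != nil {
		panic(err)
	}
	// minikube then runs `systemctl daemon-reload` and `systemctl restart crio`.
}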
	I1204 20:08:37.777449   27912 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 20:08:37.777509   27912 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 20:08:37.781553   27912 start.go:563] Will wait 60s for crictl version
	I1204 20:08:37.781604   27912 ssh_runner.go:195] Run: which crictl
	I1204 20:08:37.784811   27912 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 20:08:37.822634   27912 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 20:08:37.822702   27912 ssh_runner.go:195] Run: crio --version
	I1204 20:08:37.848190   27912 ssh_runner.go:195] Run: crio --version
	I1204 20:08:37.873431   27912 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 20:08:37.874606   27912 main.go:141] libmachine: (ha-739930) Calling .GetIP
	I1204 20:08:37.877259   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:37.877590   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:37.877619   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:37.877786   27912 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1204 20:08:37.881175   27912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 20:08:37.892903   27912 kubeadm.go:883] updating cluster {Name:ha-739930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 20:08:37.892996   27912 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 20:08:37.893068   27912 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 20:08:37.926070   27912 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1204 20:08:37.926123   27912 ssh_runner.go:195] Run: which lz4
	I1204 20:08:37.929507   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1204 20:08:37.929636   27912 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1204 20:08:37.933391   27912 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1204 20:08:37.933415   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1204 20:08:39.139354   27912 crio.go:462] duration metric: took 1.209791733s to copy over tarball
	I1204 20:08:39.139460   27912 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1204 20:08:41.096167   27912 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.956678939s)
	I1204 20:08:41.096191   27912 crio.go:469] duration metric: took 1.956790325s to extract the tarball
	I1204 20:08:41.096199   27912 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1204 20:08:41.132019   27912 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 20:08:41.174932   27912 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 20:08:41.174955   27912 cache_images.go:84] Images are preloaded, skipping loading
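The preload logic above is a guarded copy: `crictl images --output json` shows the expected images are missing, a `stat` on /preloaded.tar.lz4 confirms the tarball is not on the guest, the ~392 MB tarball is copied over SSH and extracted into /var, and a second `crictl images` confirms the images are now preloaded. A simplified sketch of the "copy only if missing" part is shown below, shelling out to the system ssh/scp binaries; the helper itself is illustrative, and minikube actually streams the file through its own SSH session and writes it with root privileges.

// preloadsketch.go: illustrative "copy only if missing" over SSH, approximating
// the stat + scp sequence in the log above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const (
		host   = "docker@192.168.39.183" // guest address and SSH user from the log
		key    = "/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa"
		local  = "preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4"
		remote = "/preloaded.tar.lz4"
	)
	// Existence check: a zero exit status means the tarball is already there.
	if err := exec.Command("ssh", "-i", key, host, "test", "-f", remote).Run(); err == nil {
		fmt.Println("preload already present, skipping copy")
		return
	}
	// Copy the tarball; plain scp to / would additionally need write permission there.
	if out, err := exec.Command("scp", "-i", key, local, host+":"+remote).CombinedOutput(); err != nil {
		panic(fmt.Sprintf("scp failed: %v\n%s", err, out))
	}
}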
	I1204 20:08:41.174962   27912 kubeadm.go:934] updating node { 192.168.39.183 8443 v1.31.2 crio true true} ...
	I1204 20:08:41.175056   27912 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-739930 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.183
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 20:08:41.175118   27912 ssh_runner.go:195] Run: crio config
	I1204 20:08:41.217894   27912 cni.go:84] Creating CNI manager for ""
	I1204 20:08:41.217917   27912 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1204 20:08:41.217927   27912 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 20:08:41.217952   27912 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.183 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-739930 NodeName:ha-739930 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.183"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.183 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 20:08:41.218081   27912 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.183
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-739930"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.183"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.183"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
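The kubeadm.yaml above is rendered from the options listed at kubeadm.go:189 (node IP, node name, CRI socket, pod and service CIDRs, cgroup driver). A hedged sketch of that kind of rendering with text/template follows; the template fragment is trimmed and illustrative, not minikube's actual template.

// kubeadmtmpl.go: illustrative rendering of a kubeadm InitConfiguration fragment
// from the values seen in the log above (trimmed template, not minikube's own).
package main

import (
	"os"
	"text/template"
)

const frag = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(frag))
	vals := map[string]interface{}{
		"NodeIP":        "192.168.39.183",
		"APIServerPort": 8443,
		"CRISocket":     "unix:///var/run/crio/crio.sock",
		"NodeName":      "ha-739930",
	}
	if err := t.Execute(os.Stdout, vals); err != nil {
		panic(err)
	}
}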
	
	I1204 20:08:41.218111   27912 kube-vip.go:115] generating kube-vip config ...
	I1204 20:08:41.218165   27912 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1204 20:08:41.233083   27912 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1204 20:08:41.233174   27912 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1204 20:08:41.233229   27912 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 20:08:41.242410   27912 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 20:08:41.242479   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1204 20:08:41.251172   27912 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1204 20:08:41.266346   27912 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 20:08:41.281669   27912 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1204 20:08:41.296753   27912 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1204 20:08:41.311501   27912 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1204 20:08:41.314975   27912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
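The bash one-liner above makes the /etc/hosts update idempotent: any existing control-plane.minikube.internal entry is filtered out, the fresh "192.168.39.254	control-plane.minikube.internal" line is appended, and the temporary file is copied back over /etc/hosts. The same logic as a short Go sketch (illustrative; it needs permission to write /etc/hosts):

// hostssketch.go: illustrative idempotent /etc/hosts update, mirroring the
// grep -v / echo / cp pipeline in the log line above.
package main

import (
	"os"
	"strings"
)

func main() {
	const (
		hostsFile = "/etc/hosts"
		name      = "control-plane.minikube.internal"
		entry     = "192.168.39.254\t" + name
	)
	data, err := os.ReadFile(hostsFile)
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any previous entry for the name, whatever IP it pointed at.
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile(hostsFile, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}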
	I1204 20:08:41.325862   27912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 20:08:41.458198   27912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 20:08:41.473798   27912 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930 for IP: 192.168.39.183
	I1204 20:08:41.473814   27912 certs.go:194] generating shared ca certs ...
	I1204 20:08:41.473829   27912 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:08:41.473951   27912 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 20:08:41.473998   27912 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 20:08:41.474012   27912 certs.go:256] generating profile certs ...
	I1204 20:08:41.474071   27912 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.key
	I1204 20:08:41.474104   27912 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.crt with IP's: []
	I1204 20:08:41.679553   27912 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.crt ...
	I1204 20:08:41.679577   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.crt: {Name:mk3cb32626a63b25e9bcb53dbf57982e8c59176a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:08:41.679756   27912 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.key ...
	I1204 20:08:41.679770   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.key: {Name:mk5952f9a719bbb3868bb675769b7b60346c6fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:08:41.679866   27912 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.84e45395
	I1204 20:08:41.679888   27912 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.84e45395 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.183 192.168.39.254]
	I1204 20:08:42.002083   27912 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.84e45395 ...
	I1204 20:08:42.002109   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.84e45395: {Name:mk5f9c87f1a9d17c216fb1ba76a871a4d200a2f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:08:42.002298   27912 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.84e45395 ...
	I1204 20:08:42.002314   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.84e45395: {Name:mkbc19c0135d212682268a777ef3380b2e19b0ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:08:42.002409   27912 certs.go:381] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.84e45395 -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt
	I1204 20:08:42.002519   27912 certs.go:385] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.84e45395 -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key
	I1204 20:08:42.002573   27912 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key
	I1204 20:08:42.002587   27912 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.crt with IP's: []
	I1204 20:08:42.211018   27912 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.crt ...
	I1204 20:08:42.211049   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.crt: {Name:mkf1a9add2f9343bc4f70a7fa70f135cc4d00f4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:08:42.211250   27912 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key ...
	I1204 20:08:42.211265   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key: {Name:mkb8fc6229780db95a674383629b517d0cfa035d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:08:42.211361   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1204 20:08:42.211400   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1204 20:08:42.211422   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1204 20:08:42.211442   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1204 20:08:42.211459   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1204 20:08:42.211477   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1204 20:08:42.211491   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1204 20:08:42.211508   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1204 20:08:42.211575   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 20:08:42.211622   27912 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 20:08:42.211635   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 20:08:42.211671   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 20:08:42.211703   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 20:08:42.211734   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 20:08:42.211789   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 20:08:42.211826   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem -> /usr/share/ca-certificates/17743.pem
	I1204 20:08:42.211847   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> /usr/share/ca-certificates/177432.pem
	I1204 20:08:42.211866   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:08:42.212397   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 20:08:42.248354   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 20:08:42.283210   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 20:08:42.315759   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 20:08:42.337377   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1204 20:08:42.359236   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1204 20:08:42.380567   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 20:08:42.402068   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1204 20:08:42.423840   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 20:08:42.445088   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 20:08:42.466154   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 20:08:42.487261   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 20:08:42.502237   27912 ssh_runner.go:195] Run: openssl version
	I1204 20:08:42.507399   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 20:08:42.517386   27912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:08:42.521412   27912 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:08:42.521456   27912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:08:42.526682   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 20:08:42.536595   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 20:08:42.546422   27912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 20:08:42.550778   27912 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 20:08:42.550834   27912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 20:08:42.556366   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 20:08:42.567110   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 20:08:42.577648   27912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 20:08:42.581927   27912 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 20:08:42.581970   27912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 20:08:42.587418   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
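Each CA bundle above is installed twice: copied into /usr/share/ca-certificates and then linked into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL locates trust anchors by hash. A small sketch of computing the hash and creating the link by shelling out to openssl, as the commands above do (the certificate path is taken from the log, the helper itself is illustrative):

// cahashsketch.go: illustrative subject-hash symlink installation for a CA,
// equivalent to the openssl + ln -fs steps in the log above.
package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	const pemPath = "/usr/share/ca-certificates/minikubeCA.pem" // from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link, like `ln -fs`
	if err := os.Symlink(pemPath, link); err != nil {
		panic(err)
	}
}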
	I1204 20:08:42.598017   27912 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 20:08:42.601905   27912 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1204 20:08:42.601960   27912 kubeadm.go:392] StartCluster: {Name:ha-739930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 20:08:42.602029   27912 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 20:08:42.602067   27912 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 20:08:42.638904   27912 cri.go:89] found id: ""
	I1204 20:08:42.638964   27912 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 20:08:42.648459   27912 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 20:08:42.657551   27912 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 20:08:42.666519   27912 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 20:08:42.666536   27912 kubeadm.go:157] found existing configuration files:
	
	I1204 20:08:42.666571   27912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 20:08:42.675036   27912 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 20:08:42.675086   27912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 20:08:42.683928   27912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 20:08:42.692253   27912 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 20:08:42.692304   27912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 20:08:42.701014   27912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 20:08:42.709166   27912 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 20:08:42.709204   27912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 20:08:42.718070   27912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 20:08:42.726526   27912 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 20:08:42.726584   27912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
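
The cleanup pass above follows a simple pattern: for each expected kubeconfig under /etc/kubernetes, grep for the control-plane endpoint and remove the file when the check fails (here every file is missing, so each grep exits with status 2 and the rm is effectively a no-op). A minimal Go sketch of that pattern, assuming the paths and endpoint from this run; runCmd and the hard-coded file list are illustrative, not minikube's actual helpers:

package main

import (
	"fmt"
	"os/exec"
)

// runCmd runs a command and reports only whether it exited zero.
func runCmd(args ...string) error {
	return exec.Command(args[0], args[1:]...).Run()
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint (or the file) is missing,
		// in which case the config is treated as stale and removed.
		if err := runCmd("sudo", "grep", endpoint, f); err != nil {
			_ = runCmd("sudo", "rm", "-f", f)
		}
	}
	fmt.Println("stale kubeconfig cleanup done")
}
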
	I1204 20:08:42.735312   27912 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 20:08:42.947971   27912 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 20:08:54.006500   27912 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1204 20:08:54.006550   27912 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 20:08:54.006630   27912 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 20:08:54.006748   27912 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 20:08:54.006901   27912 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1204 20:08:54.006999   27912 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 20:08:54.008316   27912 out.go:235]   - Generating certificates and keys ...
	I1204 20:08:54.008397   27912 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 20:08:54.008459   27912 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 20:08:54.008548   27912 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1204 20:08:54.008635   27912 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1204 20:08:54.008695   27912 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1204 20:08:54.008737   27912 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1204 20:08:54.008784   27912 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1204 20:08:54.008879   27912 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-739930 localhost] and IPs [192.168.39.183 127.0.0.1 ::1]
	I1204 20:08:54.008924   27912 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1204 20:08:54.009023   27912 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-739930 localhost] and IPs [192.168.39.183 127.0.0.1 ::1]
	I1204 20:08:54.009133   27912 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1204 20:08:54.009245   27912 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1204 20:08:54.009321   27912 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1204 20:08:54.009403   27912 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 20:08:54.009487   27912 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 20:08:54.009570   27912 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1204 20:08:54.009644   27912 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 20:08:54.009733   27912 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 20:08:54.009810   27912 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 20:08:54.009903   27912 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 20:08:54.009962   27912 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 20:08:54.011358   27912 out.go:235]   - Booting up control plane ...
	I1204 20:08:54.011484   27912 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 20:08:54.011569   27912 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 20:08:54.011635   27912 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 20:08:54.011728   27912 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 20:08:54.011808   27912 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 20:08:54.011842   27912 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 20:08:54.011948   27912 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1204 20:08:54.012038   27912 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1204 20:08:54.012094   27912 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001462808s
	I1204 20:08:54.012172   27912 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1204 20:08:54.012262   27912 kubeadm.go:310] [api-check] The API server is healthy after 6.02019816s
	I1204 20:08:54.012392   27912 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1204 20:08:54.012536   27912 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1204 20:08:54.012619   27912 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1204 20:08:54.012799   27912 kubeadm.go:310] [mark-control-plane] Marking the node ha-739930 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1204 20:08:54.012886   27912 kubeadm.go:310] [bootstrap-token] Using token: borrl1.p9d68mzgpldkynyz
	I1204 20:08:54.013953   27912 out.go:235]   - Configuring RBAC rules ...
	I1204 20:08:54.014046   27912 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1204 20:08:54.014140   27912 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1204 20:08:54.014307   27912 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1204 20:08:54.014473   27912 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1204 20:08:54.014571   27912 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1204 20:08:54.014670   27912 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1204 20:08:54.014826   27912 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1204 20:08:54.014865   27912 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1204 20:08:54.014923   27912 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1204 20:08:54.014933   27912 kubeadm.go:310] 
	I1204 20:08:54.015010   27912 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1204 20:08:54.015019   27912 kubeadm.go:310] 
	I1204 20:08:54.015144   27912 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1204 20:08:54.015156   27912 kubeadm.go:310] 
	I1204 20:08:54.015195   27912 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1204 20:08:54.015270   27912 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1204 20:08:54.015320   27912 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1204 20:08:54.015326   27912 kubeadm.go:310] 
	I1204 20:08:54.015392   27912 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1204 20:08:54.015402   27912 kubeadm.go:310] 
	I1204 20:08:54.015442   27912 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1204 20:08:54.015451   27912 kubeadm.go:310] 
	I1204 20:08:54.015493   27912 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1204 20:08:54.015582   27912 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1204 20:08:54.015675   27912 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1204 20:08:54.015684   27912 kubeadm.go:310] 
	I1204 20:08:54.015786   27912 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1204 20:08:54.015895   27912 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1204 20:08:54.015905   27912 kubeadm.go:310] 
	I1204 20:08:54.016003   27912 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token borrl1.p9d68mzgpldkynyz \
	I1204 20:08:54.016093   27912 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 \
	I1204 20:08:54.016113   27912 kubeadm.go:310] 	--control-plane 
	I1204 20:08:54.016117   27912 kubeadm.go:310] 
	I1204 20:08:54.016205   27912 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1204 20:08:54.016217   27912 kubeadm.go:310] 
	I1204 20:08:54.016293   27912 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token borrl1.p9d68mzgpldkynyz \
	I1204 20:08:54.016397   27912 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 
	I1204 20:08:54.016411   27912 cni.go:84] Creating CNI manager for ""
	I1204 20:08:54.016416   27912 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1204 20:08:54.017939   27912 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1204 20:08:54.019064   27912 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1204 20:08:54.023950   27912 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1204 20:08:54.023967   27912 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1204 20:08:54.041186   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1204 20:08:54.359013   27912 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 20:08:54.359083   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 20:08:54.359121   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-739930 minikube.k8s.io/updated_at=2024_12_04T20_08_54_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59 minikube.k8s.io/name=ha-739930 minikube.k8s.io/primary=true
	I1204 20:08:54.395990   27912 ops.go:34] apiserver oom_adj: -16
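
The oom_adj line above comes from reading /proc/<pid>/oom_adj for the kube-apiserver process (the command is shown a few lines earlier); -16 means the kernel is strongly discouraged from OOM-killing the apiserver. A small sketch of that check, assuming pgrep is available and taking the first matching PID:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Find the kube-apiserver PID; pgrep may return several, take the first.
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("kube-apiserver not running:", err)
		return
	}
	pid := strings.Fields(string(out))[0]
	// Read the legacy oom_adj file, as in the log (oom_score_adj is the modern knob).
	data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(data)))
}
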
	I1204 20:08:54.548524   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 20:08:55.049558   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 20:08:55.548661   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 20:08:56.048619   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 20:08:56.549070   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 20:08:57.048848   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 20:08:57.549554   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 20:08:58.048830   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 20:08:58.161390   27912 kubeadm.go:1113] duration metric: took 3.80235484s to wait for elevateKubeSystemPrivileges
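
The repeated `kubectl get sa default` calls above are a readiness poll: the cluster-admin binding for kube-system can only be considered settled once the default ServiceAccount exists, so minikube retries roughly every 500ms and records the total wait (about 3.8s here). A minimal sketch of the same poll, assuming the binary and kubeconfig paths from this run and an arbitrary 2-minute deadline:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.31.2/kubectl"
	kubeconfig := "--kubeconfig=/var/lib/minikube/kubeconfig"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Succeeds only once the default ServiceAccount has been created.
		if exec.Command("sudo", kubectl, "get", "sa", "default", kubeconfig).Run() == nil {
			fmt.Println("default ServiceAccount is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for default ServiceAccount")
}
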
	I1204 20:08:58.161423   27912 kubeadm.go:394] duration metric: took 15.559467425s to StartCluster
	I1204 20:08:58.161444   27912 settings.go:142] acquiring lock: {Name:mk51df5708ef0b8fe125ead566b8d3e857234e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:08:58.161514   27912 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 20:08:58.162310   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/kubeconfig: {Name:mk338cb7deb77a607d0c199d94a556bdfd19bef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:08:58.162533   27912 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 20:08:58.162562   27912 start.go:241] waiting for startup goroutines ...
	I1204 20:08:58.162544   27912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1204 20:08:58.162557   27912 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 20:08:58.162652   27912 addons.go:69] Setting storage-provisioner=true in profile "ha-739930"
	I1204 20:08:58.162661   27912 addons.go:69] Setting default-storageclass=true in profile "ha-739930"
	I1204 20:08:58.162674   27912 addons.go:234] Setting addon storage-provisioner=true in "ha-739930"
	I1204 20:08:58.162693   27912 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-739930"
	I1204 20:08:58.162706   27912 host.go:66] Checking if "ha-739930" exists ...
	I1204 20:08:58.162718   27912 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:08:58.163133   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:08:58.163137   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:08:58.163158   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:08:58.163161   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:08:58.177830   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45307
	I1204 20:08:58.177986   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38189
	I1204 20:08:58.178299   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:08:58.178427   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:08:58.178779   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:08:58.178807   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:08:58.178981   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:08:58.179001   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:08:58.179143   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:08:58.179321   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:08:58.179506   27912 main.go:141] libmachine: (ha-739930) Calling .GetState
	I1204 20:08:58.179650   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:08:58.179676   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:08:58.181633   27912 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 20:08:58.181895   27912 kapi.go:59] client config for ha-739930: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.crt", KeyFile:"/home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.key", CAFile:"/home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1204 20:08:58.182308   27912 cert_rotation.go:140] Starting client certificate rotation controller
	I1204 20:08:58.182493   27912 addons.go:234] Setting addon default-storageclass=true in "ha-739930"
	I1204 20:08:58.182532   27912 host.go:66] Checking if "ha-739930" exists ...
	I1204 20:08:58.182790   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:08:58.182824   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:08:58.194517   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40647
	I1204 20:08:58.194972   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:08:58.195484   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:08:58.195512   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:08:58.195872   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:08:58.196070   27912 main.go:141] libmachine: (ha-739930) Calling .GetState
	I1204 20:08:58.197298   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45747
	I1204 20:08:58.197610   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:08:58.197777   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:08:58.198114   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:08:58.198138   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:08:58.198429   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:08:58.198834   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:08:58.198862   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:08:58.199309   27912 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 20:08:58.200430   27912 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 20:08:58.200452   27912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 20:08:58.200469   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:58.203367   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:58.203781   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:58.203808   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:58.203943   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:58.204099   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:58.204233   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:58.204358   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:08:58.213101   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33355
	I1204 20:08:58.213504   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:08:58.214031   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:08:58.214059   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:08:58.214380   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:08:58.214549   27912 main.go:141] libmachine: (ha-739930) Calling .GetState
	I1204 20:08:58.216016   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:08:58.216199   27912 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 20:08:58.216211   27912 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 20:08:58.216223   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:58.218960   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:58.219280   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:58.219317   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:58.219479   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:58.219661   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:58.219835   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:58.219997   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:08:58.277316   27912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1204 20:08:58.357820   27912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 20:08:58.374108   27912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 20:08:58.721001   27912 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
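
The long sed pipeline above rewrites the coredns ConfigMap in place: it adds a `log` directive before `errors` and a `hosts` block (resolving host.minikube.internal to the gateway IP) immediately before the existing `forward . /etc/resolv.conf` directive, then feeds the result to `kubectl replace`. Reconstructed from that script (not captured from the cluster), the relevant part of the Corefile should end up looking roughly like:

        log
        errors
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf

The `fallthrough` keeps every other name flowing on to the normal kubernetes and forward plugins.
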
	I1204 20:08:59.051895   27912 main.go:141] libmachine: Making call to close driver server
	I1204 20:08:59.051921   27912 main.go:141] libmachine: (ha-739930) Calling .Close
	I1204 20:08:59.051951   27912 main.go:141] libmachine: Making call to close driver server
	I1204 20:08:59.051972   27912 main.go:141] libmachine: (ha-739930) Calling .Close
	I1204 20:08:59.052204   27912 main.go:141] libmachine: Successfully made call to close driver server
	I1204 20:08:59.052222   27912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 20:08:59.052231   27912 main.go:141] libmachine: Making call to close driver server
	I1204 20:08:59.052241   27912 main.go:141] libmachine: (ha-739930) Calling .Close
	I1204 20:08:59.052293   27912 main.go:141] libmachine: Successfully made call to close driver server
	I1204 20:08:59.052317   27912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 20:08:59.052325   27912 main.go:141] libmachine: Making call to close driver server
	I1204 20:08:59.052322   27912 main.go:141] libmachine: (ha-739930) DBG | Closing plugin on server side
	I1204 20:08:59.052332   27912 main.go:141] libmachine: (ha-739930) Calling .Close
	I1204 20:08:59.052462   27912 main.go:141] libmachine: Successfully made call to close driver server
	I1204 20:08:59.052473   27912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 20:08:59.053776   27912 main.go:141] libmachine: (ha-739930) DBG | Closing plugin on server side
	I1204 20:08:59.053794   27912 main.go:141] libmachine: Successfully made call to close driver server
	I1204 20:08:59.053805   27912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 20:08:59.053870   27912 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1204 20:08:59.053894   27912 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1204 20:08:59.053992   27912 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1204 20:08:59.054003   27912 round_trippers.go:469] Request Headers:
	I1204 20:08:59.054010   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:08:59.054014   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:08:59.064602   27912 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1204 20:08:59.065317   27912 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1204 20:08:59.065335   27912 round_trippers.go:469] Request Headers:
	I1204 20:08:59.065347   27912 round_trippers.go:473]     Content-Type: application/json
	I1204 20:08:59.065354   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:08:59.065359   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:08:59.068638   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:08:59.068754   27912 main.go:141] libmachine: Making call to close driver server
	I1204 20:08:59.068772   27912 main.go:141] libmachine: (ha-739930) Calling .Close
	I1204 20:08:59.068971   27912 main.go:141] libmachine: Successfully made call to close driver server
	I1204 20:08:59.068989   27912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 20:08:59.069005   27912 main.go:141] libmachine: (ha-739930) DBG | Closing plugin on server side
	I1204 20:08:59.071139   27912 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1204 20:08:59.072109   27912 addons.go:510] duration metric: took 909.550558ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1204 20:08:59.072142   27912 start.go:246] waiting for cluster config update ...
	I1204 20:08:59.072151   27912 start.go:255] writing updated cluster config ...
	I1204 20:08:59.073463   27912 out.go:201] 
	I1204 20:08:59.074725   27912 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:08:59.074813   27912 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/config.json ...
	I1204 20:08:59.076300   27912 out.go:177] * Starting "ha-739930-m02" control-plane node in "ha-739930" cluster
	I1204 20:08:59.077339   27912 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 20:08:59.077359   27912 cache.go:56] Caching tarball of preloaded images
	I1204 20:08:59.077447   27912 preload.go:172] Found /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 20:08:59.077461   27912 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 20:08:59.077541   27912 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/config.json ...
	I1204 20:08:59.077723   27912 start.go:360] acquireMachinesLock for ha-739930-m02: {Name:mkf124e8b45170ae95981b24944344de6899c5b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 20:08:59.077776   27912 start.go:364] duration metric: took 30.982µs to acquireMachinesLock for "ha-739930-m02"
	I1204 20:08:59.077798   27912 start.go:93] Provisioning new machine with config: &{Name:ha-739930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 20:08:59.077880   27912 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1204 20:08:59.079261   27912 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 20:08:59.079340   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:08:59.079368   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:08:59.093684   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44915
	I1204 20:08:59.094078   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:08:59.094558   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:08:59.094579   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:08:59.094913   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:08:59.095089   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetMachineName
	I1204 20:08:59.095236   27912 main.go:141] libmachine: (ha-739930-m02) Calling .DriverName
	I1204 20:08:59.095406   27912 start.go:159] libmachine.API.Create for "ha-739930" (driver="kvm2")
	I1204 20:08:59.095437   27912 client.go:168] LocalClient.Create starting
	I1204 20:08:59.095465   27912 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem
	I1204 20:08:59.095493   27912 main.go:141] libmachine: Decoding PEM data...
	I1204 20:08:59.095505   27912 main.go:141] libmachine: Parsing certificate...
	I1204 20:08:59.095551   27912 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem
	I1204 20:08:59.095568   27912 main.go:141] libmachine: Decoding PEM data...
	I1204 20:08:59.095579   27912 main.go:141] libmachine: Parsing certificate...
	I1204 20:08:59.095595   27912 main.go:141] libmachine: Running pre-create checks...
	I1204 20:08:59.095602   27912 main.go:141] libmachine: (ha-739930-m02) Calling .PreCreateCheck
	I1204 20:08:59.095756   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetConfigRaw
	I1204 20:08:59.096074   27912 main.go:141] libmachine: Creating machine...
	I1204 20:08:59.096086   27912 main.go:141] libmachine: (ha-739930-m02) Calling .Create
	I1204 20:08:59.096214   27912 main.go:141] libmachine: (ha-739930-m02) Creating KVM machine...
	I1204 20:08:59.097249   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found existing default KVM network
	I1204 20:08:59.097426   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found existing private KVM network mk-ha-739930
	I1204 20:08:59.097515   27912 main.go:141] libmachine: (ha-739930-m02) Setting up store path in /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02 ...
	I1204 20:08:59.097549   27912 main.go:141] libmachine: (ha-739930-m02) Building disk image from file:///home/jenkins/minikube-integration/19985-10581/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1204 20:08:59.097603   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:08:59.097507   28291 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 20:08:59.097713   27912 main.go:141] libmachine: (ha-739930-m02) Downloading /home/jenkins/minikube-integration/19985-10581/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19985-10581/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1204 20:08:59.334730   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:08:59.334621   28291 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02/id_rsa...
	I1204 20:08:59.653553   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:08:59.653411   28291 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02/ha-739930-m02.rawdisk...
	I1204 20:08:59.653587   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Writing magic tar header
	I1204 20:08:59.653647   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Writing SSH key tar header
	I1204 20:08:59.653678   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:08:59.653561   28291 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02 ...
	I1204 20:08:59.653704   27912 main.go:141] libmachine: (ha-739930-m02) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02 (perms=drwx------)
	I1204 20:08:59.653726   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02
	I1204 20:08:59.653737   27912 main.go:141] libmachine: (ha-739930-m02) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube/machines (perms=drwxr-xr-x)
	I1204 20:08:59.653758   27912 main.go:141] libmachine: (ha-739930-m02) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube (perms=drwxr-xr-x)
	I1204 20:08:59.653773   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube/machines
	I1204 20:08:59.653785   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 20:08:59.653796   27912 main.go:141] libmachine: (ha-739930-m02) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581 (perms=drwxrwxr-x)
	I1204 20:08:59.653813   27912 main.go:141] libmachine: (ha-739930-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1204 20:08:59.653825   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581
	I1204 20:08:59.653838   27912 main.go:141] libmachine: (ha-739930-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1204 20:08:59.653850   27912 main.go:141] libmachine: (ha-739930-m02) Creating domain...
	I1204 20:08:59.653865   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1204 20:08:59.653875   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Checking permissions on dir: /home/jenkins
	I1204 20:08:59.653889   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Checking permissions on dir: /home
	I1204 20:08:59.653903   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Skipping /home - not owner
	I1204 20:08:59.654725   27912 main.go:141] libmachine: (ha-739930-m02) define libvirt domain using xml: 
	I1204 20:08:59.654740   27912 main.go:141] libmachine: (ha-739930-m02) <domain type='kvm'>
	I1204 20:08:59.654751   27912 main.go:141] libmachine: (ha-739930-m02)   <name>ha-739930-m02</name>
	I1204 20:08:59.654763   27912 main.go:141] libmachine: (ha-739930-m02)   <memory unit='MiB'>2200</memory>
	I1204 20:08:59.654775   27912 main.go:141] libmachine: (ha-739930-m02)   <vcpu>2</vcpu>
	I1204 20:08:59.654788   27912 main.go:141] libmachine: (ha-739930-m02)   <features>
	I1204 20:08:59.654796   27912 main.go:141] libmachine: (ha-739930-m02)     <acpi/>
	I1204 20:08:59.654806   27912 main.go:141] libmachine: (ha-739930-m02)     <apic/>
	I1204 20:08:59.654818   27912 main.go:141] libmachine: (ha-739930-m02)     <pae/>
	I1204 20:08:59.654837   27912 main.go:141] libmachine: (ha-739930-m02)     
	I1204 20:08:59.654847   27912 main.go:141] libmachine: (ha-739930-m02)   </features>
	I1204 20:08:59.654851   27912 main.go:141] libmachine: (ha-739930-m02)   <cpu mode='host-passthrough'>
	I1204 20:08:59.654858   27912 main.go:141] libmachine: (ha-739930-m02)   
	I1204 20:08:59.654862   27912 main.go:141] libmachine: (ha-739930-m02)   </cpu>
	I1204 20:08:59.654870   27912 main.go:141] libmachine: (ha-739930-m02)   <os>
	I1204 20:08:59.654874   27912 main.go:141] libmachine: (ha-739930-m02)     <type>hvm</type>
	I1204 20:08:59.654882   27912 main.go:141] libmachine: (ha-739930-m02)     <boot dev='cdrom'/>
	I1204 20:08:59.654892   27912 main.go:141] libmachine: (ha-739930-m02)     <boot dev='hd'/>
	I1204 20:08:59.654905   27912 main.go:141] libmachine: (ha-739930-m02)     <bootmenu enable='no'/>
	I1204 20:08:59.654916   27912 main.go:141] libmachine: (ha-739930-m02)   </os>
	I1204 20:08:59.654941   27912 main.go:141] libmachine: (ha-739930-m02)   <devices>
	I1204 20:08:59.654966   27912 main.go:141] libmachine: (ha-739930-m02)     <disk type='file' device='cdrom'>
	I1204 20:08:59.654982   27912 main.go:141] libmachine: (ha-739930-m02)       <source file='/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02/boot2docker.iso'/>
	I1204 20:08:59.654997   27912 main.go:141] libmachine: (ha-739930-m02)       <target dev='hdc' bus='scsi'/>
	I1204 20:08:59.655013   27912 main.go:141] libmachine: (ha-739930-m02)       <readonly/>
	I1204 20:08:59.655023   27912 main.go:141] libmachine: (ha-739930-m02)     </disk>
	I1204 20:08:59.655035   27912 main.go:141] libmachine: (ha-739930-m02)     <disk type='file' device='disk'>
	I1204 20:08:59.655049   27912 main.go:141] libmachine: (ha-739930-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1204 20:08:59.655067   27912 main.go:141] libmachine: (ha-739930-m02)       <source file='/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02/ha-739930-m02.rawdisk'/>
	I1204 20:08:59.655083   27912 main.go:141] libmachine: (ha-739930-m02)       <target dev='hda' bus='virtio'/>
	I1204 20:08:59.655095   27912 main.go:141] libmachine: (ha-739930-m02)     </disk>
	I1204 20:08:59.655104   27912 main.go:141] libmachine: (ha-739930-m02)     <interface type='network'>
	I1204 20:08:59.655117   27912 main.go:141] libmachine: (ha-739930-m02)       <source network='mk-ha-739930'/>
	I1204 20:08:59.655129   27912 main.go:141] libmachine: (ha-739930-m02)       <model type='virtio'/>
	I1204 20:08:59.655141   27912 main.go:141] libmachine: (ha-739930-m02)     </interface>
	I1204 20:08:59.655157   27912 main.go:141] libmachine: (ha-739930-m02)     <interface type='network'>
	I1204 20:08:59.655176   27912 main.go:141] libmachine: (ha-739930-m02)       <source network='default'/>
	I1204 20:08:59.655187   27912 main.go:141] libmachine: (ha-739930-m02)       <model type='virtio'/>
	I1204 20:08:59.655199   27912 main.go:141] libmachine: (ha-739930-m02)     </interface>
	I1204 20:08:59.655208   27912 main.go:141] libmachine: (ha-739930-m02)     <serial type='pty'>
	I1204 20:08:59.655231   27912 main.go:141] libmachine: (ha-739930-m02)       <target port='0'/>
	I1204 20:08:59.655250   27912 main.go:141] libmachine: (ha-739930-m02)     </serial>
	I1204 20:08:59.655268   27912 main.go:141] libmachine: (ha-739930-m02)     <console type='pty'>
	I1204 20:08:59.655284   27912 main.go:141] libmachine: (ha-739930-m02)       <target type='serial' port='0'/>
	I1204 20:08:59.655295   27912 main.go:141] libmachine: (ha-739930-m02)     </console>
	I1204 20:08:59.655302   27912 main.go:141] libmachine: (ha-739930-m02)     <rng model='virtio'>
	I1204 20:08:59.655315   27912 main.go:141] libmachine: (ha-739930-m02)       <backend model='random'>/dev/random</backend>
	I1204 20:08:59.655321   27912 main.go:141] libmachine: (ha-739930-m02)     </rng>
	I1204 20:08:59.655329   27912 main.go:141] libmachine: (ha-739930-m02)     
	I1204 20:08:59.655333   27912 main.go:141] libmachine: (ha-739930-m02)     
	I1204 20:08:59.655340   27912 main.go:141] libmachine: (ha-739930-m02)   </devices>
	I1204 20:08:59.655345   27912 main.go:141] libmachine: (ha-739930-m02) </domain>
	I1204 20:08:59.655362   27912 main.go:141] libmachine: (ha-739930-m02) 
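
The XML dumped above is the libvirt domain definition for the new node: 2 vCPUs, 2200 MiB of RAM, the boot2docker ISO as a CD-ROM boot device, the raw disk image, and two virtio NICs (one on the private mk-ha-739930 network, one on the default network). The kvm2 driver defines and starts the domain through the libvirt API; as a rough, hedged approximation only, the same two steps expressed with the virsh CLI would be:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Assumes the domain XML shown above has been written to this file.
	xmlPath := "/tmp/ha-739930-m02.xml"
	// "define libvirt domain using xml" ...
	if out, err := exec.Command("virsh", "define", xmlPath).CombinedOutput(); err != nil {
		log.Fatalf("define failed: %v\n%s", err, out)
	}
	// ... followed by "Creating domain..." (starting it).
	if out, err := exec.Command("virsh", "start", "ha-739930-m02").CombinedOutput(); err != nil {
		log.Fatalf("start failed: %v\n%s", err, out)
	}
}
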
	I1204 20:08:59.661230   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:69:55:bb in network default
	I1204 20:08:59.661784   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:08:59.661806   27912 main.go:141] libmachine: (ha-739930-m02) Ensuring networks are active...
	I1204 20:08:59.662333   27912 main.go:141] libmachine: (ha-739930-m02) Ensuring network default is active
	I1204 20:08:59.662568   27912 main.go:141] libmachine: (ha-739930-m02) Ensuring network mk-ha-739930 is active
	I1204 20:08:59.662825   27912 main.go:141] libmachine: (ha-739930-m02) Getting domain xml...
	I1204 20:08:59.663438   27912 main.go:141] libmachine: (ha-739930-m02) Creating domain...
	I1204 20:09:00.864454   27912 main.go:141] libmachine: (ha-739930-m02) Waiting to get IP...
	I1204 20:09:00.865262   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:00.865678   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:00.865706   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:00.865644   28291 retry.go:31] will retry after 202.440812ms: waiting for machine to come up
	I1204 20:09:01.070038   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:01.070521   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:01.070539   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:01.070483   28291 retry.go:31] will retry after 379.96661ms: waiting for machine to come up
	I1204 20:09:01.452279   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:01.452670   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:01.452703   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:01.452620   28291 retry.go:31] will retry after 448.23669ms: waiting for machine to come up
	I1204 20:09:01.902848   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:01.903274   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:01.903301   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:01.903230   28291 retry.go:31] will retry after 590.399252ms: waiting for machine to come up
	I1204 20:09:02.495129   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:02.495572   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:02.495602   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:02.495522   28291 retry.go:31] will retry after 535.882434ms: waiting for machine to come up
	I1204 20:09:03.033125   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:03.033552   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:03.033572   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:03.033531   28291 retry.go:31] will retry after 698.598885ms: waiting for machine to come up
	I1204 20:09:03.733894   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:03.734321   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:03.734351   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:03.734276   28291 retry.go:31] will retry after 1.177854854s: waiting for machine to come up
	I1204 20:09:04.914541   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:04.914975   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:04.915005   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:04.914934   28291 retry.go:31] will retry after 1.093246259s: waiting for machine to come up
	I1204 20:09:06.010091   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:06.010517   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:06.010543   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:06.010478   28291 retry.go:31] will retry after 1.613080477s: waiting for machine to come up
	I1204 20:09:07.624874   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:07.625335   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:07.625364   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:07.625313   28291 retry.go:31] will retry after 2.249296346s: waiting for machine to come up
	I1204 20:09:09.875662   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:09.876187   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:09.876218   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:09.876124   28291 retry.go:31] will retry after 2.42642151s: waiting for machine to come up
	I1204 20:09:12.305633   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:12.306060   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:12.306085   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:12.306030   28291 retry.go:31] will retry after 2.221078432s: waiting for machine to come up
	I1204 20:09:14.529048   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:14.529558   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:14.529585   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:14.529522   28291 retry.go:31] will retry after 2.966790247s: waiting for machine to come up
	I1204 20:09:17.499601   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:17.500108   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:17.500137   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:17.500054   28291 retry.go:31] will retry after 4.394406199s: waiting for machine to come up
	I1204 20:09:21.898072   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:21.898515   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has current primary IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:21.898531   27912 main.go:141] libmachine: (ha-739930-m02) Found IP for machine: 192.168.39.216
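
Everything between "Waiting to get IP..." and "Found IP for machine" above is a retry loop: the driver looks for a DHCP lease matching the VM's MAC address and, while none exists, sleeps for a growing, jittered interval (202ms, 380ms, ... up to several seconds) before trying again. A minimal sketch of that loop, assuming `virsh net-dhcp-leases` as the lease source (the real driver queries libvirt directly) and the MAC and network names from this run:

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"regexp"
	"time"
)

// leaseIP returns the IPv4 address leased to the given MAC on the libvirt network, if any.
func leaseIP(network, mac string) (string, bool) {
	out, err := exec.Command("virsh", "net-dhcp-leases", network).Output()
	if err != nil {
		return "", false
	}
	re := regexp.MustCompile(mac + `.*?(\d+\.\d+\.\d+\.\d+)`)
	if m := re.FindStringSubmatch(string(out)); m != nil {
		return m[1], true
	}
	return "", false
}

func main() {
	delay := 200 * time.Millisecond
	for i := 0; i < 20; i++ {
		if ip, ok := leaseIP("mk-ha-739930", "52:54:00:91:b2:c1"); ok {
			fmt.Println("found IP:", ip)
			return
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		if delay < 4*time.Second {
			delay *= 2 // loosely mirrors the log's growing retry intervals
		}
	}
	fmt.Println("gave up waiting for an IP")
}
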
	I1204 20:09:21.898543   27912 main.go:141] libmachine: (ha-739930-m02) Reserving static IP address...
	I1204 20:09:21.899016   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find host DHCP lease matching {name: "ha-739930-m02", mac: "52:54:00:91:b2:c1", ip: "192.168.39.216"} in network mk-ha-739930
	I1204 20:09:21.970499   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Getting to WaitForSSH function...
	I1204 20:09:21.970531   27912 main.go:141] libmachine: (ha-739930-m02) Reserved static IP address: 192.168.39.216
	I1204 20:09:21.970544   27912 main.go:141] libmachine: (ha-739930-m02) Waiting for SSH to be available...
	I1204 20:09:21.972885   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:21.973270   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:minikube Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:21.973299   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:21.973444   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Using SSH client type: external
	I1204 20:09:21.973472   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02/id_rsa (-rw-------)
	I1204 20:09:21.973507   27912 main.go:141] libmachine: (ha-739930-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.216 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 20:09:21.973526   27912 main.go:141] libmachine: (ha-739930-m02) DBG | About to run SSH command:
	I1204 20:09:21.973534   27912 main.go:141] libmachine: (ha-739930-m02) DBG | exit 0
	I1204 20:09:22.099805   27912 main.go:141] libmachine: (ha-739930-m02) DBG | SSH cmd err, output: <nil>: 
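
WaitForSSH above is essentially "run `ssh ... exit 0` until it exits zero": host key checking is disabled, the per-machine private key is used, and the empty error in the log means the node is now reachable over SSH. A hedged sketch of a single probe using a subset of the flags from the log (key path and address are from this run):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	key := "/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02/id_rsa"
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", key,
		"docker@192.168.39.216",
		"exit 0")
	if err := cmd.Run(); err != nil {
		fmt.Println("SSH not ready yet:", err)
		return
	}
	fmt.Println("SSH is available")
}
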
	I1204 20:09:22.100058   27912 main.go:141] libmachine: (ha-739930-m02) KVM machine creation complete!
	I1204 20:09:22.100415   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetConfigRaw
	I1204 20:09:22.101293   27912 main.go:141] libmachine: (ha-739930-m02) Calling .DriverName
	I1204 20:09:22.101487   27912 main.go:141] libmachine: (ha-739930-m02) Calling .DriverName
	I1204 20:09:22.101644   27912 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1204 20:09:22.101669   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetState
	I1204 20:09:22.102974   27912 main.go:141] libmachine: Detecting operating system of created instance...
	I1204 20:09:22.102992   27912 main.go:141] libmachine: Waiting for SSH to be available...
	I1204 20:09:22.103000   27912 main.go:141] libmachine: Getting to WaitForSSH function...
	I1204 20:09:22.103008   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHHostname
	I1204 20:09:22.105264   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.105562   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:22.105595   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.105759   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHPort
	I1204 20:09:22.105924   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:22.106031   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:22.106146   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHUsername
	I1204 20:09:22.106307   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:09:22.106556   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1204 20:09:22.106582   27912 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1204 20:09:22.210652   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 20:09:22.210674   27912 main.go:141] libmachine: Detecting the provisioner...
	I1204 20:09:22.210689   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHHostname
	I1204 20:09:22.213316   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.213633   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:22.213662   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.213775   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHPort
	I1204 20:09:22.213923   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:22.214102   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:22.214252   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHUsername
	I1204 20:09:22.214405   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:09:22.214561   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1204 20:09:22.214571   27912 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1204 20:09:22.320078   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1204 20:09:22.320145   27912 main.go:141] libmachine: found compatible host: buildroot
	I1204 20:09:22.320155   27912 main.go:141] libmachine: Provisioning with buildroot...
	I1204 20:09:22.320176   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetMachineName
	I1204 20:09:22.320420   27912 buildroot.go:166] provisioning hostname "ha-739930-m02"
	I1204 20:09:22.320451   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetMachineName
	I1204 20:09:22.320599   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHHostname
	I1204 20:09:22.322962   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.323306   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:22.323331   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.323525   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHPort
	I1204 20:09:22.323704   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:22.323837   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:22.323937   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHUsername
	I1204 20:09:22.324095   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:09:22.324248   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1204 20:09:22.324260   27912 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-739930-m02 && echo "ha-739930-m02" | sudo tee /etc/hostname
	I1204 20:09:22.442684   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-739930-m02
	
	I1204 20:09:22.442712   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHHostname
	I1204 20:09:22.445503   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.445841   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:22.445866   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.446028   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHPort
	I1204 20:09:22.446227   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:22.446390   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:22.446547   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHUsername
	I1204 20:09:22.446707   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:09:22.446886   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1204 20:09:22.446908   27912 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-739930-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-739930-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-739930-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 20:09:22.560132   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 20:09:22.560177   27912 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 20:09:22.560210   27912 buildroot.go:174] setting up certificates
	I1204 20:09:22.560227   27912 provision.go:84] configureAuth start
	I1204 20:09:22.560246   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetMachineName
	I1204 20:09:22.560519   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetIP
	I1204 20:09:22.563054   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.563443   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:22.563470   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.563600   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHHostname
	I1204 20:09:22.565613   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.565936   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:22.565961   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.566074   27912 provision.go:143] copyHostCerts
	I1204 20:09:22.566103   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 20:09:22.566138   27912 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 20:09:22.566151   27912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 20:09:22.566226   27912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 20:09:22.566301   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 20:09:22.566318   27912 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 20:09:22.566325   27912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 20:09:22.566349   27912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 20:09:22.566391   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 20:09:22.566409   27912 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 20:09:22.566415   27912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 20:09:22.566442   27912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 20:09:22.566488   27912 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.ha-739930-m02 san=[127.0.0.1 192.168.39.216 ha-739930-m02 localhost minikube]
	I1204 20:09:22.637792   27912 provision.go:177] copyRemoteCerts
	I1204 20:09:22.637844   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 20:09:22.637865   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHHostname
	I1204 20:09:22.640451   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.640844   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:22.640870   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.641017   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHPort
	I1204 20:09:22.641198   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:22.641358   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHUsername
	I1204 20:09:22.641490   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02/id_rsa Username:docker}
	I1204 20:09:22.721358   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1204 20:09:22.721454   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 20:09:22.745038   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1204 20:09:22.745117   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1204 20:09:22.767198   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1204 20:09:22.767272   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
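The server certificate copied to /etc/docker/server.pem above was generated a moment earlier by provision.go, signed with the shared minikube CA (ca.pem/ca-key.pem) and carrying SANs for 127.0.0.1, 192.168.39.216, ha-739930-m02, localhost and minikube. The SANs that actually landed in the certificate can be confirmed with openssl:

# Print the Subject Alternative Name extension of the generated server cert.
openssl x509 -noout -text \
  -in /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem \
  | grep -A1 'Subject Alternative Name'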
	I1204 20:09:22.788710   27912 provision.go:87] duration metric: took 228.465669ms to configureAuth
	I1204 20:09:22.788740   27912 buildroot.go:189] setting minikube options for container-runtime
	I1204 20:09:22.788919   27912 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:09:22.788987   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHHostname
	I1204 20:09:22.791733   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.792076   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:22.792099   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.792317   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHPort
	I1204 20:09:22.792506   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:22.792661   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:22.792775   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHUsername
	I1204 20:09:22.792909   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:09:22.793086   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1204 20:09:22.793106   27912 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 20:09:23.010014   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 20:09:23.010040   27912 main.go:141] libmachine: Checking connection to Docker...
	I1204 20:09:23.010051   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetURL
	I1204 20:09:23.011214   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Using libvirt version 6000000
	I1204 20:09:23.013200   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.013524   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:23.013554   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.013737   27912 main.go:141] libmachine: Docker is up and running!
	I1204 20:09:23.013756   27912 main.go:141] libmachine: Reticulating splines...
	I1204 20:09:23.013764   27912 client.go:171] duration metric: took 23.918317311s to LocalClient.Create
	I1204 20:09:23.013791   27912 start.go:167] duration metric: took 23.918385611s to libmachine.API.Create "ha-739930"
	I1204 20:09:23.013802   27912 start.go:293] postStartSetup for "ha-739930-m02" (driver="kvm2")
	I1204 20:09:23.013810   27912 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 20:09:23.013826   27912 main.go:141] libmachine: (ha-739930-m02) Calling .DriverName
	I1204 20:09:23.014037   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 20:09:23.014061   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHHostname
	I1204 20:09:23.016336   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.016674   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:23.016696   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.016826   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHPort
	I1204 20:09:23.017001   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:23.017147   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHUsername
	I1204 20:09:23.017302   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02/id_rsa Username:docker}
	I1204 20:09:23.098690   27912 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 20:09:23.102672   27912 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 20:09:23.102692   27912 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 20:09:23.102751   27912 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 20:09:23.102837   27912 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 20:09:23.102850   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> /etc/ssl/certs/177432.pem
	I1204 20:09:23.102957   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 20:09:23.113316   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 20:09:23.137226   27912 start.go:296] duration metric: took 123.412538ms for postStartSetup
	I1204 20:09:23.137272   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetConfigRaw
	I1204 20:09:23.137827   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetIP
	I1204 20:09:23.140225   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.140510   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:23.140539   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.140708   27912 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/config.json ...
	I1204 20:09:23.140912   27912 start.go:128] duration metric: took 24.063021139s to createHost
	I1204 20:09:23.140935   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHHostname
	I1204 20:09:23.143463   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.143769   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:23.143788   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.143935   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHPort
	I1204 20:09:23.144107   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:23.144264   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:23.144405   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHUsername
	I1204 20:09:23.144585   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:09:23.144731   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1204 20:09:23.144740   27912 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 20:09:23.251984   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733342963.229753214
	
	I1204 20:09:23.252009   27912 fix.go:216] guest clock: 1733342963.229753214
	I1204 20:09:23.252019   27912 fix.go:229] Guest: 2024-12-04 20:09:23.229753214 +0000 UTC Remote: 2024-12-04 20:09:23.140925676 +0000 UTC m=+71.238297049 (delta=88.827538ms)
	I1204 20:09:23.252039   27912 fix.go:200] guest clock delta is within tolerance: 88.827538ms
	I1204 20:09:23.252046   27912 start.go:83] releasing machines lock for "ha-739930-m02", held for 24.174259167s
	I1204 20:09:23.252070   27912 main.go:141] libmachine: (ha-739930-m02) Calling .DriverName
	I1204 20:09:23.252303   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetIP
	I1204 20:09:23.254849   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.255234   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:23.255263   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.257539   27912 out.go:177] * Found network options:
	I1204 20:09:23.258745   27912 out.go:177]   - NO_PROXY=192.168.39.183
	W1204 20:09:23.259924   27912 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 20:09:23.259962   27912 main.go:141] libmachine: (ha-739930-m02) Calling .DriverName
	I1204 20:09:23.260454   27912 main.go:141] libmachine: (ha-739930-m02) Calling .DriverName
	I1204 20:09:23.260610   27912 main.go:141] libmachine: (ha-739930-m02) Calling .DriverName
	I1204 20:09:23.260694   27912 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 20:09:23.260738   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHHostname
	W1204 20:09:23.260771   27912 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 20:09:23.260841   27912 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 20:09:23.260863   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHHostname
	I1204 20:09:23.263151   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.263477   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:23.263505   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.263524   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.263671   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHPort
	I1204 20:09:23.263841   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:23.263988   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHUsername
	I1204 20:09:23.263998   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:23.264025   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.264114   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02/id_rsa Username:docker}
	I1204 20:09:23.264181   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHPort
	I1204 20:09:23.264329   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:23.264459   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHUsername
	I1204 20:09:23.264614   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02/id_rsa Username:docker}
	I1204 20:09:23.488607   27912 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 20:09:23.493980   27912 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 20:09:23.494034   27912 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 20:09:23.509548   27912 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 20:09:23.509575   27912 start.go:495] detecting cgroup driver to use...
	I1204 20:09:23.509645   27912 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 20:09:23.525800   27912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 20:09:23.539440   27912 docker.go:217] disabling cri-docker service (if available) ...
	I1204 20:09:23.539502   27912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 20:09:23.552521   27912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 20:09:23.565606   27912 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 20:09:23.684851   27912 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 20:09:23.845149   27912 docker.go:233] disabling docker service ...
	I1204 20:09:23.845231   27912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 20:09:23.859120   27912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 20:09:23.871561   27912 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 20:09:23.987397   27912 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 20:09:24.126711   27912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 20:09:24.141506   27912 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 20:09:24.159151   27912 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 20:09:24.159228   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:09:24.170226   27912 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 20:09:24.170291   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:09:24.182530   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:09:24.192731   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:09:24.202617   27912 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 20:09:24.213736   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:09:24.224231   27912 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:09:24.240767   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
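The sed edits above patch /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image, switch the cgroup manager to cgroupfs, put conmon in the pod cgroup, and open unprivileged low ports through default_sysctls. Assuming CRI-O's usual section layout, the drop-in ends up expressing roughly the following (an illustrative reconstruction printed to stdout, not a dump of the real file, which minikube never rewrites wholesale):

# Roughly what the drop-in says after the edits above; section names are assumed
# from CRI-O's default configuration layout.
cat <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
EOF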
	I1204 20:09:24.251003   27912 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 20:09:24.260142   27912 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 20:09:24.260204   27912 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 20:09:24.272434   27912 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
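The sysctl probe above fails on the freshly booted guest because the br_netfilter module is not loaded yet, hence the "cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables" error, so minikube loads the module and enables IPv4 forwarding before restarting CRI-O. The same preparation, runnable by hand:

sudo modprobe br_netfilter                         # makes the bridge-nf-call-* sysctls appear
sudo sysctl net.bridge.bridge-nf-call-iptables     # now resolves instead of exiting with status 255
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward    # needed for routing pod traffic off the node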
	I1204 20:09:24.282354   27912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 20:09:24.398398   27912 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 20:09:24.487789   27912 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 20:09:24.487861   27912 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 20:09:24.492488   27912 start.go:563] Will wait 60s for crictl version
	I1204 20:09:24.492560   27912 ssh_runner.go:195] Run: which crictl
	I1204 20:09:24.496257   27912 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 20:09:24.535274   27912 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 20:09:24.535361   27912 ssh_runner.go:195] Run: crio --version
	I1204 20:09:24.562604   27912 ssh_runner.go:195] Run: crio --version
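With /etc/crictl.yaml pointing at unix:///var/run/crio/crio.sock (written a few lines earlier), crictl talks straight to CRI-O; besides the version probe above, the same client can confirm the runtime is answering, for example:

sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info   # runtime status and conditions
sudo crictl ps -a                                                    # containers, once the kubelet starts creating them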
	I1204 20:09:24.590689   27912 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 20:09:24.591986   27912 out.go:177]   - env NO_PROXY=192.168.39.183
	I1204 20:09:24.593151   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetIP
	I1204 20:09:24.595599   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:24.595887   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:24.595916   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:24.596077   27912 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1204 20:09:24.600001   27912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 20:09:24.611463   27912 mustload.go:65] Loading cluster: ha-739930
	I1204 20:09:24.611643   27912 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:09:24.611877   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:09:24.611903   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:09:24.627049   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34019
	I1204 20:09:24.627459   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:09:24.627903   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:09:24.627928   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:09:24.628257   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:09:24.628473   27912 main.go:141] libmachine: (ha-739930) Calling .GetState
	I1204 20:09:24.629895   27912 host.go:66] Checking if "ha-739930" exists ...
	I1204 20:09:24.630233   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:09:24.630265   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:09:24.644758   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46383
	I1204 20:09:24.645209   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:09:24.645667   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:09:24.645685   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:09:24.645969   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:09:24.646125   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:09:24.646291   27912 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930 for IP: 192.168.39.216
	I1204 20:09:24.646303   27912 certs.go:194] generating shared ca certs ...
	I1204 20:09:24.646316   27912 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:09:24.646428   27912 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 20:09:24.646465   27912 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 20:09:24.646474   27912 certs.go:256] generating profile certs ...
	I1204 20:09:24.646544   27912 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.key
	I1204 20:09:24.646568   27912 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.5b3a3f8e
	I1204 20:09:24.646583   27912 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.5b3a3f8e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.183 192.168.39.216 192.168.39.254]
	I1204 20:09:24.766401   27912 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.5b3a3f8e ...
	I1204 20:09:24.766431   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.5b3a3f8e: {Name:mkc714ddc3cd4c136e7a763dd7561d567af3f099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:09:24.766597   27912 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.5b3a3f8e ...
	I1204 20:09:24.766610   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.5b3a3f8e: {Name:mk0a2c7e9c0190313579e96374b5ec6b927ba043 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:09:24.766678   27912 certs.go:381] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.5b3a3f8e -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt
	I1204 20:09:24.766802   27912 certs.go:385] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.5b3a3f8e -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key
	I1204 20:09:24.766921   27912 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key
	I1204 20:09:24.766936   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1204 20:09:24.766949   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1204 20:09:24.766968   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1204 20:09:24.766979   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1204 20:09:24.766989   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1204 20:09:24.767002   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1204 20:09:24.767010   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1204 20:09:24.767022   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1204 20:09:24.767067   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 20:09:24.767093   27912 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 20:09:24.767102   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 20:09:24.767122   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 20:09:24.767144   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 20:09:24.767164   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 20:09:24.767200   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 20:09:24.767225   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:09:24.767238   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem -> /usr/share/ca-certificates/17743.pem
	I1204 20:09:24.767250   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> /usr/share/ca-certificates/177432.pem
	I1204 20:09:24.767278   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:09:24.770180   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:09:24.770542   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:09:24.770570   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:09:24.770712   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:09:24.770891   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:09:24.771044   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:09:24.771172   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:09:24.847687   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1204 20:09:24.853685   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1204 20:09:24.865057   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1204 20:09:24.869198   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1204 20:09:24.885878   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1204 20:09:24.889805   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1204 20:09:24.902654   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1204 20:09:24.906786   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1204 20:09:24.918187   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1204 20:09:24.922192   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1204 20:09:24.934730   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1204 20:09:24.938712   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1204 20:09:24.950279   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 20:09:24.974079   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 20:09:24.996598   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 20:09:25.018605   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 20:09:25.040436   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1204 20:09:25.062496   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1204 20:09:25.083915   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 20:09:25.105243   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1204 20:09:25.126515   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 20:09:25.148104   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 20:09:25.169580   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 20:09:25.190929   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1204 20:09:25.206338   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1204 20:09:25.221317   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1204 20:09:25.236210   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1204 20:09:25.251125   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1204 20:09:25.266383   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1204 20:09:25.281338   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1204 20:09:25.296542   27912 ssh_runner.go:195] Run: openssl version
	I1204 20:09:25.302513   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 20:09:25.313596   27912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:09:25.317903   27912 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:09:25.317952   27912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:09:25.323324   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 20:09:25.334576   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 20:09:25.344350   27912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 20:09:25.348476   27912 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 20:09:25.348531   27912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 20:09:25.353851   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 20:09:25.364310   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 20:09:25.375701   27912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 20:09:25.379775   27912 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 20:09:25.379825   27912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 20:09:25.385241   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
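The openssl/ln pairs above follow the standard OpenSSL CA-directory convention: each certificate under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash (b5213941.0, 51391683.0 and 3ec20f2e.0 here), which is how TLS clients on the guest locate the minikube CA and the test certificates. For a single certificate the hashing step looks like:

cert=/usr/share/ca-certificates/minikubeCA.pem
hash=$(openssl x509 -hash -noout -in "$cert")   # e.g. b5213941
sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"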
	I1204 20:09:25.395365   27912 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 20:09:25.399560   27912 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1204 20:09:25.399615   27912 kubeadm.go:934] updating node {m02 192.168.39.216 8443 v1.31.2 crio true true} ...
	I1204 20:09:25.399711   27912 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-739930-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.216
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 20:09:25.399742   27912 kube-vip.go:115] generating kube-vip config ...
	I1204 20:09:25.399777   27912 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1204 20:09:25.415868   27912 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1204 20:09:25.415924   27912 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
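The generated manifest runs kube-vip as a static pod on each control-plane node: the instances elect a leader through the plndr-cp-lock lease in kube-system, and the leader answers ARP for the control-plane VIP 192.168.39.254 on eth0 and load-balances port 8443 to the API servers. Once the pod is running, two quick checks (values taken from the config above) are:

# The node currently holding the lease carries the VIP on eth0.
ip addr show dev eth0 | grep 192.168.39.254
# The leader-election lease itself lives in kube-system.
kubectl -n kube-system get lease plndr-cp-lock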
	I1204 20:09:25.415967   27912 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 20:09:25.424465   27912 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1204 20:09:25.424517   27912 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1204 20:09:25.433122   27912 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1204 20:09:25.433145   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1204 20:09:25.433195   27912 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1204 20:09:25.433218   27912 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1204 20:09:25.433242   27912 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubeadm
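Each binary is fetched from dl.k8s.io with a checksum reference ("checksum=file:...sha256"), i.e. the download is verified against the published .sha256 file before it is cached under .minikube/cache and scp'd to the node. The same verification by hand, for one of the binaries:

VER=v1.31.2
curl -LO "https://dl.k8s.io/release/${VER}/bin/linux/amd64/kubelet"
curl -LO "https://dl.k8s.io/release/${VER}/bin/linux/amd64/kubelet.sha256"
echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check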
	I1204 20:09:25.437081   27912 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1204 20:09:25.437107   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1204 20:09:26.186226   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1204 20:09:26.186313   27912 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1204 20:09:26.190746   27912 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1204 20:09:26.190822   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1204 20:09:26.419618   27912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 20:09:26.443488   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1204 20:09:26.443611   27912 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1204 20:09:26.450947   27912 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1204 20:09:26.450982   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
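kubectl, kubeadm and kubelet are fetched from dl.k8s.io with a checksum= query that points at the published .sha256 file and are then copied into /var/lib/minikube/binaries/v1.31.2 on the node. A rough manual equivalent of that download-and-verify step (a sketch, not minikube's code):
	curl -LO https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm
	curl -LO https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	echo "$(cat kubeadm.sha256)  kubeadm" | sha256sum --check   # must print: kubeadm: OK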
	I1204 20:09:26.739349   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1204 20:09:26.748265   27912 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1204 20:09:26.764007   27912 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 20:09:26.780904   27912 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1204 20:09:26.797527   27912 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1204 20:09:26.801091   27912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 20:09:26.811509   27912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 20:09:26.923723   27912 ssh_runner.go:195] Run: sudo systemctl start kubelet
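At this point the kubelet drop-in and unit file have been written, the control-plane.minikube.internal entry in /etc/hosts points at the VIP, and the kubelet has been started. Two trivial checks from inside the VM (illustrative only):
	getent hosts control-plane.minikube.internal   # expect 192.168.39.254
	systemctl is-active kubelet                    # expect active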
	I1204 20:09:26.939490   27912 host.go:66] Checking if "ha-739930" exists ...
	I1204 20:09:26.939813   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:09:26.939861   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:09:26.954842   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37991
	I1204 20:09:26.955355   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:09:26.955871   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:09:26.955897   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:09:26.956236   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:09:26.956453   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:09:26.956610   27912 start.go:317] joinCluster: &{Name:ha-739930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 20:09:26.956705   27912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1204 20:09:26.956726   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:09:26.959547   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:09:26.959914   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:09:26.959939   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:09:26.960071   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:09:26.960221   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:09:26.960358   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:09:26.960492   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:09:27.110244   27912 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 20:09:27.110295   27912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pq1xgw.4e78amhhenl1jnyw --discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-739930-m02 --control-plane --apiserver-advertise-address=192.168.39.216 --apiserver-bind-port=8443"
	I1204 20:09:48.018604   27912 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pq1xgw.4e78amhhenl1jnyw --discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-739930-m02 --control-plane --apiserver-advertise-address=192.168.39.216 --apiserver-bind-port=8443": (20.908287309s)
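The join command executed above was generated on the primary with kubeadm token create --print-join-command --ttl=0 (20:09:26.956705); its --discovery-token-ca-cert-hash is the SHA-256 of the cluster CA public key. The standard recipe for recomputing that hash on a control-plane node, shown only for reference and assuming the default CA path:
	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'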
	I1204 20:09:48.018634   27912 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1204 20:09:48.626365   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-739930-m02 minikube.k8s.io/updated_at=2024_12_04T20_09_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59 minikube.k8s.io/name=ha-739930 minikube.k8s.io/primary=false
	I1204 20:09:48.747614   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-739930-m02 node-role.kubernetes.io/control-plane:NoSchedule-
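A quick way to confirm the two commands above took effect, i.e. the minikube.k8s.io labels are present on m02 and its control-plane NoSchedule taint has been removed (illustrative only):
	kubectl get node ha-739930-m02 --show-labels
	kubectl describe node ha-739930-m02 | grep -i taints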
	I1204 20:09:48.847766   27912 start.go:319] duration metric: took 21.891152638s to joinCluster
	I1204 20:09:48.847828   27912 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 20:09:48.848176   27912 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:09:48.849095   27912 out.go:177] * Verifying Kubernetes components...
	I1204 20:09:48.850328   27912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 20:09:49.112006   27912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 20:09:49.157177   27912 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 20:09:49.157538   27912 kapi.go:59] client config for ha-739930: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.crt", KeyFile:"/home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.key", CAFile:"/home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1204 20:09:49.157630   27912 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.183:8443
	I1204 20:09:49.157883   27912 node_ready.go:35] waiting up to 6m0s for node "ha-739930-m02" to be "Ready" ...
	I1204 20:09:49.158009   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:49.158021   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:49.158035   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:49.158045   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:49.168058   27912 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1204 20:09:49.658898   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:49.658922   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:49.658932   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:49.658943   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:49.667464   27912 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1204 20:09:50.158380   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:50.158399   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:50.158413   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:50.158419   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:50.171364   27912 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1204 20:09:50.658199   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:50.658226   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:50.658233   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:50.658237   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:50.663401   27912 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 20:09:51.159112   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:51.159137   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:51.159148   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:51.159156   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:51.162480   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:51.163075   27912 node_ready.go:53] node "ha-739930-m02" has status "Ready":"False"
	I1204 20:09:51.658265   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:51.658294   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:51.658304   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:51.658310   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:51.661298   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:09:52.158591   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:52.158614   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:52.158623   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:52.158627   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:52.161933   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:52.658479   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:52.658500   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:52.658508   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:52.658513   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:52.661537   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:53.158361   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:53.158384   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:53.158394   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:53.158402   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:53.161578   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:53.658404   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:53.658425   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:53.658433   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:53.658437   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:53.661364   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:09:53.662003   27912 node_ready.go:53] node "ha-739930-m02" has status "Ready":"False"
	I1204 20:09:54.158610   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:54.158635   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:54.158645   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:54.158651   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:54.162217   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:54.658074   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:54.658094   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:54.658102   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:54.658106   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:54.661918   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:55.158589   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:55.158611   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:55.158619   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:55.158624   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:55.161786   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:55.658906   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:55.658929   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:55.658937   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:55.658941   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:55.662357   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:55.663184   27912 node_ready.go:53] node "ha-739930-m02" has status "Ready":"False"
	I1204 20:09:56.158490   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:56.158517   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:56.158528   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:56.158533   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:56.258326   27912 round_trippers.go:574] Response Status: 200 OK in 99 milliseconds
	I1204 20:09:56.658232   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:56.658254   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:56.658264   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:56.658270   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:56.661245   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:09:57.158358   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:57.158380   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:57.158388   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:57.158392   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:57.162043   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:57.658188   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:57.658212   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:57.658223   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:57.658232   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:57.661717   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:58.158679   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:58.158701   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:58.158708   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:58.158713   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:58.162634   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:58.163161   27912 node_ready.go:53] node "ha-739930-m02" has status "Ready":"False"
	I1204 20:09:58.658856   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:58.658882   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:58.658900   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:58.658907   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:58.662596   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:59.158835   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:59.158862   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:59.158873   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:59.158880   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:59.162669   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:59.658183   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:59.658215   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:59.658226   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:59.658231   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:59.661879   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:00.158851   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:00.158875   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:00.158883   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:00.158888   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:00.162790   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:00.163321   27912 node_ready.go:53] node "ha-739930-m02" has status "Ready":"False"
	I1204 20:10:00.658562   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:00.658590   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:00.658601   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:00.658607   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:00.676721   27912 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I1204 20:10:01.159007   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:01.159027   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:01.159035   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:01.159038   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:01.162909   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:01.658124   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:01.658161   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:01.658184   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:01.658188   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:01.662301   27912 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 20:10:02.158692   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:02.158716   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:02.158727   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:02.158732   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:02.162067   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:02.659042   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:02.659064   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:02.659071   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:02.659075   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:02.661911   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:10:02.662581   27912 node_ready.go:53] node "ha-739930-m02" has status "Ready":"False"
	I1204 20:10:03.159115   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:03.159145   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:03.159158   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:03.159165   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:03.162607   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:03.658246   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:03.658270   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:03.658278   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:03.658282   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:03.661511   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:04.158942   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:04.158970   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:04.158979   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:04.158983   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:04.161958   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:10:04.658955   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:04.658979   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:04.658987   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:04.658991   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:04.662295   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:04.662958   27912 node_ready.go:53] node "ha-739930-m02" has status "Ready":"False"
	I1204 20:10:05.158173   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:05.158194   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:05.158203   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:05.158207   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:05.161194   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:10:05.658134   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:05.658157   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:05.658165   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:05.658168   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:05.661616   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:06.158855   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:06.158879   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:06.158887   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:06.158891   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:06.164708   27912 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 20:10:06.658461   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:06.658483   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:06.658491   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:06.658496   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:06.661810   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:07.158647   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:07.158674   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:07.158686   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:07.158690   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:07.161793   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:07.162345   27912 node_ready.go:53] node "ha-739930-m02" has status "Ready":"False"
	I1204 20:10:07.658727   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:07.658752   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:07.658760   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:07.658764   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:07.661982   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:08.158999   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:08.159025   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.159037   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.159043   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.162388   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:08.162849   27912 node_ready.go:49] node "ha-739930-m02" has status "Ready":"True"
	I1204 20:10:08.162868   27912 node_ready.go:38] duration metric: took 19.004941155s for node "ha-739930-m02" to be "Ready" ...
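The loop above polls GET /api/v1/nodes/ha-739930-m02 roughly every 500ms until the Ready condition turns True, which took about 19 seconds here. The same wait expressed with kubectl (illustrative; assumes the kubeconfig context created for this profile):
	kubectl --context ha-739930 wait --for=condition=Ready node/ha-739930-m02 --timeout=6m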
	I1204 20:10:08.162878   27912 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 20:10:08.162968   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1204 20:10:08.162977   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.162984   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.162987   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.167331   27912 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 20:10:08.173856   27912 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7kbgr" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.173935   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-7kbgr
	I1204 20:10:08.173944   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.173953   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.173958   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.176715   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:10:08.177374   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:10:08.177387   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.177395   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.177400   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.179818   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:10:08.180446   27912 pod_ready.go:93] pod "coredns-7c65d6cfc9-7kbgr" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:08.180466   27912 pod_ready.go:82] duration metric: took 6.589083ms for pod "coredns-7c65d6cfc9-7kbgr" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.180478   27912 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8kztf" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.180546   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-8kztf
	I1204 20:10:08.180556   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.180569   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.180577   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.183177   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:10:08.183821   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:10:08.183836   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.183842   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.183847   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.186093   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:10:08.186600   27912 pod_ready.go:93] pod "coredns-7c65d6cfc9-8kztf" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:08.186617   27912 pod_ready.go:82] duration metric: took 6.131706ms for pod "coredns-7c65d6cfc9-8kztf" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.186628   27912 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.186691   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-739930
	I1204 20:10:08.186703   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.186713   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.186721   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.188940   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:10:08.189382   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:10:08.189398   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.189414   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.189420   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.191367   27912 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 20:10:08.191803   27912 pod_ready.go:93] pod "etcd-ha-739930" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:08.191818   27912 pod_ready.go:82] duration metric: took 5.18298ms for pod "etcd-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.191825   27912 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.191870   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-739930-m02
	I1204 20:10:08.191877   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.191884   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.191887   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.193844   27912 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 20:10:08.194287   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:08.194299   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.194306   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.194310   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.196400   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:10:08.196781   27912 pod_ready.go:93] pod "etcd-ha-739930-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:08.196797   27912 pod_ready.go:82] duration metric: took 4.966669ms for pod "etcd-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.196810   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.359125   27912 request.go:632] Waited for 162.263796ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-739930
	I1204 20:10:08.359211   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-739930
	I1204 20:10:08.359219   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.359230   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.359237   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.362569   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:08.559438   27912 request.go:632] Waited for 196.306856ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:10:08.559514   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:10:08.559519   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.559526   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.559534   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.562128   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:10:08.562664   27912 pod_ready.go:93] pod "kube-apiserver-ha-739930" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:08.562679   27912 pod_ready.go:82] duration metric: took 365.86397ms for pod "kube-apiserver-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.562689   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.759755   27912 request.go:632] Waited for 197.00165ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-739930-m02
	I1204 20:10:08.759821   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-739930-m02
	I1204 20:10:08.759826   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.759834   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.759837   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.763106   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:08.959132   27912 request.go:632] Waited for 195.283542ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:08.959199   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:08.959204   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.959212   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.959216   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.962369   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:08.962948   27912 pod_ready.go:93] pod "kube-apiserver-ha-739930-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:08.962965   27912 pod_ready.go:82] duration metric: took 400.270135ms for pod "kube-apiserver-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.962974   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:09.159437   27912 request.go:632] Waited for 196.391636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-739930
	I1204 20:10:09.159487   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-739930
	I1204 20:10:09.159492   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:09.159502   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:09.159507   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:09.162708   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:09.359960   27912 request.go:632] Waited for 196.36752ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:10:09.360010   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:10:09.360014   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:09.360022   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:09.360026   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:09.362729   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:10:09.363473   27912 pod_ready.go:93] pod "kube-controller-manager-ha-739930" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:09.363492   27912 pod_ready.go:82] duration metric: took 400.512945ms for pod "kube-controller-manager-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:09.363502   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:09.559607   27912 request.go:632] Waited for 196.045629ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-739930-m02
	I1204 20:10:09.559663   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-739930-m02
	I1204 20:10:09.559668   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:09.559676   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:09.559683   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:09.563302   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:09.759860   27912 request.go:632] Waited for 195.862174ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:09.759930   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:09.759935   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:09.759943   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:09.759949   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:09.762988   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:09.763689   27912 pod_ready.go:93] pod "kube-controller-manager-ha-739930-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:09.763715   27912 pod_ready.go:82] duration metric: took 400.20496ms for pod "kube-controller-manager-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:09.763729   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gtw7d" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:09.959738   27912 request.go:632] Waited for 195.93307ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gtw7d
	I1204 20:10:09.959807   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gtw7d
	I1204 20:10:09.959812   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:09.959819   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:09.959824   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:09.963156   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:10.159198   27912 request.go:632] Waited for 195.305905ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:10.159270   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:10.159275   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:10.159283   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:10.159286   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:10.162529   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:10.163056   27912 pod_ready.go:93] pod "kube-proxy-gtw7d" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:10.163074   27912 pod_ready.go:82] duration metric: took 399.337655ms for pod "kube-proxy-gtw7d" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:10.163084   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tlhfv" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:10.359093   27912 request.go:632] Waited for 195.949947ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tlhfv
	I1204 20:10:10.359150   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tlhfv
	I1204 20:10:10.359172   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:10.359182   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:10.359192   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:10.362392   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:10.559558   27912 request.go:632] Waited for 196.399776ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:10:10.559639   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:10:10.559653   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:10.559664   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:10.559670   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:10.564370   27912 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 20:10:10.564877   27912 pod_ready.go:93] pod "kube-proxy-tlhfv" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:10.564896   27912 pod_ready.go:82] duration metric: took 401.805669ms for pod "kube-proxy-tlhfv" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:10.564906   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:10.759943   27912 request.go:632] Waited for 194.973279ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-739930
	I1204 20:10:10.760006   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-739930
	I1204 20:10:10.760013   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:10.760021   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:10.760027   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:10.763726   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:10.959656   27912 request.go:632] Waited for 195.375986ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:10:10.959714   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:10:10.959719   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:10.959726   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:10.959731   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:10.963524   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:10.964360   27912 pod_ready.go:93] pod "kube-scheduler-ha-739930" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:10.964375   27912 pod_ready.go:82] duration metric: took 399.464088ms for pod "kube-scheduler-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:10.964389   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:11.159456   27912 request.go:632] Waited for 194.987845ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-739930-m02
	I1204 20:10:11.159527   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-739930-m02
	I1204 20:10:11.159532   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:11.159539   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:11.159543   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:11.163395   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:11.359362   27912 request.go:632] Waited for 195.347282ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:11.359439   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:11.359446   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:11.359458   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:11.359467   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:11.362635   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:11.363122   27912 pod_ready.go:93] pod "kube-scheduler-ha-739930-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:11.363138   27912 pod_ready.go:82] duration metric: took 398.74121ms for pod "kube-scheduler-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:11.363148   27912 pod_ready.go:39] duration metric: took 3.200239096s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
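Each system-critical pod is checked the same way: GET the pod, then GET the node it runs on, and require both to report Ready. An equivalent one-shot check for, say, the CoreDNS pods (illustrative):
	kubectl --context ha-739930 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m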
	I1204 20:10:11.363164   27912 api_server.go:52] waiting for apiserver process to appear ...
	I1204 20:10:11.363207   27912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 20:10:11.377015   27912 api_server.go:72] duration metric: took 22.529160197s to wait for apiserver process to appear ...
	I1204 20:10:11.377034   27912 api_server.go:88] waiting for apiserver healthz status ...
	I1204 20:10:11.377052   27912 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I1204 20:10:11.380929   27912 api_server.go:279] https://192.168.39.183:8443/healthz returned 200:
	ok
	I1204 20:10:11.380976   27912 round_trippers.go:463] GET https://192.168.39.183:8443/version
	I1204 20:10:11.380983   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:11.380999   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:11.381003   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:11.381838   27912 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1204 20:10:11.381917   27912 api_server.go:141] control plane version: v1.31.2
	I1204 20:10:11.381931   27912 api_server.go:131] duration metric: took 4.890825ms to wait for apiserver health ...
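The two probes above hit the node's own apiserver endpoint directly (the stale VIP host was overridden at 20:09:49.157630). Manual spot-checks of the same endpoints, which the default RBAC public-info role leaves readable without credentials (illustrative):
	curl -sk https://192.168.39.183:8443/healthz                      # expect: ok
	curl -sk https://192.168.39.183:8443/version | grep gitVersion    # expect: v1.31.2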
	I1204 20:10:11.381937   27912 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 20:10:11.559327   27912 request.go:632] Waited for 177.330525ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1204 20:10:11.559453   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1204 20:10:11.559495   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:11.559519   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:11.559528   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:11.566679   27912 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1204 20:10:11.572558   27912 system_pods.go:59] 17 kube-system pods found
	I1204 20:10:11.572586   27912 system_pods.go:61] "coredns-7c65d6cfc9-7kbgr" [662019c2-29e8-4437-8b14-f9fbf1268d03] Running
	I1204 20:10:11.572592   27912 system_pods.go:61] "coredns-7c65d6cfc9-8kztf" [40363110-9dbd-47ae-8aec-70630543d005] Running
	I1204 20:10:11.572597   27912 system_pods.go:61] "etcd-ha-739930" [35305e9d-e464-498a-b2a7-6008dcaaf04c] Running
	I1204 20:10:11.572600   27912 system_pods.go:61] "etcd-ha-739930-m02" [b870f77d-f65a-4d00-b8da-27bf2f696d35] Running
	I1204 20:10:11.572604   27912 system_pods.go:61] "kindnet-8wsgw" [d8bc54cd-d100-43fa-bda8-28ee9b58b947] Running
	I1204 20:10:11.572607   27912 system_pods.go:61] "kindnet-z6v65" [233b2af5-60f4-4f70-a63f-f7238cfbc55c] Running
	I1204 20:10:11.572612   27912 system_pods.go:61] "kube-apiserver-ha-739930" [d1943e08-b292-4551-bcc7-a14adc4ec336] Running
	I1204 20:10:11.572617   27912 system_pods.go:61] "kube-apiserver-ha-739930-m02" [b05a68fa-e419-43b6-ae14-08dd1635b446] Running
	I1204 20:10:11.572623   27912 system_pods.go:61] "kube-controller-manager-ha-739930" [3db9ec12-4c55-4a78-bef1-4f4cf8f38ae0] Running
	I1204 20:10:11.572628   27912 system_pods.go:61] "kube-controller-manager-ha-739930-m02" [01426d54-9156-4288-b9ae-c639167795b4] Running
	I1204 20:10:11.572635   27912 system_pods.go:61] "kube-proxy-gtw7d" [4481a753-5064-41a6-8f2c-d4710b8ad7bb] Running
	I1204 20:10:11.572641   27912 system_pods.go:61] "kube-proxy-tlhfv" [2f01e7f6-5af2-490b-8a2c-266e1701c102] Running
	I1204 20:10:11.572646   27912 system_pods.go:61] "kube-scheduler-ha-739930" [cc1e6978-7082-494a-afce-e754a35e9b76] Running
	I1204 20:10:11.572651   27912 system_pods.go:61] "kube-scheduler-ha-739930-m02" [cd7d0a65-99e9-4377-9088-f2d7d7165982] Running
	I1204 20:10:11.572655   27912 system_pods.go:61] "kube-vip-ha-739930" [524e54ee-5407-44c3-a2e4-d029f7e6a003] Running
	I1204 20:10:11.572658   27912 system_pods.go:61] "kube-vip-ha-739930-m02" [77595bf0-7e49-4ead-98b0-e1cc5b8533d7] Running
	I1204 20:10:11.572661   27912 system_pods.go:61] "storage-provisioner" [84dfb457-b91f-4070-aa2a-9fbe4c6dd7c8] Running
	I1204 20:10:11.572670   27912 system_pods.go:74] duration metric: took 190.727819ms to wait for pod list to return data ...
	I1204 20:10:11.572678   27912 default_sa.go:34] waiting for default service account to be created ...
	I1204 20:10:11.759027   27912 request.go:632] Waited for 186.27116ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/default/serviceaccounts
	I1204 20:10:11.759095   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/default/serviceaccounts
	I1204 20:10:11.759100   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:11.759108   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:11.759113   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:11.763664   27912 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 20:10:11.763867   27912 default_sa.go:45] found service account: "default"
	I1204 20:10:11.763882   27912 default_sa.go:55] duration metric: took 191.195892ms for default service account to be created ...
	I1204 20:10:11.763890   27912 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 20:10:11.959431   27912 request.go:632] Waited for 195.47766ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1204 20:10:11.959540   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1204 20:10:11.959553   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:11.959560   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:11.959566   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:11.965051   27912 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 20:10:11.970022   27912 system_pods.go:86] 17 kube-system pods found
	I1204 20:10:11.970046   27912 system_pods.go:89] "coredns-7c65d6cfc9-7kbgr" [662019c2-29e8-4437-8b14-f9fbf1268d03] Running
	I1204 20:10:11.970051   27912 system_pods.go:89] "coredns-7c65d6cfc9-8kztf" [40363110-9dbd-47ae-8aec-70630543d005] Running
	I1204 20:10:11.970055   27912 system_pods.go:89] "etcd-ha-739930" [35305e9d-e464-498a-b2a7-6008dcaaf04c] Running
	I1204 20:10:11.970059   27912 system_pods.go:89] "etcd-ha-739930-m02" [b870f77d-f65a-4d00-b8da-27bf2f696d35] Running
	I1204 20:10:11.970067   27912 system_pods.go:89] "kindnet-8wsgw" [d8bc54cd-d100-43fa-bda8-28ee9b58b947] Running
	I1204 20:10:11.970071   27912 system_pods.go:89] "kindnet-z6v65" [233b2af5-60f4-4f70-a63f-f7238cfbc55c] Running
	I1204 20:10:11.970074   27912 system_pods.go:89] "kube-apiserver-ha-739930" [d1943e08-b292-4551-bcc7-a14adc4ec336] Running
	I1204 20:10:11.970078   27912 system_pods.go:89] "kube-apiserver-ha-739930-m02" [b05a68fa-e419-43b6-ae14-08dd1635b446] Running
	I1204 20:10:11.970082   27912 system_pods.go:89] "kube-controller-manager-ha-739930" [3db9ec12-4c55-4a78-bef1-4f4cf8f38ae0] Running
	I1204 20:10:11.970088   27912 system_pods.go:89] "kube-controller-manager-ha-739930-m02" [01426d54-9156-4288-b9ae-c639167795b4] Running
	I1204 20:10:11.970091   27912 system_pods.go:89] "kube-proxy-gtw7d" [4481a753-5064-41a6-8f2c-d4710b8ad7bb] Running
	I1204 20:10:11.970095   27912 system_pods.go:89] "kube-proxy-tlhfv" [2f01e7f6-5af2-490b-8a2c-266e1701c102] Running
	I1204 20:10:11.970098   27912 system_pods.go:89] "kube-scheduler-ha-739930" [cc1e6978-7082-494a-afce-e754a35e9b76] Running
	I1204 20:10:11.970100   27912 system_pods.go:89] "kube-scheduler-ha-739930-m02" [cd7d0a65-99e9-4377-9088-f2d7d7165982] Running
	I1204 20:10:11.970103   27912 system_pods.go:89] "kube-vip-ha-739930" [524e54ee-5407-44c3-a2e4-d029f7e6a003] Running
	I1204 20:10:11.970106   27912 system_pods.go:89] "kube-vip-ha-739930-m02" [77595bf0-7e49-4ead-98b0-e1cc5b8533d7] Running
	I1204 20:10:11.970114   27912 system_pods.go:89] "storage-provisioner" [84dfb457-b91f-4070-aa2a-9fbe4c6dd7c8] Running
	I1204 20:10:11.970124   27912 system_pods.go:126] duration metric: took 206.228874ms to wait for k8s-apps to be running ...
	I1204 20:10:11.970130   27912 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 20:10:11.970170   27912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 20:10:11.984252   27912 system_svc.go:56] duration metric: took 14.113655ms WaitForService to wait for kubelet
	I1204 20:10:11.984285   27912 kubeadm.go:582] duration metric: took 23.13642897s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 20:10:11.984305   27912 node_conditions.go:102] verifying NodePressure condition ...
	I1204 20:10:12.159992   27912 request.go:632] Waited for 175.622844ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes
	I1204 20:10:12.160074   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes
	I1204 20:10:12.160081   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:12.160088   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:12.160092   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:12.163352   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:12.164036   27912 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 20:10:12.164057   27912 node_conditions.go:123] node cpu capacity is 2
	I1204 20:10:12.164070   27912 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 20:10:12.164075   27912 node_conditions.go:123] node cpu capacity is 2
	I1204 20:10:12.164081   27912 node_conditions.go:105] duration metric: took 179.770433ms to run NodePressure ...
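The NodePressure step reads each node's capacity from GET /api/v1/nodes. A minimal Go sketch that lists nodes with client-go and prints the same two capacity fields checked above; the kubeconfig path is an assumption:

// Minimal sketch (illustrative, not minikube's node_conditions.go).
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Same fields as the log: ephemeral storage capacity and CPU capacity.
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n",
			n.Name, n.Status.Capacity.StorageEphemeral().String(), n.Status.Capacity.Cpu().String())
	}
}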
	I1204 20:10:12.164096   27912 start.go:241] waiting for startup goroutines ...
	I1204 20:10:12.164129   27912 start.go:255] writing updated cluster config ...
	I1204 20:10:12.166221   27912 out.go:201] 
	I1204 20:10:12.167682   27912 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:10:12.167793   27912 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/config.json ...
	I1204 20:10:12.169433   27912 out.go:177] * Starting "ha-739930-m03" control-plane node in "ha-739930" cluster
	I1204 20:10:12.170619   27912 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 20:10:12.170641   27912 cache.go:56] Caching tarball of preloaded images
	I1204 20:10:12.170743   27912 preload.go:172] Found /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 20:10:12.170758   27912 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 20:10:12.170867   27912 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/config.json ...
	I1204 20:10:12.171047   27912 start.go:360] acquireMachinesLock for ha-739930-m03: {Name:mkf124e8b45170ae95981b24944344de6899c5b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 20:10:12.171095   27912 start.go:364] duration metric: took 28.989µs to acquireMachinesLock for "ha-739930-m03"
	I1204 20:10:12.171119   27912 start.go:93] Provisioning new machine with config: &{Name:ha-739930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 20:10:12.171232   27912 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1204 20:10:12.172689   27912 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 20:10:12.172776   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:10:12.172819   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:10:12.188562   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34093
	I1204 20:10:12.189008   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:10:12.189520   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:10:12.189541   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:10:12.189894   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:10:12.190074   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetMachineName
	I1204 20:10:12.190188   27912 main.go:141] libmachine: (ha-739930-m03) Calling .DriverName
	I1204 20:10:12.190394   27912 start.go:159] libmachine.API.Create for "ha-739930" (driver="kvm2")
	I1204 20:10:12.190426   27912 client.go:168] LocalClient.Create starting
	I1204 20:10:12.190471   27912 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem
	I1204 20:10:12.190508   27912 main.go:141] libmachine: Decoding PEM data...
	I1204 20:10:12.190530   27912 main.go:141] libmachine: Parsing certificate...
	I1204 20:10:12.190598   27912 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem
	I1204 20:10:12.190629   27912 main.go:141] libmachine: Decoding PEM data...
	I1204 20:10:12.190652   27912 main.go:141] libmachine: Parsing certificate...
	I1204 20:10:12.190679   27912 main.go:141] libmachine: Running pre-create checks...
	I1204 20:10:12.190691   27912 main.go:141] libmachine: (ha-739930-m03) Calling .PreCreateCheck
	I1204 20:10:12.190909   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetConfigRaw
	I1204 20:10:12.191309   27912 main.go:141] libmachine: Creating machine...
	I1204 20:10:12.191322   27912 main.go:141] libmachine: (ha-739930-m03) Calling .Create
	I1204 20:10:12.191476   27912 main.go:141] libmachine: (ha-739930-m03) Creating KVM machine...
	I1204 20:10:12.192652   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found existing default KVM network
	I1204 20:10:12.192779   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found existing private KVM network mk-ha-739930
	I1204 20:10:12.192908   27912 main.go:141] libmachine: (ha-739930-m03) Setting up store path in /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03 ...
	I1204 20:10:12.192934   27912 main.go:141] libmachine: (ha-739930-m03) Building disk image from file:///home/jenkins/minikube-integration/19985-10581/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1204 20:10:12.192988   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:12.192887   28697 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 20:10:12.193089   27912 main.go:141] libmachine: (ha-739930-m03) Downloading /home/jenkins/minikube-integration/19985-10581/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19985-10581/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1204 20:10:12.422847   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:12.422708   28697 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03/id_rsa...
	I1204 20:10:12.571024   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:12.570898   28697 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03/ha-739930-m03.rawdisk...
	I1204 20:10:12.571065   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Writing magic tar header
	I1204 20:10:12.571083   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Writing SSH key tar header
	I1204 20:10:12.571096   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:12.571045   28697 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03 ...
	I1204 20:10:12.571246   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03
	I1204 20:10:12.571291   27912 main.go:141] libmachine: (ha-739930-m03) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03 (perms=drwx------)
	I1204 20:10:12.571302   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube/machines
	I1204 20:10:12.571314   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 20:10:12.571323   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581
	I1204 20:10:12.571331   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1204 20:10:12.571339   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Checking permissions on dir: /home/jenkins
	I1204 20:10:12.571346   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Checking permissions on dir: /home
	I1204 20:10:12.571354   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Skipping /home - not owner
	I1204 20:10:12.571391   27912 main.go:141] libmachine: (ha-739930-m03) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube/machines (perms=drwxr-xr-x)
	I1204 20:10:12.571415   27912 main.go:141] libmachine: (ha-739930-m03) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube (perms=drwxr-xr-x)
	I1204 20:10:12.571432   27912 main.go:141] libmachine: (ha-739930-m03) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581 (perms=drwxrwxr-x)
	I1204 20:10:12.571447   27912 main.go:141] libmachine: (ha-739930-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1204 20:10:12.571458   27912 main.go:141] libmachine: (ha-739930-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1204 20:10:12.571477   27912 main.go:141] libmachine: (ha-739930-m03) Creating domain...
	I1204 20:10:12.572409   27912 main.go:141] libmachine: (ha-739930-m03) define libvirt domain using xml: 
	I1204 20:10:12.572438   27912 main.go:141] libmachine: (ha-739930-m03) <domain type='kvm'>
	I1204 20:10:12.572449   27912 main.go:141] libmachine: (ha-739930-m03)   <name>ha-739930-m03</name>
	I1204 20:10:12.572461   27912 main.go:141] libmachine: (ha-739930-m03)   <memory unit='MiB'>2200</memory>
	I1204 20:10:12.572474   27912 main.go:141] libmachine: (ha-739930-m03)   <vcpu>2</vcpu>
	I1204 20:10:12.572480   27912 main.go:141] libmachine: (ha-739930-m03)   <features>
	I1204 20:10:12.572490   27912 main.go:141] libmachine: (ha-739930-m03)     <acpi/>
	I1204 20:10:12.572496   27912 main.go:141] libmachine: (ha-739930-m03)     <apic/>
	I1204 20:10:12.572505   27912 main.go:141] libmachine: (ha-739930-m03)     <pae/>
	I1204 20:10:12.572511   27912 main.go:141] libmachine: (ha-739930-m03)     
	I1204 20:10:12.572522   27912 main.go:141] libmachine: (ha-739930-m03)   </features>
	I1204 20:10:12.572529   27912 main.go:141] libmachine: (ha-739930-m03)   <cpu mode='host-passthrough'>
	I1204 20:10:12.572539   27912 main.go:141] libmachine: (ha-739930-m03)   
	I1204 20:10:12.572549   27912 main.go:141] libmachine: (ha-739930-m03)   </cpu>
	I1204 20:10:12.572577   27912 main.go:141] libmachine: (ha-739930-m03)   <os>
	I1204 20:10:12.572599   27912 main.go:141] libmachine: (ha-739930-m03)     <type>hvm</type>
	I1204 20:10:12.572612   27912 main.go:141] libmachine: (ha-739930-m03)     <boot dev='cdrom'/>
	I1204 20:10:12.572622   27912 main.go:141] libmachine: (ha-739930-m03)     <boot dev='hd'/>
	I1204 20:10:12.572630   27912 main.go:141] libmachine: (ha-739930-m03)     <bootmenu enable='no'/>
	I1204 20:10:12.572640   27912 main.go:141] libmachine: (ha-739930-m03)   </os>
	I1204 20:10:12.572648   27912 main.go:141] libmachine: (ha-739930-m03)   <devices>
	I1204 20:10:12.572659   27912 main.go:141] libmachine: (ha-739930-m03)     <disk type='file' device='cdrom'>
	I1204 20:10:12.572673   27912 main.go:141] libmachine: (ha-739930-m03)       <source file='/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03/boot2docker.iso'/>
	I1204 20:10:12.572688   27912 main.go:141] libmachine: (ha-739930-m03)       <target dev='hdc' bus='scsi'/>
	I1204 20:10:12.572708   27912 main.go:141] libmachine: (ha-739930-m03)       <readonly/>
	I1204 20:10:12.572721   27912 main.go:141] libmachine: (ha-739930-m03)     </disk>
	I1204 20:10:12.572747   27912 main.go:141] libmachine: (ha-739930-m03)     <disk type='file' device='disk'>
	I1204 20:10:12.572758   27912 main.go:141] libmachine: (ha-739930-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1204 20:10:12.572766   27912 main.go:141] libmachine: (ha-739930-m03)       <source file='/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03/ha-739930-m03.rawdisk'/>
	I1204 20:10:12.572780   27912 main.go:141] libmachine: (ha-739930-m03)       <target dev='hda' bus='virtio'/>
	I1204 20:10:12.572788   27912 main.go:141] libmachine: (ha-739930-m03)     </disk>
	I1204 20:10:12.572792   27912 main.go:141] libmachine: (ha-739930-m03)     <interface type='network'>
	I1204 20:10:12.572798   27912 main.go:141] libmachine: (ha-739930-m03)       <source network='mk-ha-739930'/>
	I1204 20:10:12.572802   27912 main.go:141] libmachine: (ha-739930-m03)       <model type='virtio'/>
	I1204 20:10:12.572807   27912 main.go:141] libmachine: (ha-739930-m03)     </interface>
	I1204 20:10:12.572814   27912 main.go:141] libmachine: (ha-739930-m03)     <interface type='network'>
	I1204 20:10:12.572819   27912 main.go:141] libmachine: (ha-739930-m03)       <source network='default'/>
	I1204 20:10:12.572825   27912 main.go:141] libmachine: (ha-739930-m03)       <model type='virtio'/>
	I1204 20:10:12.572842   27912 main.go:141] libmachine: (ha-739930-m03)     </interface>
	I1204 20:10:12.572860   27912 main.go:141] libmachine: (ha-739930-m03)     <serial type='pty'>
	I1204 20:10:12.572872   27912 main.go:141] libmachine: (ha-739930-m03)       <target port='0'/>
	I1204 20:10:12.572883   27912 main.go:141] libmachine: (ha-739930-m03)     </serial>
	I1204 20:10:12.572904   27912 main.go:141] libmachine: (ha-739930-m03)     <console type='pty'>
	I1204 20:10:12.572914   27912 main.go:141] libmachine: (ha-739930-m03)       <target type='serial' port='0'/>
	I1204 20:10:12.572922   27912 main.go:141] libmachine: (ha-739930-m03)     </console>
	I1204 20:10:12.572932   27912 main.go:141] libmachine: (ha-739930-m03)     <rng model='virtio'>
	I1204 20:10:12.572945   27912 main.go:141] libmachine: (ha-739930-m03)       <backend model='random'>/dev/random</backend>
	I1204 20:10:12.572957   27912 main.go:141] libmachine: (ha-739930-m03)     </rng>
	I1204 20:10:12.572965   27912 main.go:141] libmachine: (ha-739930-m03)     
	I1204 20:10:12.572973   27912 main.go:141] libmachine: (ha-739930-m03)     
	I1204 20:10:12.572983   27912 main.go:141] libmachine: (ha-739930-m03)   </devices>
	I1204 20:10:12.572991   27912 main.go:141] libmachine: (ha-739930-m03) </domain>
	I1204 20:10:12.572996   27912 main.go:141] libmachine: (ha-739930-m03) 
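The block above is the libvirt domain XML the kvm2 driver defines for ha-739930-m03. A minimal Go sketch of the define-and-start step, assuming the libvirt.org/go/libvirt bindings and a local qemu:///system connection; it is an illustration, not the docker-machine-driver-kvm2 source:

// Minimal sketch: define a domain from XML and start it, as in "Creating domain...".
package main

import (
	"fmt"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	xml, err := os.ReadFile("domain.xml") // e.g. the <domain> document printed above
	if err != nil {
		panic(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boots the VM; the DHCP lease follows
		panic(err)
	}
	name, _ := dom.GetName()
	fmt.Println("defined and started domain", name)
}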
	I1204 20:10:12.580033   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:71:b7:c8 in network default
	I1204 20:10:12.580713   27912 main.go:141] libmachine: (ha-739930-m03) Ensuring networks are active...
	I1204 20:10:12.580737   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:12.581680   27912 main.go:141] libmachine: (ha-739930-m03) Ensuring network default is active
	I1204 20:10:12.582031   27912 main.go:141] libmachine: (ha-739930-m03) Ensuring network mk-ha-739930 is active
	I1204 20:10:12.582464   27912 main.go:141] libmachine: (ha-739930-m03) Getting domain xml...
	I1204 20:10:12.583287   27912 main.go:141] libmachine: (ha-739930-m03) Creating domain...
	I1204 20:10:13.809969   27912 main.go:141] libmachine: (ha-739930-m03) Waiting to get IP...
	I1204 20:10:13.810804   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:13.811158   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:13.811215   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:13.811149   28697 retry.go:31] will retry after 211.474142ms: waiting for machine to come up
	I1204 20:10:14.024550   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:14.024996   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:14.025024   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:14.024958   28697 retry.go:31] will retry after 355.071975ms: waiting for machine to come up
	I1204 20:10:14.381391   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:14.381825   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:14.381857   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:14.381781   28697 retry.go:31] will retry after 319.974042ms: waiting for machine to come up
	I1204 20:10:14.703466   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:14.703910   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:14.703951   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:14.703877   28697 retry.go:31] will retry after 609.562735ms: waiting for machine to come up
	I1204 20:10:15.314561   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:15.315069   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:15.315101   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:15.315013   28697 retry.go:31] will retry after 486.973077ms: waiting for machine to come up
	I1204 20:10:15.803653   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:15.804185   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:15.804213   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:15.804126   28697 retry.go:31] will retry after 675.766149ms: waiting for machine to come up
	I1204 20:10:16.481967   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:16.482459   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:16.482489   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:16.482406   28697 retry.go:31] will retry after 1.174103834s: waiting for machine to come up
	I1204 20:10:17.658189   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:17.658580   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:17.658608   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:17.658533   28697 retry.go:31] will retry after 1.454065165s: waiting for machine to come up
	I1204 20:10:19.114276   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:19.114810   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:19.114839   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:19.114726   28697 retry.go:31] will retry after 1.181631433s: waiting for machine to come up
	I1204 20:10:20.297423   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:20.297826   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:20.297856   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:20.297775   28697 retry.go:31] will retry after 1.797113318s: waiting for machine to come up
	I1204 20:10:22.096493   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:22.096936   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:22.096963   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:22.096891   28697 retry.go:31] will retry after 2.640330643s: waiting for machine to come up
	I1204 20:10:24.740014   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:24.740549   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:24.740589   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:24.740509   28697 retry.go:31] will retry after 3.427854139s: waiting for machine to come up
	I1204 20:10:28.170039   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:28.170450   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:28.170480   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:28.170413   28697 retry.go:31] will retry after 3.100818386s: waiting for machine to come up
	I1204 20:10:31.273778   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:31.274339   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:31.274370   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:31.274261   28697 retry.go:31] will retry after 5.17411421s: waiting for machine to come up
	I1204 20:10:36.453055   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:36.453514   27912 main.go:141] libmachine: (ha-739930-m03) Found IP for machine: 192.168.39.176
	I1204 20:10:36.453546   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has current primary IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
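The "will retry after ..." lines above show an increasing, jittered delay while the new domain waits for a DHCP lease. A minimal Go sketch of that pattern; the helper name, attempt count, and backoff factor are assumptions, not minikube's retry.go:

// Minimal sketch: poll a lookup function with growing, jittered delays.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	delay := 200 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay)))
		time.Sleep(delay + jitter)
		delay *= 2 // back off, mirroring the lengthening "will retry after" waits
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	ip, err := waitForIP(func() (string, error) {
		return "", errors.New("no DHCP lease yet") // stand-in for a libvirt lease query
	}, 5)
	fmt.Println(ip, err)
}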
	I1204 20:10:36.453554   27912 main.go:141] libmachine: (ha-739930-m03) Reserving static IP address...
	I1204 20:10:36.453982   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find host DHCP lease matching {name: "ha-739930-m03", mac: "52:54:00:8f:55:42", ip: "192.168.39.176"} in network mk-ha-739930
	I1204 20:10:36.527779   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Getting to WaitForSSH function...
	I1204 20:10:36.527812   27912 main.go:141] libmachine: (ha-739930-m03) Reserved static IP address: 192.168.39.176
	I1204 20:10:36.527825   27912 main.go:141] libmachine: (ha-739930-m03) Waiting for SSH to be available...
	I1204 20:10:36.530460   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:36.530890   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:36.530918   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:36.531105   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Using SSH client type: external
	I1204 20:10:36.531134   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03/id_rsa (-rw-------)
	I1204 20:10:36.531171   27912 main.go:141] libmachine: (ha-739930-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.176 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 20:10:36.531193   27912 main.go:141] libmachine: (ha-739930-m03) DBG | About to run SSH command:
	I1204 20:10:36.531210   27912 main.go:141] libmachine: (ha-739930-m03) DBG | exit 0
	I1204 20:10:36.659229   27912 main.go:141] libmachine: (ha-739930-m03) DBG | SSH cmd err, output: <nil>: 
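WaitForSSH above treats the machine as reachable once an external `ssh ... exit 0` succeeds. A minimal Go sketch of the same loop using os/exec; the key path and address are copied from the log, while the retry count and sleep interval are assumptions:

// Minimal sketch: keep running "ssh ... exit 0" until it exits cleanly.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", "/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03/id_rsa",
		"docker@192.168.39.176", "exit 0",
	}
	for i := 0; i < 30; i++ {
		if err := exec.Command("ssh", args...).Run(); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for SSH")
}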
	I1204 20:10:36.659536   27912 main.go:141] libmachine: (ha-739930-m03) KVM machine creation complete!
	I1204 20:10:36.659863   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetConfigRaw
	I1204 20:10:36.660403   27912 main.go:141] libmachine: (ha-739930-m03) Calling .DriverName
	I1204 20:10:36.660622   27912 main.go:141] libmachine: (ha-739930-m03) Calling .DriverName
	I1204 20:10:36.660802   27912 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1204 20:10:36.660816   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetState
	I1204 20:10:36.662148   27912 main.go:141] libmachine: Detecting operating system of created instance...
	I1204 20:10:36.662160   27912 main.go:141] libmachine: Waiting for SSH to be available...
	I1204 20:10:36.662181   27912 main.go:141] libmachine: Getting to WaitForSSH function...
	I1204 20:10:36.662187   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHHostname
	I1204 20:10:36.664336   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:36.664681   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:36.664694   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:36.664829   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHPort
	I1204 20:10:36.664988   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:36.665140   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:36.665284   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHUsername
	I1204 20:10:36.665446   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:10:36.665639   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1204 20:10:36.665651   27912 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1204 20:10:36.774558   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 20:10:36.774575   27912 main.go:141] libmachine: Detecting the provisioner...
	I1204 20:10:36.774582   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHHostname
	I1204 20:10:36.777253   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:36.777655   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:36.777682   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:36.777862   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHPort
	I1204 20:10:36.778048   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:36.778224   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:36.778333   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHUsername
	I1204 20:10:36.778478   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:10:36.778662   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1204 20:10:36.778673   27912 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1204 20:10:36.891601   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1204 20:10:36.891668   27912 main.go:141] libmachine: found compatible host: buildroot
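Provisioner detection above reads /etc/os-release over SSH and matches on the ID field. A minimal Go sketch that parses the exact output shown and reaches the same "found compatible host: buildroot" result; it is illustrative, not libmachine's detector:

// Minimal sketch: parse os-release key=value pairs and pick a provisioner by ID.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

const osRelease = `NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"`

func main() {
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		if k, v, ok := strings.Cut(sc.Text(), "="); ok {
			fields[k] = strings.Trim(v, `"`)
		}
	}
	if fields["ID"] == "buildroot" {
		fmt.Println("found compatible host: buildroot")
	} else {
		fmt.Println("unsupported host:", fields["PRETTY_NAME"])
	}
}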
	I1204 20:10:36.891681   27912 main.go:141] libmachine: Provisioning with buildroot...
	I1204 20:10:36.891691   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetMachineName
	I1204 20:10:36.891891   27912 buildroot.go:166] provisioning hostname "ha-739930-m03"
	I1204 20:10:36.891918   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetMachineName
	I1204 20:10:36.892100   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHHostname
	I1204 20:10:36.894477   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:36.894866   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:36.894903   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:36.895026   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHPort
	I1204 20:10:36.895181   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:36.895327   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:36.895457   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHUsername
	I1204 20:10:36.895582   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:10:36.895780   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1204 20:10:36.895798   27912 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-739930-m03 && echo "ha-739930-m03" | sudo tee /etc/hostname
	I1204 20:10:37.022149   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-739930-m03
	
	I1204 20:10:37.022188   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHHostname
	I1204 20:10:37.024859   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.025302   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:37.025324   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.025555   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHPort
	I1204 20:10:37.025739   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:37.025923   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:37.026044   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHUsername
	I1204 20:10:37.026196   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:10:37.026355   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1204 20:10:37.026371   27912 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-739930-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-739930-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-739930-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 20:10:37.143730   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 20:10:37.143754   27912 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 20:10:37.143777   27912 buildroot.go:174] setting up certificates
	I1204 20:10:37.143788   27912 provision.go:84] configureAuth start
	I1204 20:10:37.143795   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetMachineName
	I1204 20:10:37.144053   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetIP
	I1204 20:10:37.146742   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.147064   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:37.147095   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.147234   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHHostname
	I1204 20:10:37.149352   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.149692   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:37.149719   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.149832   27912 provision.go:143] copyHostCerts
	I1204 20:10:37.149875   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 20:10:37.149914   27912 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 20:10:37.149926   27912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 20:10:37.150010   27912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 20:10:37.150120   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 20:10:37.150164   27912 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 20:10:37.150175   27912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 20:10:37.150216   27912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 20:10:37.150301   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 20:10:37.150325   27912 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 20:10:37.150331   27912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 20:10:37.150367   27912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 20:10:37.150468   27912 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.ha-739930-m03 san=[127.0.0.1 192.168.39.176 ha-739930-m03 localhost minikube]
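The server certificate above is generated with a SAN list covering 127.0.0.1, the machine IP, the hostname, localhost and minikube, signed against the cluster CA. A minimal Go sketch of building a certificate with that SAN set; for brevity it self-signs instead of signing with ca-key.pem, and the 26280h lifetime mirrors the CertExpiration value in the config above:

// Minimal sketch: issue a server cert whose SANs match the ones in the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-739930-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-739930-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.176")},
	}
	// Self-signed here; the real flow uses the CA cert/key as the parent and signer.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	out, err := os.Create("server.pem")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	if err := pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}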
	I1204 20:10:37.504595   27912 provision.go:177] copyRemoteCerts
	I1204 20:10:37.504652   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 20:10:37.504676   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHHostname
	I1204 20:10:37.507572   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.507995   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:37.508023   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.508251   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHPort
	I1204 20:10:37.508469   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:37.508628   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHUsername
	I1204 20:10:37.508752   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03/id_rsa Username:docker}
	I1204 20:10:37.592737   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1204 20:10:37.592815   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 20:10:37.614702   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1204 20:10:37.614759   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1204 20:10:37.636793   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1204 20:10:37.636856   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1204 20:10:37.657514   27912 provision.go:87] duration metric: took 513.715697ms to configureAuth
	I1204 20:10:37.657537   27912 buildroot.go:189] setting minikube options for container-runtime
	I1204 20:10:37.657776   27912 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:10:37.657846   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHHostname
	I1204 20:10:37.660375   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.660716   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:37.660743   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.660915   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHPort
	I1204 20:10:37.661101   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:37.661283   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:37.661394   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHUsername
	I1204 20:10:37.661530   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:10:37.661715   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1204 20:10:37.661731   27912 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 20:10:37.909620   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 20:10:37.909653   27912 main.go:141] libmachine: Checking connection to Docker...
	I1204 20:10:37.909661   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetURL
	I1204 20:10:37.911012   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Using libvirt version 6000000
	I1204 20:10:37.913430   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.913836   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:37.913865   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.913996   27912 main.go:141] libmachine: Docker is up and running!
	I1204 20:10:37.914009   27912 main.go:141] libmachine: Reticulating splines...
	I1204 20:10:37.914014   27912 client.go:171] duration metric: took 25.723578899s to LocalClient.Create
	I1204 20:10:37.914034   27912 start.go:167] duration metric: took 25.723643031s to libmachine.API.Create "ha-739930"
	I1204 20:10:37.914045   27912 start.go:293] postStartSetup for "ha-739930-m03" (driver="kvm2")
	I1204 20:10:37.914058   27912 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 20:10:37.914082   27912 main.go:141] libmachine: (ha-739930-m03) Calling .DriverName
	I1204 20:10:37.914308   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 20:10:37.914329   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHHostname
	I1204 20:10:37.916698   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.917013   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:37.917037   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.917163   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHPort
	I1204 20:10:37.917355   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:37.917507   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHUsername
	I1204 20:10:37.917647   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03/id_rsa Username:docker}
	I1204 20:10:38.000720   27912 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 20:10:38.004659   27912 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 20:10:38.004677   27912 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 20:10:38.004732   27912 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 20:10:38.004797   27912 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 20:10:38.004805   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> /etc/ssl/certs/177432.pem
	I1204 20:10:38.004881   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 20:10:38.014138   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 20:10:38.035007   27912 start.go:296] duration metric: took 120.952939ms for postStartSetup
	I1204 20:10:38.035043   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetConfigRaw
	I1204 20:10:38.035625   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetIP
	I1204 20:10:38.038045   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:38.038404   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:38.038431   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:38.038707   27912 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/config.json ...
	I1204 20:10:38.038928   27912 start.go:128] duration metric: took 25.86768393s to createHost
	I1204 20:10:38.038955   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHHostname
	I1204 20:10:38.040921   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:38.041241   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:38.041260   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:38.041384   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHPort
	I1204 20:10:38.041567   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:38.041725   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:38.041870   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHUsername
	I1204 20:10:38.042033   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:10:38.042234   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1204 20:10:38.042247   27912 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 20:10:38.147467   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733343038.125898138
	
	I1204 20:10:38.147487   27912 fix.go:216] guest clock: 1733343038.125898138
	I1204 20:10:38.147494   27912 fix.go:229] Guest: 2024-12-04 20:10:38.125898138 +0000 UTC Remote: 2024-12-04 20:10:38.038942767 +0000 UTC m=+146.136314147 (delta=86.955371ms)
	I1204 20:10:38.147507   27912 fix.go:200] guest clock delta is within tolerance: 86.955371ms
	I1204 20:10:38.147511   27912 start.go:83] releasing machines lock for "ha-739930-m03", held for 25.976405222s
	I1204 20:10:38.147527   27912 main.go:141] libmachine: (ha-739930-m03) Calling .DriverName
	I1204 20:10:38.147758   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetIP
	I1204 20:10:38.150388   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:38.150780   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:38.150809   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:38.153038   27912 out.go:177] * Found network options:
	I1204 20:10:38.154623   27912 out.go:177]   - NO_PROXY=192.168.39.183,192.168.39.216
	W1204 20:10:38.155949   27912 proxy.go:119] fail to check proxy env: Error ip not in block
	W1204 20:10:38.155970   27912 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 20:10:38.155981   27912 main.go:141] libmachine: (ha-739930-m03) Calling .DriverName
	I1204 20:10:38.156494   27912 main.go:141] libmachine: (ha-739930-m03) Calling .DriverName
	I1204 20:10:38.156668   27912 main.go:141] libmachine: (ha-739930-m03) Calling .DriverName
	I1204 20:10:38.156762   27912 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 20:10:38.156817   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHHostname
	W1204 20:10:38.156874   27912 proxy.go:119] fail to check proxy env: Error ip not in block
	W1204 20:10:38.156896   27912 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 20:10:38.156981   27912 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 20:10:38.157003   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHHostname
	I1204 20:10:38.159414   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:38.159669   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:38.159823   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:38.159847   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:38.159966   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHPort
	I1204 20:10:38.160094   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:38.160122   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:38.160127   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:38.160279   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHPort
	I1204 20:10:38.160293   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHUsername
	I1204 20:10:38.160410   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03/id_rsa Username:docker}
	I1204 20:10:38.160424   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:38.160525   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHUsername
	I1204 20:10:38.160650   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03/id_rsa Username:docker}
	I1204 20:10:38.394150   27912 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 20:10:38.401145   27912 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 20:10:38.401209   27912 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 20:10:38.417195   27912 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 20:10:38.417223   27912 start.go:495] detecting cgroup driver to use...
	I1204 20:10:38.417296   27912 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 20:10:38.435131   27912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 20:10:38.448563   27912 docker.go:217] disabling cri-docker service (if available) ...
	I1204 20:10:38.448618   27912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 20:10:38.461725   27912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 20:10:38.474727   27912 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 20:10:38.588798   27912 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 20:10:38.745587   27912 docker.go:233] disabling docker service ...
	I1204 20:10:38.745653   27912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 20:10:38.759235   27912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 20:10:38.771608   27912 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 20:10:38.877832   27912 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 20:10:38.982502   27912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 20:10:38.995491   27912 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 20:10:39.012043   27912 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 20:10:39.012100   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:10:39.021299   27912 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 20:10:39.021358   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:10:39.030541   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:10:39.039631   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:10:39.048551   27912 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 20:10:39.058773   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:10:39.068061   27912 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:10:39.083733   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:10:39.092600   27912 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 20:10:39.101297   27912 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 20:10:39.101340   27912 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 20:10:39.113156   27912 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 20:10:39.122303   27912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 20:10:39.227598   27912 ssh_runner.go:195] Run: sudo systemctl restart crio
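The sed edits above (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) amount to a small CRI-O drop-in. An illustrative sketch of what /etc/crio/crio.conf.d/02-crio.conf should contain after this step, assuming otherwise stock contents:

    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

The modprobe br_netfilter and ip_forward writes above are the usual bridge/forwarding prerequisites applied before crio is restarted.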
	I1204 20:10:39.312250   27912 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 20:10:39.312323   27912 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 20:10:39.316600   27912 start.go:563] Will wait 60s for crictl version
	I1204 20:10:39.316650   27912 ssh_runner.go:195] Run: which crictl
	I1204 20:10:39.320258   27912 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 20:10:39.357732   27912 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 20:10:39.357795   27912 ssh_runner.go:195] Run: crio --version
	I1204 20:10:39.390225   27912 ssh_runner.go:195] Run: crio --version
	I1204 20:10:39.419008   27912 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 20:10:39.420400   27912 out.go:177]   - env NO_PROXY=192.168.39.183
	I1204 20:10:39.421790   27912 out.go:177]   - env NO_PROXY=192.168.39.183,192.168.39.216
	I1204 20:10:39.423169   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetIP
	I1204 20:10:39.425979   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:39.426437   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:39.426466   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:39.426672   27912 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1204 20:10:39.431086   27912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 20:10:39.443488   27912 mustload.go:65] Loading cluster: ha-739930
	I1204 20:10:39.443719   27912 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:10:39.443987   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:10:39.444059   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:10:39.459062   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36859
	I1204 20:10:39.459454   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:10:39.459962   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:10:39.459982   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:10:39.460287   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:10:39.460468   27912 main.go:141] libmachine: (ha-739930) Calling .GetState
	I1204 20:10:39.462100   27912 host.go:66] Checking if "ha-739930" exists ...
	I1204 20:10:39.462434   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:10:39.462472   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:10:39.476580   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34581
	I1204 20:10:39.476947   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:10:39.477280   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:10:39.477302   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:10:39.477596   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:10:39.477759   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:10:39.477901   27912 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930 for IP: 192.168.39.176
	I1204 20:10:39.477913   27912 certs.go:194] generating shared ca certs ...
	I1204 20:10:39.477926   27912 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:10:39.478032   27912 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 20:10:39.478067   27912 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 20:10:39.478076   27912 certs.go:256] generating profile certs ...
	I1204 20:10:39.478140   27912 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.key
	I1204 20:10:39.478162   27912 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.58072db8
	I1204 20:10:39.478183   27912 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.58072db8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.183 192.168.39.216 192.168.39.176 192.168.39.254]
	I1204 20:10:39.647686   27912 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.58072db8 ...
	I1204 20:10:39.647712   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.58072db8: {Name:mka45902bb26beb0e72f217dc87741ab3309d928 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:10:39.647887   27912 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.58072db8 ...
	I1204 20:10:39.647910   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.58072db8: {Name:mk0280d80935ba52cb98acc5d6236d25a3a3095d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:10:39.648008   27912 certs.go:381] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.58072db8 -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt
	I1204 20:10:39.648187   27912 certs.go:385] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.58072db8 -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key
	I1204 20:10:39.648361   27912 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key
	I1204 20:10:39.648383   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1204 20:10:39.648403   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1204 20:10:39.648422   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1204 20:10:39.648440   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1204 20:10:39.648458   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1204 20:10:39.648475   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1204 20:10:39.648493   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1204 20:10:39.663476   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1204 20:10:39.663545   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 20:10:39.663584   27912 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 20:10:39.663595   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 20:10:39.663616   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 20:10:39.663649   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 20:10:39.663681   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 20:10:39.663737   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 20:10:39.663769   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:10:39.663786   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem -> /usr/share/ca-certificates/17743.pem
	I1204 20:10:39.663805   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> /usr/share/ca-certificates/177432.pem
	I1204 20:10:39.663843   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:10:39.666431   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:10:39.666764   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:10:39.666781   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:10:39.666946   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:10:39.667122   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:10:39.667283   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:10:39.667442   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:10:39.739814   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1204 20:10:39.744522   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1204 20:10:39.755922   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1204 20:10:39.759927   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1204 20:10:39.770702   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1204 20:10:39.775183   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1204 20:10:39.787784   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1204 20:10:39.792674   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1204 20:10:39.805368   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1204 20:10:39.809503   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1204 20:10:39.828088   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1204 20:10:39.832824   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1204 20:10:39.844859   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 20:10:39.869334   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 20:10:39.893785   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 20:10:39.916818   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 20:10:39.939176   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1204 20:10:39.961163   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 20:10:39.983006   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 20:10:40.005681   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1204 20:10:40.028546   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 20:10:40.051809   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 20:10:40.074413   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 20:10:40.097808   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1204 20:10:40.113924   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1204 20:10:40.131147   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1204 20:10:40.149216   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1204 20:10:40.166655   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1204 20:10:40.182489   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1204 20:10:40.200001   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
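The apiserver serving certificate copied above was generated earlier for the IP set [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.183 192.168.39.216 192.168.39.176 192.168.39.254], so it covers all three control planes plus the VIP. An illustrative way to confirm the SANs on the node (not part of the test run):

    $ sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'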
	I1204 20:10:40.221223   27912 ssh_runner.go:195] Run: openssl version
	I1204 20:10:40.226405   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 20:10:40.235863   27912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:10:40.239603   27912 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:10:40.239672   27912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:10:40.245186   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 20:10:40.256188   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 20:10:40.266724   27912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 20:10:40.271086   27912 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 20:10:40.271119   27912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 20:10:40.276304   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 20:10:40.286222   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 20:10:40.297060   27912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 20:10:40.301192   27912 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 20:10:40.301236   27912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 20:10:40.307282   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
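The test -L / ln -fs commands above create the OpenSSL subject-hash symlinks under /etc/ssl/certs that TLS clients use for CA lookup. A hypothetical session showing how one link name is derived, using the b5213941.0 value seen above for minikubeCA.pem:

    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    b5213941
    $ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0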
	I1204 20:10:40.317487   27912 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 20:10:40.320982   27912 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1204 20:10:40.321045   27912 kubeadm.go:934] updating node {m03 192.168.39.176 8443 v1.31.2 crio true true} ...
	I1204 20:10:40.321144   27912 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-739930-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.176
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 20:10:40.321175   27912 kube-vip.go:115] generating kube-vip config ...
	I1204 20:10:40.321208   27912 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1204 20:10:40.335360   27912 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1204 20:10:40.335431   27912 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
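Per the config above, kube-vip runs leader election on the plndr-cp-lock lease and binds the 192.168.39.254 control-plane VIP to eth0 on the current leader. An illustrative check (not part of the test run) from whichever node holds the lease:

    $ ip addr show eth0 | grep 192.168.39.254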
	I1204 20:10:40.335468   27912 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 20:10:40.344356   27912 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1204 20:10:40.344387   27912 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1204 20:10:40.352481   27912 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1204 20:10:40.352490   27912 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1204 20:10:40.352500   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1204 20:10:40.352520   27912 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1204 20:10:40.352529   27912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 20:10:40.352538   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1204 20:10:40.352555   27912 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1204 20:10:40.352614   27912 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1204 20:10:40.357211   27912 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1204 20:10:40.357232   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1204 20:10:40.373861   27912 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1204 20:10:40.373888   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1204 20:10:40.393917   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1204 20:10:40.394019   27912 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1204 20:10:40.435438   27912 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1204 20:10:40.435480   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
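The kubeadm, kubectl, and kubelet binaries above come from the dl.k8s.io URLs logged earlier, each verified against its published .sha256 file. An illustrative manual fetch-and-verify of one of them, using the same endpoints:

    $ curl -LO https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet
    $ curl -LO https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
    $ echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check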
	I1204 20:10:41.204864   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1204 20:10:41.214084   27912 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1204 20:10:41.230130   27912 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 20:10:41.245590   27912 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1204 20:10:41.261184   27912 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1204 20:10:41.264917   27912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 20:10:41.276834   27912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 20:10:41.407860   27912 ssh_runner.go:195] Run: sudo systemctl start kubelet
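At this point the kubelet unit, its 10-kubeadm.conf drop-in, and the kube-vip static-pod manifest are on the node and the service has been started. An illustrative way to inspect the result on the machine (not part of the test run):

    $ systemctl cat kubelet
    $ ls /etc/kubernetes/manifests/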
	I1204 20:10:41.425834   27912 host.go:66] Checking if "ha-739930" exists ...
	I1204 20:10:41.426358   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:10:41.426432   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:10:41.444259   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39271
	I1204 20:10:41.444841   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:10:41.445793   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:10:41.445819   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:10:41.446152   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:10:41.446372   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:10:41.446554   27912 start.go:317] joinCluster: &{Name:ha-739930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 20:10:41.446705   27912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1204 20:10:41.446730   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:10:41.449938   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:10:41.450354   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:10:41.450382   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:10:41.450525   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:10:41.450704   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:10:41.450893   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:10:41.451051   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:10:41.603198   27912 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 20:10:41.603245   27912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rsc6s7.pvvve9xxbfoucm3c --discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-739930-m03 --control-plane --apiserver-advertise-address=192.168.39.176 --apiserver-bind-port=8443"
	I1204 20:11:02.285051   27912 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rsc6s7.pvvve9xxbfoucm3c --discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-739930-m03 --control-plane --apiserver-advertise-address=192.168.39.176 --apiserver-bind-port=8443": (20.681780468s)
	I1204 20:11:02.285099   27912 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1204 20:11:02.929343   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-739930-m03 minikube.k8s.io/updated_at=2024_12_04T20_11_02_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59 minikube.k8s.io/name=ha-739930 minikube.k8s.io/primary=false
	I1204 20:11:03.053541   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-739930-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1204 20:11:03.177213   27912 start.go:319] duration metric: took 21.7306554s to joinCluster
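With the kubeadm join, node label, and control-plane taint removal complete, m03 should now be registered as a third schedulable control plane. An illustrative verification via the profile's kubeconfig context:

    $ kubectl --context ha-739930 get nodes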
	I1204 20:11:03.177299   27912 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 20:11:03.177647   27912 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:11:03.178583   27912 out.go:177] * Verifying Kubernetes components...
	I1204 20:11:03.179869   27912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 20:11:03.436285   27912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 20:11:03.491544   27912 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 20:11:03.491892   27912 kapi.go:59] client config for ha-739930: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.crt", KeyFile:"/home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.key", CAFile:"/home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1204 20:11:03.491978   27912 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.183:8443
	I1204 20:11:03.492270   27912 node_ready.go:35] waiting up to 6m0s for node "ha-739930-m03" to be "Ready" ...
	I1204 20:11:03.492369   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:03.492380   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:03.492391   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:03.492400   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:03.496740   27912 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 20:11:03.992695   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:03.992717   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:03.992725   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:03.992729   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:03.996010   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:04.493230   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:04.493255   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:04.493265   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:04.493272   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:04.496716   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:04.992539   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:04.992561   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:04.992571   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:04.992577   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:04.995936   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:05.493273   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:05.493300   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:05.493311   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:05.493317   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:05.497413   27912 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 20:11:05.497897   27912 node_ready.go:53] node "ha-739930-m03" has status "Ready":"False"
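The loop above simply re-GETs the node object until its Ready condition turns True, within the 6m0s budget noted earlier. An equivalent illustrative check, assuming the ha-739930 context:

    $ kubectl --context ha-739930 wait --for=condition=Ready node/ha-739930-m03 --timeout=6m0s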
	I1204 20:11:05.993362   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:05.993385   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:05.993392   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:05.993397   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:05.996675   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:06.492587   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:06.492610   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:06.492620   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:06.492627   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:06.495773   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:06.993310   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:06.993331   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:06.993339   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:06.993343   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:06.996864   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:07.492704   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:07.492741   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:07.492750   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:07.492754   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:07.496418   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:07.993375   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:07.993397   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:07.993404   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:07.993414   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:07.996601   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:07.997248   27912 node_ready.go:53] node "ha-739930-m03" has status "Ready":"False"
	I1204 20:11:08.492707   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:08.492739   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:08.492752   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:08.492757   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:08.498736   27912 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 20:11:08.992522   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:08.992546   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:08.992554   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:08.992559   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:08.996681   27912 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 20:11:09.492442   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:09.492462   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:09.492470   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:09.492475   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:09.496143   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:09.992900   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:09.992932   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:09.992939   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:09.992944   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:09.996453   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:10.492481   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:10.492499   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:10.492507   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:10.492513   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:10.496234   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:10.497174   27912 node_ready.go:53] node "ha-739930-m03" has status "Ready":"False"
	I1204 20:11:10.992502   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:10.992525   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:10.992532   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:10.992553   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:10.995639   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:11.493014   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:11.493034   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:11.493042   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:11.493045   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:11.496066   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:11.992460   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:11.992481   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:11.992488   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:11.992492   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:11.995782   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:12.492536   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:12.492559   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:12.492567   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:12.492575   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:12.496512   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:12.993486   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:12.993507   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:12.993515   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:12.993521   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:12.996929   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:12.997503   27912 node_ready.go:53] node "ha-739930-m03" has status "Ready":"False"
	I1204 20:11:13.492705   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:13.492728   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:13.492735   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:13.492739   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:13.495958   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:13.993195   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:13.993235   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:13.993243   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:13.993248   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:13.996458   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:14.492667   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:14.492687   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:14.492695   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:14.492700   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:14.496760   27912 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 20:11:14.992634   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:14.992657   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:14.992665   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:14.992668   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:14.996174   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:15.492623   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:15.492645   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:15.492651   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:15.492656   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:15.496189   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:15.496993   27912 node_ready.go:53] node "ha-739930-m03" has status "Ready":"False"
	I1204 20:11:15.993412   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:15.993432   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:15.993438   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:15.993442   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:15.996343   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:16.492477   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:16.492500   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:16.492508   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:16.492512   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:16.495796   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:16.993504   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:16.993533   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:16.993545   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:16.993552   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:16.996589   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:17.492614   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:17.492637   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:17.492649   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:17.492654   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:17.496032   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:17.992928   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:17.992951   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:17.992958   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:17.992961   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:17.996749   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:17.997385   27912 node_ready.go:53] node "ha-739930-m03" has status "Ready":"False"
	I1204 20:11:18.492596   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:18.492617   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:18.492625   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:18.492629   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:18.495562   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:18.992579   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:18.992604   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:18.992612   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:18.992616   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:18.996070   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:19.493093   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:19.493113   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:19.493121   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:19.493126   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:19.496694   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:19.992762   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:19.992788   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:19.992796   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:19.992802   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:19.996757   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:19.997645   27912 node_ready.go:53] node "ha-739930-m03" has status "Ready":"False"
	I1204 20:11:20.493018   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:20.493038   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:20.493045   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:20.493049   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:20.496165   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:20.993181   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:20.993203   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:20.993211   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:20.993214   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:20.996266   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:21.493006   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:21.493035   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.493044   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.493050   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.496694   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:21.497703   27912 node_ready.go:49] node "ha-739930-m03" has status "Ready":"True"
	I1204 20:11:21.497723   27912 node_ready.go:38] duration metric: took 18.005431822s for node "ha-739930-m03" to be "Ready" ...
	I1204 20:11:21.497731   27912 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 20:11:21.497795   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1204 20:11:21.497804   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.497811   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.497815   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.504465   27912 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1204 20:11:21.510955   27912 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7kbgr" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:21.511029   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-7kbgr
	I1204 20:11:21.511038   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.511050   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.511058   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.514034   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:21.514600   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:11:21.514614   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.514622   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.514627   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.517241   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:21.517672   27912 pod_ready.go:93] pod "coredns-7c65d6cfc9-7kbgr" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:21.517688   27912 pod_ready.go:82] duration metric: took 6.709809ms for pod "coredns-7c65d6cfc9-7kbgr" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:21.517707   27912 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8kztf" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:21.517765   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-8kztf
	I1204 20:11:21.517772   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.517781   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.517791   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.520563   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:21.521278   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:11:21.521296   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.521307   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.521313   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.523869   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:21.524405   27912 pod_ready.go:93] pod "coredns-7c65d6cfc9-8kztf" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:21.524426   27912 pod_ready.go:82] duration metric: took 6.708809ms for pod "coredns-7c65d6cfc9-8kztf" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:21.524435   27912 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:21.524489   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-739930
	I1204 20:11:21.524498   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.524504   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.524510   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.526682   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:21.527365   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:11:21.527393   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.527401   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.527410   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.530023   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:21.530721   27912 pod_ready.go:93] pod "etcd-ha-739930" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:21.530744   27912 pod_ready.go:82] duration metric: took 6.30261ms for pod "etcd-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:21.530758   27912 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:21.530832   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-739930-m02
	I1204 20:11:21.530844   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.530856   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.530866   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.533485   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:21.534074   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:11:21.534089   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.534098   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.534104   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.536315   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:21.536771   27912 pod_ready.go:93] pod "etcd-ha-739930-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:21.536789   27912 pod_ready.go:82] duration metric: took 6.023339ms for pod "etcd-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:21.536798   27912 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-739930-m03" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:21.693086   27912 request.go:632] Waited for 156.229013ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-739930-m03
	I1204 20:11:21.693178   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-739930-m03
	I1204 20:11:21.693187   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.693199   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.693211   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.696805   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:21.893066   27912 request.go:632] Waited for 195.292666ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:21.893122   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:21.893140   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.893148   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.893151   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.896289   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:21.896776   27912 pod_ready.go:93] pod "etcd-ha-739930-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:21.896798   27912 pod_ready.go:82] duration metric: took 359.993172ms for pod "etcd-ha-739930-m03" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:21.896822   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:22.094080   27912 request.go:632] Waited for 197.155628ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-739930
	I1204 20:11:22.094159   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-739930
	I1204 20:11:22.094178   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:22.094195   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:22.094201   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:22.097388   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:22.293809   27912 request.go:632] Waited for 194.988533ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:11:22.293864   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:11:22.293871   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:22.293881   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:22.293886   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:22.297036   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:22.297688   27912 pod_ready.go:93] pod "kube-apiserver-ha-739930" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:22.297708   27912 pod_ready.go:82] duration metric: took 400.873563ms for pod "kube-apiserver-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:22.297721   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:22.493772   27912 request.go:632] Waited for 195.970884ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-739930-m02
	I1204 20:11:22.493834   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-739930-m02
	I1204 20:11:22.493840   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:22.493847   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:22.493850   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:22.497525   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:22.693745   27912 request.go:632] Waited for 195.318737ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:11:22.693830   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:11:22.693837   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:22.693844   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:22.693849   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:22.697438   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:22.697941   27912 pod_ready.go:93] pod "kube-apiserver-ha-739930-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:22.697959   27912 pod_ready.go:82] duration metric: took 400.231011ms for pod "kube-apiserver-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:22.697969   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-739930-m03" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:22.894031   27912 request.go:632] Waited for 195.997225ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-739930-m03
	I1204 20:11:22.894100   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-739930-m03
	I1204 20:11:22.894105   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:22.894113   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:22.894119   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:22.896928   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:23.093056   27912 request.go:632] Waited for 195.290507ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:23.093109   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:23.093116   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:23.093125   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:23.093131   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:23.096071   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:23.096675   27912 pod_ready.go:93] pod "kube-apiserver-ha-739930-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:23.096695   27912 pod_ready.go:82] duration metric: took 398.72057ms for pod "kube-apiserver-ha-739930-m03" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:23.096706   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:23.293761   27912 request.go:632] Waited for 196.979038ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-739930
	I1204 20:11:23.293857   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-739930
	I1204 20:11:23.293863   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:23.293870   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:23.293877   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:23.297313   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:23.493595   27912 request.go:632] Waited for 195.358893ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:11:23.493645   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:11:23.493652   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:23.493662   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:23.493668   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:23.496860   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:23.497431   27912 pod_ready.go:93] pod "kube-controller-manager-ha-739930" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:23.497447   27912 pod_ready.go:82] duration metric: took 400.733171ms for pod "kube-controller-manager-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:23.497457   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:23.693609   27912 request.go:632] Waited for 196.087422ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-739930-m02
	I1204 20:11:23.693665   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-739930-m02
	I1204 20:11:23.693670   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:23.693677   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:23.693681   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:23.697816   27912 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 20:11:23.893073   27912 request.go:632] Waited for 194.284611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:11:23.893134   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:11:23.893157   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:23.893173   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:23.893179   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:23.896273   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:23.896905   27912 pod_ready.go:93] pod "kube-controller-manager-ha-739930-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:23.896921   27912 pod_ready.go:82] duration metric: took 399.455915ms for pod "kube-controller-manager-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:23.896931   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-739930-m03" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:24.094047   27912 request.go:632] Waited for 197.05537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-739930-m03
	I1204 20:11:24.094114   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-739930-m03
	I1204 20:11:24.094120   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:24.094128   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:24.094138   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:24.097347   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:24.293333   27912 request.go:632] Waited for 195.221509ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:24.293408   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:24.293418   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:24.293429   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:24.293439   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:24.296348   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:24.296803   27912 pod_ready.go:93] pod "kube-controller-manager-ha-739930-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:24.296819   27912 pod_ready.go:82] duration metric: took 399.882093ms for pod "kube-controller-manager-ha-739930-m03" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:24.296828   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gtw7d" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:24.493904   27912 request.go:632] Waited for 197.016726ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gtw7d
	I1204 20:11:24.493955   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gtw7d
	I1204 20:11:24.493960   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:24.493967   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:24.493971   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:24.497694   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:24.693075   27912 request.go:632] Waited for 194.571912ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:11:24.693130   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:11:24.693135   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:24.693142   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:24.693146   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:24.696302   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:24.696899   27912 pod_ready.go:93] pod "kube-proxy-gtw7d" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:24.696919   27912 pod_ready.go:82] duration metric: took 400.084608ms for pod "kube-proxy-gtw7d" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:24.696928   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-r4895" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:24.893931   27912 request.go:632] Waited for 196.931451ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r4895
	I1204 20:11:24.894022   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r4895
	I1204 20:11:24.894035   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:24.894043   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:24.894046   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:24.897046   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:25.093243   27912 request.go:632] Waited for 195.305694ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:25.093305   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:25.093310   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:25.093318   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:25.093321   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:25.096337   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:25.096835   27912 pod_ready.go:93] pod "kube-proxy-r4895" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:25.096854   27912 pod_ready.go:82] duration metric: took 399.920087ms for pod "kube-proxy-r4895" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:25.096864   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tlhfv" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:25.294085   27912 request.go:632] Waited for 197.134763ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tlhfv
	I1204 20:11:25.294155   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tlhfv
	I1204 20:11:25.294164   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:25.294174   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:25.294181   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:25.297688   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:25.493811   27912 request.go:632] Waited for 195.37479ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:11:25.493896   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:11:25.493902   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:25.493910   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:25.493914   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:25.497035   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:25.497776   27912 pod_ready.go:93] pod "kube-proxy-tlhfv" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:25.497796   27912 pod_ready.go:82] duration metric: took 400.925065ms for pod "kube-proxy-tlhfv" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:25.497810   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:25.693786   27912 request.go:632] Waited for 195.910848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-739930
	I1204 20:11:25.693855   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-739930
	I1204 20:11:25.693860   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:25.693866   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:25.693870   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:25.697283   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:25.893336   27912 request.go:632] Waited for 195.363737ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:11:25.893392   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:11:25.893398   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:25.893407   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:25.893417   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:25.896883   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:25.897527   27912 pod_ready.go:93] pod "kube-scheduler-ha-739930" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:25.897547   27912 pod_ready.go:82] duration metric: took 399.728095ms for pod "kube-scheduler-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:25.897560   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:26.093716   27912 request.go:632] Waited for 196.07568ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-739930-m02
	I1204 20:11:26.093770   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-739930-m02
	I1204 20:11:26.093775   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:26.093783   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:26.093787   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:26.097490   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:26.293677   27912 request.go:632] Waited for 195.380903ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:11:26.293724   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:11:26.293729   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:26.293736   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:26.293740   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:26.296374   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:26.297059   27912 pod_ready.go:93] pod "kube-scheduler-ha-739930-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:26.297083   27912 pod_ready.go:82] duration metric: took 399.512498ms for pod "kube-scheduler-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:26.297096   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-739930-m03" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:26.493619   27912 request.go:632] Waited for 196.449368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-739930-m03
	I1204 20:11:26.493679   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-739930-m03
	I1204 20:11:26.493687   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:26.493698   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:26.493708   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:26.496613   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:26.693570   27912 request.go:632] Waited for 196.314375ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:26.693652   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:26.693664   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:26.693674   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:26.693683   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:26.696474   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:26.697001   27912 pod_ready.go:93] pod "kube-scheduler-ha-739930-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:26.697020   27912 pod_ready.go:82] duration metric: took 399.916866ms for pod "kube-scheduler-ha-739930-m03" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:26.697032   27912 pod_ready.go:39] duration metric: took 5.199290508s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 20:11:26.697048   27912 api_server.go:52] waiting for apiserver process to appear ...
	I1204 20:11:26.697102   27912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 20:11:26.712884   27912 api_server.go:72] duration metric: took 23.535549754s to wait for apiserver process to appear ...
	I1204 20:11:26.712900   27912 api_server.go:88] waiting for apiserver healthz status ...
	I1204 20:11:26.712916   27912 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I1204 20:11:26.717076   27912 api_server.go:279] https://192.168.39.183:8443/healthz returned 200:
	ok
	I1204 20:11:26.717125   27912 round_trippers.go:463] GET https://192.168.39.183:8443/version
	I1204 20:11:26.717134   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:26.717141   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:26.717145   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:26.718054   27912 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1204 20:11:26.718141   27912 api_server.go:141] control plane version: v1.31.2
	I1204 20:11:26.718158   27912 api_server.go:131] duration metric: took 5.25178ms to wait for apiserver health ...
	I1204 20:11:26.718165   27912 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 20:11:26.893379   27912 request.go:632] Waited for 175.13636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1204 20:11:26.893453   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1204 20:11:26.893459   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:26.893466   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:26.893472   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:26.899023   27912 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 20:11:26.905500   27912 system_pods.go:59] 24 kube-system pods found
	I1204 20:11:26.905525   27912 system_pods.go:61] "coredns-7c65d6cfc9-7kbgr" [662019c2-29e8-4437-8b14-f9fbf1268d03] Running
	I1204 20:11:26.905530   27912 system_pods.go:61] "coredns-7c65d6cfc9-8kztf" [40363110-9dbd-47ae-8aec-70630543d005] Running
	I1204 20:11:26.905534   27912 system_pods.go:61] "etcd-ha-739930" [35305e9d-e464-498a-b2a7-6008dcaaf04c] Running
	I1204 20:11:26.905538   27912 system_pods.go:61] "etcd-ha-739930-m02" [b870f77d-f65a-4d00-b8da-27bf2f696d35] Running
	I1204 20:11:26.905541   27912 system_pods.go:61] "etcd-ha-739930-m03" [343495fb-dbd2-4eab-a236-40e2be521a17] Running
	I1204 20:11:26.905545   27912 system_pods.go:61] "kindnet-8wsgw" [d8bc54cd-d100-43fa-bda8-28ee9b58b947] Running
	I1204 20:11:26.905548   27912 system_pods.go:61] "kindnet-d2rvr" [7ab1c96e-13c6-40c3-affc-4a306e695a9b] Running
	I1204 20:11:26.905550   27912 system_pods.go:61] "kindnet-z6v65" [233b2af5-60f4-4f70-a63f-f7238cfbc55c] Running
	I1204 20:11:26.905554   27912 system_pods.go:61] "kube-apiserver-ha-739930" [d1943e08-b292-4551-bcc7-a14adc4ec336] Running
	I1204 20:11:26.905558   27912 system_pods.go:61] "kube-apiserver-ha-739930-m02" [b05a68fa-e419-43b6-ae14-08dd1635b446] Running
	I1204 20:11:26.905564   27912 system_pods.go:61] "kube-apiserver-ha-739930-m03" [eb40f9aa-f4a4-4222-b470-615e8f746fd2] Running
	I1204 20:11:26.905569   27912 system_pods.go:61] "kube-controller-manager-ha-739930" [3db9ec12-4c55-4a78-bef1-4f4cf8f38ae0] Running
	I1204 20:11:26.905574   27912 system_pods.go:61] "kube-controller-manager-ha-739930-m02" [01426d54-9156-4288-b9ae-c639167795b4] Running
	I1204 20:11:26.905579   27912 system_pods.go:61] "kube-controller-manager-ha-739930-m03" [57d1436a-59aa-4883-b1a0-e3f823309e4e] Running
	I1204 20:11:26.905588   27912 system_pods.go:61] "kube-proxy-gtw7d" [4481a753-5064-41a6-8f2c-d4710b8ad7bb] Running
	I1204 20:11:26.905593   27912 system_pods.go:61] "kube-proxy-r4895" [565b2768-8e4b-4659-a178-a99d86163b7c] Running
	I1204 20:11:26.905602   27912 system_pods.go:61] "kube-proxy-tlhfv" [2f01e7f6-5af2-490b-8a2c-266e1701c102] Running
	I1204 20:11:26.905607   27912 system_pods.go:61] "kube-scheduler-ha-739930" [cc1e6978-7082-494a-afce-e754a35e9b76] Running
	I1204 20:11:26.905612   27912 system_pods.go:61] "kube-scheduler-ha-739930-m02" [cd7d0a65-99e9-4377-9088-f2d7d7165982] Running
	I1204 20:11:26.905619   27912 system_pods.go:61] "kube-scheduler-ha-739930-m03" [fbc3feca-5ce1-441e-b3e9-1c47930334da] Running
	I1204 20:11:26.905622   27912 system_pods.go:61] "kube-vip-ha-739930" [524e54ee-5407-44c3-a2e4-d029f7e6a003] Running
	I1204 20:11:26.905626   27912 system_pods.go:61] "kube-vip-ha-739930-m02" [77595bf0-7e49-4ead-98b0-e1cc5b8533d7] Running
	I1204 20:11:26.905630   27912 system_pods.go:61] "kube-vip-ha-739930-m03" [596bee4d-c0d5-499e-9e8f-f4b1322d83b3] Running
	I1204 20:11:26.905634   27912 system_pods.go:61] "storage-provisioner" [84dfb457-b91f-4070-aa2a-9fbe4c6dd7c8] Running
	I1204 20:11:26.905640   27912 system_pods.go:74] duration metric: took 187.469575ms to wait for pod list to return data ...
	I1204 20:11:26.905660   27912 default_sa.go:34] waiting for default service account to be created ...
	I1204 20:11:27.093927   27912 request.go:632] Waited for 188.174644ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/default/serviceaccounts
	I1204 20:11:27.093986   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/default/serviceaccounts
	I1204 20:11:27.093991   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:27.093998   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:27.094011   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:27.097761   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:27.097902   27912 default_sa.go:45] found service account: "default"
	I1204 20:11:27.097922   27912 default_sa.go:55] duration metric: took 192.253848ms for default service account to be created ...
	I1204 20:11:27.097933   27912 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 20:11:27.293645   27912 request.go:632] Waited for 195.638628ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1204 20:11:27.293720   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1204 20:11:27.293727   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:27.293736   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:27.293742   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:27.299871   27912 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1204 20:11:27.306654   27912 system_pods.go:86] 24 kube-system pods found
	I1204 20:11:27.306676   27912 system_pods.go:89] "coredns-7c65d6cfc9-7kbgr" [662019c2-29e8-4437-8b14-f9fbf1268d03] Running
	I1204 20:11:27.306682   27912 system_pods.go:89] "coredns-7c65d6cfc9-8kztf" [40363110-9dbd-47ae-8aec-70630543d005] Running
	I1204 20:11:27.306686   27912 system_pods.go:89] "etcd-ha-739930" [35305e9d-e464-498a-b2a7-6008dcaaf04c] Running
	I1204 20:11:27.306689   27912 system_pods.go:89] "etcd-ha-739930-m02" [b870f77d-f65a-4d00-b8da-27bf2f696d35] Running
	I1204 20:11:27.306692   27912 system_pods.go:89] "etcd-ha-739930-m03" [343495fb-dbd2-4eab-a236-40e2be521a17] Running
	I1204 20:11:27.306696   27912 system_pods.go:89] "kindnet-8wsgw" [d8bc54cd-d100-43fa-bda8-28ee9b58b947] Running
	I1204 20:11:27.306699   27912 system_pods.go:89] "kindnet-d2rvr" [7ab1c96e-13c6-40c3-affc-4a306e695a9b] Running
	I1204 20:11:27.306702   27912 system_pods.go:89] "kindnet-z6v65" [233b2af5-60f4-4f70-a63f-f7238cfbc55c] Running
	I1204 20:11:27.306705   27912 system_pods.go:89] "kube-apiserver-ha-739930" [d1943e08-b292-4551-bcc7-a14adc4ec336] Running
	I1204 20:11:27.306709   27912 system_pods.go:89] "kube-apiserver-ha-739930-m02" [b05a68fa-e419-43b6-ae14-08dd1635b446] Running
	I1204 20:11:27.306714   27912 system_pods.go:89] "kube-apiserver-ha-739930-m03" [eb40f9aa-f4a4-4222-b470-615e8f746fd2] Running
	I1204 20:11:27.306719   27912 system_pods.go:89] "kube-controller-manager-ha-739930" [3db9ec12-4c55-4a78-bef1-4f4cf8f38ae0] Running
	I1204 20:11:27.306724   27912 system_pods.go:89] "kube-controller-manager-ha-739930-m02" [01426d54-9156-4288-b9ae-c639167795b4] Running
	I1204 20:11:27.306733   27912 system_pods.go:89] "kube-controller-manager-ha-739930-m03" [57d1436a-59aa-4883-b1a0-e3f823309e4e] Running
	I1204 20:11:27.306742   27912 system_pods.go:89] "kube-proxy-gtw7d" [4481a753-5064-41a6-8f2c-d4710b8ad7bb] Running
	I1204 20:11:27.306748   27912 system_pods.go:89] "kube-proxy-r4895" [565b2768-8e4b-4659-a178-a99d86163b7c] Running
	I1204 20:11:27.306756   27912 system_pods.go:89] "kube-proxy-tlhfv" [2f01e7f6-5af2-490b-8a2c-266e1701c102] Running
	I1204 20:11:27.306762   27912 system_pods.go:89] "kube-scheduler-ha-739930" [cc1e6978-7082-494a-afce-e754a35e9b76] Running
	I1204 20:11:27.306770   27912 system_pods.go:89] "kube-scheduler-ha-739930-m02" [cd7d0a65-99e9-4377-9088-f2d7d7165982] Running
	I1204 20:11:27.306774   27912 system_pods.go:89] "kube-scheduler-ha-739930-m03" [fbc3feca-5ce1-441e-b3e9-1c47930334da] Running
	I1204 20:11:27.306780   27912 system_pods.go:89] "kube-vip-ha-739930" [524e54ee-5407-44c3-a2e4-d029f7e6a003] Running
	I1204 20:11:27.306784   27912 system_pods.go:89] "kube-vip-ha-739930-m02" [77595bf0-7e49-4ead-98b0-e1cc5b8533d7] Running
	I1204 20:11:27.306787   27912 system_pods.go:89] "kube-vip-ha-739930-m03" [596bee4d-c0d5-499e-9e8f-f4b1322d83b3] Running
	I1204 20:11:27.306790   27912 system_pods.go:89] "storage-provisioner" [84dfb457-b91f-4070-aa2a-9fbe4c6dd7c8] Running
	I1204 20:11:27.306796   27912 system_pods.go:126] duration metric: took 208.857473ms to wait for k8s-apps to be running ...
	I1204 20:11:27.306805   27912 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 20:11:27.306853   27912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 20:11:27.321782   27912 system_svc.go:56] duration metric: took 14.969542ms WaitForService to wait for kubelet
	I1204 20:11:27.321804   27912 kubeadm.go:582] duration metric: took 24.144472529s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 20:11:27.321820   27912 node_conditions.go:102] verifying NodePressure condition ...
	I1204 20:11:27.493192   27912 request.go:632] Waited for 171.286703ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes
	I1204 20:11:27.493250   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes
	I1204 20:11:27.493255   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:27.493262   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:27.493266   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:27.497192   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:27.498227   27912 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 20:11:27.498244   27912 node_conditions.go:123] node cpu capacity is 2
	I1204 20:11:27.498254   27912 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 20:11:27.498259   27912 node_conditions.go:123] node cpu capacity is 2
	I1204 20:11:27.498262   27912 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 20:11:27.498265   27912 node_conditions.go:123] node cpu capacity is 2
	I1204 20:11:27.498269   27912 node_conditions.go:105] duration metric: took 176.444491ms to run NodePressure ...
	I1204 20:11:27.498283   27912 start.go:241] waiting for startup goroutines ...
	I1204 20:11:27.498303   27912 start.go:255] writing updated cluster config ...
	I1204 20:11:27.498580   27912 ssh_runner.go:195] Run: rm -f paused
	I1204 20:11:27.549391   27912 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1204 20:11:27.551427   27912 out.go:177] * Done! kubectl is now configured to use "ha-739930" cluster and "default" namespace by default
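	(Editor's note: the polling seen above — repeated GETs of /api/v1/nodes/ha-739930-m03 until "Ready":"True", then per-pod waits — is the standard readiness loop against the API server. The sketch below is only an illustrative client-go equivalent of that loop, not minikube's actual implementation; the kubeconfig path and the 500ms interval are assumptions inferred from the log timestamps.)

	// Illustrative sketch (assumptions: kubeconfig path, ~500ms poll interval).
	// Mirrors the spirit of the node_ready wait in the log above; not minikube code.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Hypothetical kubeconfig path for the profile under test.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Overall budget comparable to the 6m0s waits reported in the log.
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()

		for {
			// Same object the log polls: the third control-plane node.
			node, err := client.CoreV1().Nodes().Get(ctx, "ha-739930-m03", metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println("node is Ready")
						return
					}
				}
			}
			select {
			case <-ctx.Done():
				fmt.Println("timed out waiting for node to become Ready")
				return
			case <-time.After(500 * time.Millisecond): // assumed poll interval
			}
		}
	}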
	
	
	==> CRI-O <==
	Dec 04 20:15:05 ha-739930 crio[665]: time="2024-12-04 20:15:05.253955108Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f596bba2-1592-4388-bb76-c53a34c247fd name=/runtime.v1.RuntimeService/Version
	Dec 04 20:15:05 ha-739930 crio[665]: time="2024-12-04 20:15:05.255018872Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=475ec8a4-f53c-46a5-a596-368b98f0d047 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 20:15:05 ha-739930 crio[665]: time="2024-12-04 20:15:05.255490289Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343305255464461,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=475ec8a4-f53c-46a5-a596-368b98f0d047 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 20:15:05 ha-739930 crio[665]: time="2024-12-04 20:15:05.256029175Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9197c752-f55e-456e-9e73-8b96bcfb16cc name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:15:05 ha-739930 crio[665]: time="2024-12-04 20:15:05.256095372Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9197c752-f55e-456e-9e73-8b96bcfb16cc name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:15:05 ha-739930 crio[665]: time="2024-12-04 20:15:05.256337085Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c09d55fbc3f943c790def9073b88f01609e4300451bae039e4cd073f0da97f61,PodSandboxId:8470389e19e5b28b50b8fccf3fc3911e02d6a5d228b7739b5d74827a2cda13ad,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733343092537450258,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gg7dr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a1f1ba1f-1720-4b97-a4a1-ab2d0c4cfaa5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92f0436c068d37f00d41a848d30e7457ee048433b86098444bdaf1dac7c4ae50,PodSandboxId:fdd28652924af40713f1cc9921837027bcf2d919bc8a45a3330e7b8e261100e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733342953924941655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7kbgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 662019c2-29e8-4437-8b14-f9fbf1268d03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab16b32e60a7287ff4948151ca59846f512d2a31828295582ecaf061d7dd0cac,PodSandboxId:a639b811aff3be3e7ee462400bb28276bcdce1f970dba591ef29cb5f8ecf55a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733342953880846280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8kztf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
40363110-9dbd-47ae-8aec-70630543d005,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1496ef67bc6f05f97f8da017d26b5ef402354fd4f5cad7354f86ed14b360b13,PodSandboxId:235aa20e54db74e6eee62b6273bd65f067e9293b34b00f86bebbdf24e92c8c12,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733342953787213731,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84dfb457-b91f-4070-aa2a-9fbe4c6dd7c8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f38276fe657c7e64c36f5e7048dd53d1f38f2a70a523fca08ac6aba6639b37e7,PodSandboxId:22f273a6fc170916ed294c18ea089fc5b6007ec66b51d45c95042ab6c43d6a4b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733342941935728144,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8wsgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8bc54cd-d100-43fa-bda8-28ee9b58b947,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8643b775b5352f9000b818ffdccfc9b8d9ce8d3bebf02d3707ef0c598107b627,PodSandboxId:30611e2a6fdccf72efc978dd3ff57b8cb4927095bb0a8cf4b67cc4353243a252,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733342938
754739932,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tlhfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f01e7f6-5af2-490b-8a2c-266e1701c102,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4a22468ef5bdbd7670b4b9d102217e2f59637e4fb99fa6b968fc2f29ad8208b,PodSandboxId:5f8113a27db247d70444c34f598adb4d8920a3f17f8c7f529ee1503205295514,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173334292981
9119605,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e85517d76879ff3f468d156333aefa2d,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:325ac1400e34aa08998a037b7bad43b257bdf9daf9a87fbce57d6eef87a7bef7,PodSandboxId:a0e82c5e83a213c20a332613e67701ddb375a586927ef5e557431138c4f0f2aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733342927393948447,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b071552f9356e83d17c476e03918fe9,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fdab5e7f0c119181d690a0296a5d0d8ba1871661cadaa54b8d022c0a1b668e3,PodSandboxId:83caff9199eb85d80e88c4f8531ac1ec39b66e92e5f3b7f7cb7e960e35c4ea4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733342927337542360,Labels:map[string]string{io.kubernetes.contain
er.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25b5d213282d4e3d0b17f56770f58750,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52571ff875ebe7e2bae93811588ab15bcc178c9e1c0334570224e1b2bd359246,PodSandboxId:91df0913316d5fe6318abd1b00af1f31ce79fcbd082873c64a4aede83b9b139c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733342927317490542,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod
.name: etcd-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af968bcb5bb689c598a55bb96c345514,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2343748d9b3c27471f4dc81bc815b3b7cfa628a41f8708ffaeec870bf0c05f4,PodSandboxId:bccd9e2c068724fdade2d27ef529f8e648d95a17f366b1c7fc771540b909a24c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733342927271139282,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b85df04725e54b66c583c1e4307b02b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9197c752-f55e-456e-9e73-8b96bcfb16cc name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:15:05 ha-739930 crio[665]: time="2024-12-04 20:15:05.297311008Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aa7fd0b0-4949-4727-a0bf-a1e87b4060bf name=/runtime.v1.RuntimeService/Version
	Dec 04 20:15:05 ha-739930 crio[665]: time="2024-12-04 20:15:05.297822531Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aa7fd0b0-4949-4727-a0bf-a1e87b4060bf name=/runtime.v1.RuntimeService/Version
	Dec 04 20:15:05 ha-739930 crio[665]: time="2024-12-04 20:15:05.300488416Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3451a038-dc34-42a2-a8b2-605510cffaca name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 20:15:05 ha-739930 crio[665]: time="2024-12-04 20:15:05.301126568Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343305301099017,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3451a038-dc34-42a2-a8b2-605510cffaca name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 20:15:05 ha-739930 crio[665]: time="2024-12-04 20:15:05.301621881Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7bcc3dbb-841a-4055-99b0-be8fad31db3d name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:15:05 ha-739930 crio[665]: time="2024-12-04 20:15:05.301679668Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7bcc3dbb-841a-4055-99b0-be8fad31db3d name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:15:05 ha-739930 crio[665]: time="2024-12-04 20:15:05.302455023Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c09d55fbc3f943c790def9073b88f01609e4300451bae039e4cd073f0da97f61,PodSandboxId:8470389e19e5b28b50b8fccf3fc3911e02d6a5d228b7739b5d74827a2cda13ad,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733343092537450258,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gg7dr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a1f1ba1f-1720-4b97-a4a1-ab2d0c4cfaa5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92f0436c068d37f00d41a848d30e7457ee048433b86098444bdaf1dac7c4ae50,PodSandboxId:fdd28652924af40713f1cc9921837027bcf2d919bc8a45a3330e7b8e261100e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733342953924941655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7kbgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 662019c2-29e8-4437-8b14-f9fbf1268d03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab16b32e60a7287ff4948151ca59846f512d2a31828295582ecaf061d7dd0cac,PodSandboxId:a639b811aff3be3e7ee462400bb28276bcdce1f970dba591ef29cb5f8ecf55a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733342953880846280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8kztf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
40363110-9dbd-47ae-8aec-70630543d005,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1496ef67bc6f05f97f8da017d26b5ef402354fd4f5cad7354f86ed14b360b13,PodSandboxId:235aa20e54db74e6eee62b6273bd65f067e9293b34b00f86bebbdf24e92c8c12,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733342953787213731,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84dfb457-b91f-4070-aa2a-9fbe4c6dd7c8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f38276fe657c7e64c36f5e7048dd53d1f38f2a70a523fca08ac6aba6639b37e7,PodSandboxId:22f273a6fc170916ed294c18ea089fc5b6007ec66b51d45c95042ab6c43d6a4b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733342941935728144,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8wsgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8bc54cd-d100-43fa-bda8-28ee9b58b947,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8643b775b5352f9000b818ffdccfc9b8d9ce8d3bebf02d3707ef0c598107b627,PodSandboxId:30611e2a6fdccf72efc978dd3ff57b8cb4927095bb0a8cf4b67cc4353243a252,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733342938
754739932,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tlhfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f01e7f6-5af2-490b-8a2c-266e1701c102,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4a22468ef5bdbd7670b4b9d102217e2f59637e4fb99fa6b968fc2f29ad8208b,PodSandboxId:5f8113a27db247d70444c34f598adb4d8920a3f17f8c7f529ee1503205295514,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173334292981
9119605,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e85517d76879ff3f468d156333aefa2d,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:325ac1400e34aa08998a037b7bad43b257bdf9daf9a87fbce57d6eef87a7bef7,PodSandboxId:a0e82c5e83a213c20a332613e67701ddb375a586927ef5e557431138c4f0f2aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733342927393948447,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b071552f9356e83d17c476e03918fe9,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fdab5e7f0c119181d690a0296a5d0d8ba1871661cadaa54b8d022c0a1b668e3,PodSandboxId:83caff9199eb85d80e88c4f8531ac1ec39b66e92e5f3b7f7cb7e960e35c4ea4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733342927337542360,Labels:map[string]string{io.kubernetes.contain
er.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25b5d213282d4e3d0b17f56770f58750,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52571ff875ebe7e2bae93811588ab15bcc178c9e1c0334570224e1b2bd359246,PodSandboxId:91df0913316d5fe6318abd1b00af1f31ce79fcbd082873c64a4aede83b9b139c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733342927317490542,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod
.name: etcd-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af968bcb5bb689c598a55bb96c345514,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2343748d9b3c27471f4dc81bc815b3b7cfa628a41f8708ffaeec870bf0c05f4,PodSandboxId:bccd9e2c068724fdade2d27ef529f8e648d95a17f366b1c7fc771540b909a24c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733342927271139282,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b85df04725e54b66c583c1e4307b02b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7bcc3dbb-841a-4055-99b0-be8fad31db3d name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:15:05 ha-739930 crio[665]: time="2024-12-04 20:15:05.334723088Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7df92286-3af6-4463-b129-e9763282214d name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 04 20:15:05 ha-739930 crio[665]: time="2024-12-04 20:15:05.335061090Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:8470389e19e5b28b50b8fccf3fc3911e02d6a5d228b7739b5d74827a2cda13ad,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-gg7dr,Uid:a1f1ba1f-1720-4b97-a4a1-ab2d0c4cfaa5,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733343090316440136,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-gg7dr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a1f1ba1f-1720-4b97-a4a1-ab2d0c4cfaa5,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-04T20:11:30.007360253Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fdd28652924af40713f1cc9921837027bcf2d919bc8a45a3330e7b8e261100e0,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-7kbgr,Uid:662019c2-29e8-4437-8b14-f9fbf1268d03,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1733342953665400371,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-7kbgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 662019c2-29e8-4437-8b14-f9fbf1268d03,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-04T20:09:13.338371531Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:235aa20e54db74e6eee62b6273bd65f067e9293b34b00f86bebbdf24e92c8c12,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:84dfb457-b91f-4070-aa2a-9fbe4c6dd7c8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733342953640357411,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84dfb457-b91f-4070-aa2a-9fbe4c6dd7c8,},Annotations:map[string]string{kubec
tl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-12-04T20:09:13.329853410Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a639b811aff3be3e7ee462400bb28276bcdce1f970dba591ef29cb5f8ecf55a1,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-8kztf,Uid:40363110-9dbd-47ae-8aec-70630543d005,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1733342953633096498,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-8kztf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40363110-9dbd-47ae-8aec-70630543d005,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-04T20:09:13.323030749Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:30611e2a6fdccf72efc978dd3ff57b8cb4927095bb0a8cf4b67cc4353243a252,Metadata:&PodSandboxMetadata{Name:kube-proxy-tlhfv,Uid:2f01e7f6-5af2-490b-8a2c-266e1701c102,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733342938439250528,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-tlhfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f01e7f6-5af2-490b-8a2c-266e1701c102,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-12-04T20:08:58.105472306Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:22f273a6fc170916ed294c18ea089fc5b6007ec66b51d45c95042ab6c43d6a4b,Metadata:&PodSandboxMetadata{Name:kindnet-8wsgw,Uid:d8bc54cd-d100-43fa-bda8-28ee9b58b947,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733342938432349447,Labels:map[string]string{app: kindnet,controller-revision-hash: 65ddb8b87b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-8wsgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8bc54cd-d100-43fa-bda8-28ee9b58b947,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-04T20:08:58.114654576Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a0e82c5e83a213c20a332613e67701ddb375a586927ef5e557431138c4f0f2aa,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-739930,Uid:7b071552f9356e83d17c476e03918fe9,Namespace:kube-system,Attempt:0
,},State:SANDBOX_READY,CreatedAt:1733342927121955135,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b071552f9356e83d17c476e03918fe9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7b071552f9356e83d17c476e03918fe9,kubernetes.io/config.seen: 2024-12-04T20:08:46.449587612Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:83caff9199eb85d80e88c4f8531ac1ec39b66e92e5f3b7f7cb7e960e35c4ea4e,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-739930,Uid:25b5d213282d4e3d0b17f56770f58750,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733342927118400062,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25b5d213282d4e3d0b17f56770f58750,tier: control-plane,},Ann
otations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.183:8443,kubernetes.io/config.hash: 25b5d213282d4e3d0b17f56770f58750,kubernetes.io/config.seen: 2024-12-04T20:08:46.449584935Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:91df0913316d5fe6318abd1b00af1f31ce79fcbd082873c64a4aede83b9b139c,Metadata:&PodSandboxMetadata{Name:etcd-ha-739930,Uid:af968bcb5bb689c598a55bb96c345514,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733342927112996937,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af968bcb5bb689c598a55bb96c345514,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.183:2379,kubernetes.io/config.hash: af968bcb5bb689c598a55bb96c345514,kubernetes.io/config.seen: 2024-12-04T20:08:46.449583484Z,kubernetes.io/config.source: fi
le,},RuntimeHandler:,},&PodSandbox{Id:5f8113a27db247d70444c34f598adb4d8920a3f17f8c7f529ee1503205295514,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-739930,Uid:e85517d76879ff3f468d156333aefa2d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733342927109606752,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e85517d76879ff3f468d156333aefa2d,},Annotations:map[string]string{kubernetes.io/config.hash: e85517d76879ff3f468d156333aefa2d,kubernetes.io/config.seen: 2024-12-04T20:08:46.449579027Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bccd9e2c068724fdade2d27ef529f8e648d95a17f366b1c7fc771540b909a24c,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-739930,Uid:9b85df04725e54b66c583c1e4307b02b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733342927098594466,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.c
ontainer.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b85df04725e54b66c583c1e4307b02b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9b85df04725e54b66c583c1e4307b02b,kubernetes.io/config.seen: 2024-12-04T20:08:46.449586187Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=7df92286-3af6-4463-b129-e9763282214d name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 04 20:15:05 ha-739930 crio[665]: time="2024-12-04 20:15:05.335937489Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d1b8f8bc-42f7-4204-9020-69f47ded7d35 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:15:05 ha-739930 crio[665]: time="2024-12-04 20:15:05.335993557Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d1b8f8bc-42f7-4204-9020-69f47ded7d35 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:15:05 ha-739930 crio[665]: time="2024-12-04 20:15:05.336219128Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c09d55fbc3f943c790def9073b88f01609e4300451bae039e4cd073f0da97f61,PodSandboxId:8470389e19e5b28b50b8fccf3fc3911e02d6a5d228b7739b5d74827a2cda13ad,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733343092537450258,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gg7dr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a1f1ba1f-1720-4b97-a4a1-ab2d0c4cfaa5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92f0436c068d37f00d41a848d30e7457ee048433b86098444bdaf1dac7c4ae50,PodSandboxId:fdd28652924af40713f1cc9921837027bcf2d919bc8a45a3330e7b8e261100e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733342953924941655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7kbgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 662019c2-29e8-4437-8b14-f9fbf1268d03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab16b32e60a7287ff4948151ca59846f512d2a31828295582ecaf061d7dd0cac,PodSandboxId:a639b811aff3be3e7ee462400bb28276bcdce1f970dba591ef29cb5f8ecf55a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733342953880846280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8kztf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
40363110-9dbd-47ae-8aec-70630543d005,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1496ef67bc6f05f97f8da017d26b5ef402354fd4f5cad7354f86ed14b360b13,PodSandboxId:235aa20e54db74e6eee62b6273bd65f067e9293b34b00f86bebbdf24e92c8c12,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733342953787213731,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84dfb457-b91f-4070-aa2a-9fbe4c6dd7c8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f38276fe657c7e64c36f5e7048dd53d1f38f2a70a523fca08ac6aba6639b37e7,PodSandboxId:22f273a6fc170916ed294c18ea089fc5b6007ec66b51d45c95042ab6c43d6a4b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733342941935728144,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8wsgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8bc54cd-d100-43fa-bda8-28ee9b58b947,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8643b775b5352f9000b818ffdccfc9b8d9ce8d3bebf02d3707ef0c598107b627,PodSandboxId:30611e2a6fdccf72efc978dd3ff57b8cb4927095bb0a8cf4b67cc4353243a252,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733342938
754739932,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tlhfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f01e7f6-5af2-490b-8a2c-266e1701c102,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4a22468ef5bdbd7670b4b9d102217e2f59637e4fb99fa6b968fc2f29ad8208b,PodSandboxId:5f8113a27db247d70444c34f598adb4d8920a3f17f8c7f529ee1503205295514,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173334292981
9119605,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e85517d76879ff3f468d156333aefa2d,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:325ac1400e34aa08998a037b7bad43b257bdf9daf9a87fbce57d6eef87a7bef7,PodSandboxId:a0e82c5e83a213c20a332613e67701ddb375a586927ef5e557431138c4f0f2aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733342927393948447,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b071552f9356e83d17c476e03918fe9,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fdab5e7f0c119181d690a0296a5d0d8ba1871661cadaa54b8d022c0a1b668e3,PodSandboxId:83caff9199eb85d80e88c4f8531ac1ec39b66e92e5f3b7f7cb7e960e35c4ea4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733342927337542360,Labels:map[string]string{io.kubernetes.contain
er.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25b5d213282d4e3d0b17f56770f58750,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52571ff875ebe7e2bae93811588ab15bcc178c9e1c0334570224e1b2bd359246,PodSandboxId:91df0913316d5fe6318abd1b00af1f31ce79fcbd082873c64a4aede83b9b139c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733342927317490542,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod
.name: etcd-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af968bcb5bb689c598a55bb96c345514,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2343748d9b3c27471f4dc81bc815b3b7cfa628a41f8708ffaeec870bf0c05f4,PodSandboxId:bccd9e2c068724fdade2d27ef529f8e648d95a17f366b1c7fc771540b909a24c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733342927271139282,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b85df04725e54b66c583c1e4307b02b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d1b8f8bc-42f7-4204-9020-69f47ded7d35 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:15:05 ha-739930 crio[665]: time="2024-12-04 20:15:05.346884644Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=754bf6db-c035-42cf-8e8b-a30ec717cb31 name=/runtime.v1.RuntimeService/Version
	Dec 04 20:15:05 ha-739930 crio[665]: time="2024-12-04 20:15:05.346947072Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=754bf6db-c035-42cf-8e8b-a30ec717cb31 name=/runtime.v1.RuntimeService/Version
	Dec 04 20:15:05 ha-739930 crio[665]: time="2024-12-04 20:15:05.348507577Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1f3fc9c2-6fc0-4d23-84fb-5d353e1a262c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 20:15:05 ha-739930 crio[665]: time="2024-12-04 20:15:05.349113043Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343305349090842,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1f3fc9c2-6fc0-4d23-84fb-5d353e1a262c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 20:15:05 ha-739930 crio[665]: time="2024-12-04 20:15:05.349580161Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0b63a5c7-87f7-4d1e-b771-cb044c9532fb name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:15:05 ha-739930 crio[665]: time="2024-12-04 20:15:05.349649604Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0b63a5c7-87f7-4d1e-b771-cb044c9532fb name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:15:05 ha-739930 crio[665]: time="2024-12-04 20:15:05.349920534Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c09d55fbc3f943c790def9073b88f01609e4300451bae039e4cd073f0da97f61,PodSandboxId:8470389e19e5b28b50b8fccf3fc3911e02d6a5d228b7739b5d74827a2cda13ad,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733343092537450258,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gg7dr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a1f1ba1f-1720-4b97-a4a1-ab2d0c4cfaa5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92f0436c068d37f00d41a848d30e7457ee048433b86098444bdaf1dac7c4ae50,PodSandboxId:fdd28652924af40713f1cc9921837027bcf2d919bc8a45a3330e7b8e261100e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733342953924941655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7kbgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 662019c2-29e8-4437-8b14-f9fbf1268d03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab16b32e60a7287ff4948151ca59846f512d2a31828295582ecaf061d7dd0cac,PodSandboxId:a639b811aff3be3e7ee462400bb28276bcdce1f970dba591ef29cb5f8ecf55a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733342953880846280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8kztf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
40363110-9dbd-47ae-8aec-70630543d005,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1496ef67bc6f05f97f8da017d26b5ef402354fd4f5cad7354f86ed14b360b13,PodSandboxId:235aa20e54db74e6eee62b6273bd65f067e9293b34b00f86bebbdf24e92c8c12,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733342953787213731,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84dfb457-b91f-4070-aa2a-9fbe4c6dd7c8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f38276fe657c7e64c36f5e7048dd53d1f38f2a70a523fca08ac6aba6639b37e7,PodSandboxId:22f273a6fc170916ed294c18ea089fc5b6007ec66b51d45c95042ab6c43d6a4b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733342941935728144,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8wsgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8bc54cd-d100-43fa-bda8-28ee9b58b947,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8643b775b5352f9000b818ffdccfc9b8d9ce8d3bebf02d3707ef0c598107b627,PodSandboxId:30611e2a6fdccf72efc978dd3ff57b8cb4927095bb0a8cf4b67cc4353243a252,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733342938
754739932,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tlhfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f01e7f6-5af2-490b-8a2c-266e1701c102,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4a22468ef5bdbd7670b4b9d102217e2f59637e4fb99fa6b968fc2f29ad8208b,PodSandboxId:5f8113a27db247d70444c34f598adb4d8920a3f17f8c7f529ee1503205295514,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173334292981
9119605,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e85517d76879ff3f468d156333aefa2d,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:325ac1400e34aa08998a037b7bad43b257bdf9daf9a87fbce57d6eef87a7bef7,PodSandboxId:a0e82c5e83a213c20a332613e67701ddb375a586927ef5e557431138c4f0f2aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733342927393948447,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b071552f9356e83d17c476e03918fe9,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fdab5e7f0c119181d690a0296a5d0d8ba1871661cadaa54b8d022c0a1b668e3,PodSandboxId:83caff9199eb85d80e88c4f8531ac1ec39b66e92e5f3b7f7cb7e960e35c4ea4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733342927337542360,Labels:map[string]string{io.kubernetes.contain
er.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25b5d213282d4e3d0b17f56770f58750,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52571ff875ebe7e2bae93811588ab15bcc178c9e1c0334570224e1b2bd359246,PodSandboxId:91df0913316d5fe6318abd1b00af1f31ce79fcbd082873c64a4aede83b9b139c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733342927317490542,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod
.name: etcd-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af968bcb5bb689c598a55bb96c345514,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2343748d9b3c27471f4dc81bc815b3b7cfa628a41f8708ffaeec870bf0c05f4,PodSandboxId:bccd9e2c068724fdade2d27ef529f8e648d95a17f366b1c7fc771540b909a24c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733342927271139282,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b85df04725e54b66c583c1e4307b02b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0b63a5c7-87f7-4d1e-b771-cb044c9532fb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c09d55fbc3f94       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   8470389e19e5b       busybox-7dff88458-gg7dr
	92f0436c068d3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   fdd28652924af       coredns-7c65d6cfc9-7kbgr
	ab16b32e60a72       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   a639b811aff3b       coredns-7c65d6cfc9-8kztf
	a1496ef67bc6f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   235aa20e54db7       storage-provisioner
	f38276fe657c7       docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16    6 minutes ago       Running             kindnet-cni               0                   22f273a6fc170       kindnet-8wsgw
	8643b775b5352       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   30611e2a6fdcc       kube-proxy-tlhfv
	b4a22468ef5bd       ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e     6 minutes ago       Running             kube-vip                  0                   5f8113a27db24       kube-vip-ha-739930
	325ac1400e34a       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   a0e82c5e83a21       kube-scheduler-ha-739930
	1fdab5e7f0c11       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   83caff9199eb8       kube-apiserver-ha-739930
	52571ff875ebe       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   91df0913316d5       etcd-ha-739930
	c2343748d9b3c       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   bccd9e2c06872       kube-controller-manager-ha-739930
	
	
	==> coredns [92f0436c068d37f00d41a848d30e7457ee048433b86098444bdaf1dac7c4ae50] <==
	[INFO] 10.244.1.2:60420 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.0000998s
	[INFO] 10.244.2.2:43602 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198643s
	[INFO] 10.244.2.2:55688 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004203463s
	[INFO] 10.244.2.2:58147 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00017975s
	[INFO] 10.244.0.4:34390 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142716s
	[INFO] 10.244.0.4:33345 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000126491s
	[INFO] 10.244.1.2:52771 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001534902s
	[INFO] 10.244.1.2:50377 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155393s
	[INFO] 10.244.1.2:57617 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000204758s
	[INFO] 10.244.1.2:33315 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000087548s
	[INFO] 10.244.1.2:43721 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000138913s
	[INFO] 10.244.2.2:36167 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128945s
	[INFO] 10.244.2.2:39846 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000141449s
	[INFO] 10.244.0.4:49972 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079931s
	[INFO] 10.244.0.4:54249 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000163883s
	[INFO] 10.244.1.2:50096 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116516s
	[INFO] 10.244.1.2:45073 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000132387s
	[INFO] 10.244.2.2:49399 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000153554s
	[INFO] 10.244.2.2:59645 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000182375s
	[INFO] 10.244.0.4:58720 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128913s
	[INFO] 10.244.0.4:43247 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00014397s
	[INFO] 10.244.0.4:41555 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000088414s
	[INFO] 10.244.0.4:43722 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000065939s
	[INFO] 10.244.1.2:45770 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000102411s
	[INFO] 10.244.1.2:50474 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000112012s
	
	
	==> coredns [ab16b32e60a7287ff4948151ca59846f512d2a31828295582ecaf061d7dd0cac] <==
	[INFO] 10.244.1.2:40314 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002016375s
	[INFO] 10.244.2.2:49280 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000323723s
	[INFO] 10.244.2.2:39711 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000206446s
	[INFO] 10.244.2.2:58438 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003929293s
	[INFO] 10.244.2.2:51399 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000159908s
	[INFO] 10.244.2.2:39775 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000142713s
	[INFO] 10.244.0.4:59240 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001795102s
	[INFO] 10.244.0.4:58038 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000108734s
	[INFO] 10.244.0.4:54479 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000222678s
	[INFO] 10.244.0.4:48445 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001109511s
	[INFO] 10.244.0.4:56707 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000120069s
	[INFO] 10.244.0.4:44194 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082627s
	[INFO] 10.244.1.2:36003 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139108s
	[INFO] 10.244.1.2:48175 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001090843s
	[INFO] 10.244.1.2:54736 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072028s
	[INFO] 10.244.2.2:41244 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110768s
	[INFO] 10.244.2.2:58717 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088169s
	[INFO] 10.244.0.4:52576 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000161976s
	[INFO] 10.244.0.4:50935 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010896s
	[INFO] 10.244.1.2:40433 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160052s
	[INFO] 10.244.1.2:48574 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094093s
	[INFO] 10.244.2.2:40890 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131379s
	[INFO] 10.244.2.2:49685 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000289898s
	[INFO] 10.244.1.2:59160 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000148396s
	[INFO] 10.244.1.2:49691 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000140675s
	
	
	==> describe nodes <==
	Name:               ha-739930
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-739930
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59
	                    minikube.k8s.io/name=ha-739930
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_04T20_08_54_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 20:08:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-739930
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 20:15:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 20:11:56 +0000   Wed, 04 Dec 2024 20:08:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 20:11:56 +0000   Wed, 04 Dec 2024 20:08:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 20:11:56 +0000   Wed, 04 Dec 2024 20:08:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 20:11:56 +0000   Wed, 04 Dec 2024 20:09:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.183
	  Hostname:    ha-739930
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4a862467bfb34c3ba59a1a6944c8e8ad
	  System UUID:                4a862467-bfb3-4c3b-a59a-1a6944c8e8ad
	  Boot ID:                    88a12a5a-b072-479a-8944-b6767cbdf4f7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gg7dr              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 coredns-7c65d6cfc9-7kbgr             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m7s
	  kube-system                 coredns-7c65d6cfc9-8kztf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m7s
	  kube-system                 etcd-ha-739930                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m12s
	  kube-system                 kindnet-8wsgw                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m7s
	  kube-system                 kube-apiserver-ha-739930             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 kube-controller-manager-ha-739930    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 kube-proxy-tlhfv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 kube-scheduler-ha-739930             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 kube-vip-ha-739930                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m6s   kube-proxy       
	  Normal  Starting                 6m12s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m12s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m12s  kubelet          Node ha-739930 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m12s  kubelet          Node ha-739930 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m12s  kubelet          Node ha-739930 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m8s   node-controller  Node ha-739930 event: Registered Node ha-739930 in Controller
	  Normal  NodeReady                5m52s  kubelet          Node ha-739930 status is now: NodeReady
	  Normal  RegisteredNode           5m12s  node-controller  Node ha-739930 event: Registered Node ha-739930 in Controller
	  Normal  RegisteredNode           3m57s  node-controller  Node ha-739930 event: Registered Node ha-739930 in Controller
	
	
	Name:               ha-739930-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-739930-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59
	                    minikube.k8s.io/name=ha-739930
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_04T20_09_48_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 20:09:46 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-739930-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 20:12:39 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 04 Dec 2024 20:11:48 +0000   Wed, 04 Dec 2024 20:13:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 04 Dec 2024 20:11:48 +0000   Wed, 04 Dec 2024 20:13:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 04 Dec 2024 20:11:48 +0000   Wed, 04 Dec 2024 20:13:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 04 Dec 2024 20:11:48 +0000   Wed, 04 Dec 2024 20:13:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.216
	  Hostname:    ha-739930-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 309500ff1508404f8337a542897e4a63
	  System UUID:                309500ff-1508-404f-8337-a542897e4a63
	  Boot ID:                    abc62bfe-1148-4265-a781-5ad8762ade09
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-kx56q                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 etcd-ha-739930-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m17s
	  kube-system                 kindnet-z6v65                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m19s
	  kube-system                 kube-apiserver-ha-739930-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-controller-manager-ha-739930-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 kube-proxy-gtw7d                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-scheduler-ha-739930-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-vip-ha-739930-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m14s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m19s (x8 over 5m19s)  kubelet          Node ha-739930-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m19s (x8 over 5m19s)  kubelet          Node ha-739930-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m19s (x7 over 5m19s)  kubelet          Node ha-739930-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m18s                  node-controller  Node ha-739930-m02 event: Registered Node ha-739930-m02 in Controller
	  Normal  RegisteredNode           5m12s                  node-controller  Node ha-739930-m02 event: Registered Node ha-739930-m02 in Controller
	  Normal  RegisteredNode           3m57s                  node-controller  Node ha-739930-m02 event: Registered Node ha-739930-m02 in Controller
	  Normal  NodeNotReady             103s                   node-controller  Node ha-739930-m02 status is now: NodeNotReady
	
	
	Name:               ha-739930-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-739930-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59
	                    minikube.k8s.io/name=ha-739930
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_04T20_11_02_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 20:11:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-739930-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 20:15:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 20:12:01 +0000   Wed, 04 Dec 2024 20:11:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 20:12:01 +0000   Wed, 04 Dec 2024 20:11:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 20:12:01 +0000   Wed, 04 Dec 2024 20:11:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 20:12:01 +0000   Wed, 04 Dec 2024 20:11:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.176
	  Hostname:    ha-739930-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7eddf849e101457c8f603f9f7bb068e3
	  System UUID:                7eddf849-e101-457c-8f60-3f9f7bb068e3
	  Boot ID:                    94b82cc0-8208-45bb-85df-9fba3000dbef
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-9pz7p                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 etcd-ha-739930-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m3s
	  kube-system                 kindnet-d2rvr                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m5s
	  kube-system                 kube-apiserver-ha-739930-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-controller-manager-ha-739930-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-proxy-r4895                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-scheduler-ha-739930-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-vip-ha-739930-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m1s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  4m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m5s (x8 over 4m6s)  kubelet          Node ha-739930-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m5s (x8 over 4m6s)  kubelet          Node ha-739930-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m5s (x7 over 4m6s)  kubelet          Node ha-739930-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m3s                 node-controller  Node ha-739930-m03 event: Registered Node ha-739930-m03 in Controller
	  Normal  RegisteredNode           4m1s                 node-controller  Node ha-739930-m03 event: Registered Node ha-739930-m03 in Controller
	  Normal  RegisteredNode           3m57s                node-controller  Node ha-739930-m03 event: Registered Node ha-739930-m03 in Controller
	
	
	Name:               ha-739930-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-739930-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59
	                    minikube.k8s.io/name=ha-739930
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_04T20_12_05_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 20:12:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-739930-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 20:14:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 20:12:35 +0000   Wed, 04 Dec 2024 20:12:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 20:12:35 +0000   Wed, 04 Dec 2024 20:12:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 20:12:35 +0000   Wed, 04 Dec 2024 20:12:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 20:12:35 +0000   Wed, 04 Dec 2024 20:12:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.230
	  Hostname:    ha-739930-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 caea6c34853a432f8606c2c81d5d7e80
	  System UUID:                caea6c34-853a-432f-8606-c2c81d5d7e80
	  Boot ID:                    64cbf16d-0924-4d4e-bb2e-e3fb57ad6cf8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2l856       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m
	  kube-system                 kube-proxy-2dnzj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m55s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  3m1s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m (x2 over 3m1s)      kubelet          Node ha-739930-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m (x2 over 3m1s)      kubelet          Node ha-739930-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m (x2 over 3m1s)      kubelet          Node ha-739930-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m58s                  node-controller  Node ha-739930-m04 event: Registered Node ha-739930-m04 in Controller
	  Normal  RegisteredNode           2m57s                  node-controller  Node ha-739930-m04 event: Registered Node ha-739930-m04 in Controller
	  Normal  RegisteredNode           2m56s                  node-controller  Node ha-739930-m04 event: Registered Node ha-739930-m04 in Controller
	  Normal  NodeReady                2m40s (x2 over 2m40s)  kubelet          Node ha-739930-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec 4 20:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053379] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038376] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.818831] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.961468] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +4.569504] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000011] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.583210] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.060308] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060487] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.188680] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.114168] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.247975] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +3.760825] systemd-fstab-generator[750]: Ignoring "noauto" option for root device
	[  +4.102978] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.066053] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.507773] systemd-fstab-generator[1298]: Ignoring "noauto" option for root device
	[  +0.085425] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.435723] kauditd_printk_skb: 21 callbacks suppressed
	[Dec 4 20:09] kauditd_printk_skb: 38 callbacks suppressed
	[ +38.420810] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [52571ff875ebe7e2bae93811588ab15bcc178c9e1c0334570224e1b2bd359246] <==
	{"level":"warn","ts":"2024-12-04T20:15:05.608986Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:05.617123Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:05.625197Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:05.629523Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:05.644999Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:05.655321Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:05.662462Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:05.666241Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:05.669852Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:05.676839Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:05.686053Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:05.690463Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:05.695853Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:05.700430Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:05.704201Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:05.708839Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:05.715535Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:05.725738Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:05.732563Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:05.737155Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:05.740639Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:05.745362Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:05.751533Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:05.757996Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:05.809286Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 20:15:05 up 6 min,  0 users,  load average: 0.47, 0.28, 0.12
	Linux ha-739930 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [f38276fe657c7e64c36f5e7048dd53d1f38f2a70a523fca08ac6aba6639b37e7] <==
	I1204 20:14:32.871845       1 main.go:324] Node ha-739930-m04 has CIDR [10.244.3.0/24] 
	I1204 20:14:42.875921       1 main.go:297] Handling node with IPs: map[192.168.39.183:{}]
	I1204 20:14:42.876108       1 main.go:301] handling current node
	I1204 20:14:42.876141       1 main.go:297] Handling node with IPs: map[192.168.39.216:{}]
	I1204 20:14:42.876164       1 main.go:324] Node ha-739930-m02 has CIDR [10.244.1.0/24] 
	I1204 20:14:42.876442       1 main.go:297] Handling node with IPs: map[192.168.39.176:{}]
	I1204 20:14:42.876476       1 main.go:324] Node ha-739930-m03 has CIDR [10.244.2.0/24] 
	I1204 20:14:42.876608       1 main.go:297] Handling node with IPs: map[192.168.39.230:{}]
	I1204 20:14:42.876632       1 main.go:324] Node ha-739930-m04 has CIDR [10.244.3.0/24] 
	I1204 20:14:52.876836       1 main.go:297] Handling node with IPs: map[192.168.39.183:{}]
	I1204 20:14:52.876889       1 main.go:301] handling current node
	I1204 20:14:52.876924       1 main.go:297] Handling node with IPs: map[192.168.39.216:{}]
	I1204 20:14:52.876933       1 main.go:324] Node ha-739930-m02 has CIDR [10.244.1.0/24] 
	I1204 20:14:52.877263       1 main.go:297] Handling node with IPs: map[192.168.39.176:{}]
	I1204 20:14:52.877287       1 main.go:324] Node ha-739930-m03 has CIDR [10.244.2.0/24] 
	I1204 20:14:52.877494       1 main.go:297] Handling node with IPs: map[192.168.39.230:{}]
	I1204 20:14:52.877511       1 main.go:324] Node ha-739930-m04 has CIDR [10.244.3.0/24] 
	I1204 20:15:02.869044       1 main.go:297] Handling node with IPs: map[192.168.39.183:{}]
	I1204 20:15:02.869284       1 main.go:301] handling current node
	I1204 20:15:02.869336       1 main.go:297] Handling node with IPs: map[192.168.39.216:{}]
	I1204 20:15:02.869343       1 main.go:324] Node ha-739930-m02 has CIDR [10.244.1.0/24] 
	I1204 20:15:02.869633       1 main.go:297] Handling node with IPs: map[192.168.39.176:{}]
	I1204 20:15:02.869654       1 main.go:324] Node ha-739930-m03 has CIDR [10.244.2.0/24] 
	I1204 20:15:02.869898       1 main.go:297] Handling node with IPs: map[192.168.39.230:{}]
	I1204 20:15:02.869919       1 main.go:324] Node ha-739930-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [1fdab5e7f0c119181d690a0296a5d0d8ba1871661cadaa54b8d022c0a1b668e3] <==
	I1204 20:08:52.109573       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1204 20:08:52.115869       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.183]
	I1204 20:08:52.116893       1 controller.go:615] quota admission added evaluator for: endpoints
	I1204 20:08:52.120949       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1204 20:08:52.319935       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1204 20:08:53.401361       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1204 20:08:53.418287       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1204 20:08:53.427159       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1204 20:08:57.975080       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1204 20:08:58.071170       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1204 20:11:33.595040       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51898: use of closed network connection
	E1204 20:11:33.787246       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51926: use of closed network connection
	E1204 20:11:33.961220       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51944: use of closed network connection
	E1204 20:11:34.139353       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51958: use of closed network connection
	E1204 20:11:34.492487       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51978: use of closed network connection
	E1204 20:11:34.660669       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51994: use of closed network connection
	E1204 20:11:34.825641       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52014: use of closed network connection
	E1204 20:11:35.000850       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52034: use of closed network connection
	E1204 20:11:35.295050       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52074: use of closed network connection
	E1204 20:11:35.467188       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52090: use of closed network connection
	E1204 20:11:35.632176       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52096: use of closed network connection
	E1204 20:11:35.802340       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52124: use of closed network connection
	E1204 20:11:35.976054       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52130: use of closed network connection
	E1204 20:11:36.156331       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52148: use of closed network connection
	W1204 20:13:02.138009       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.176 192.168.39.183]
	
	
	==> kube-controller-manager [c2343748d9b3c27471f4dc81bc815b3b7cfa628a41f8708ffaeec870bf0c05f4] <==
	I1204 20:12:05.098063       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-739930-m04" podCIDRs=["10.244.3.0/24"]
	I1204 20:12:05.098353       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:05.099501       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:05.129202       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:05.212844       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:05.605704       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:07.219432       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-739930-m04"
	I1204 20:12:07.250173       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:08.816441       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:09.034862       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:09.114294       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:09.193601       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:15.131792       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:25.187809       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-739930-m04"
	I1204 20:12:25.187897       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:25.200602       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:27.234376       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:35.291257       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:13:22.261174       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-739930-m04"
	I1204 20:13:22.262013       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m02"
	I1204 20:13:22.294239       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m02"
	I1204 20:13:22.349815       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="26.422518ms"
	I1204 20:13:22.353121       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="53.184µs"
	I1204 20:13:23.918547       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m02"
	I1204 20:13:27.468391       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m02"
	
	
	==> kube-proxy [8643b775b5352f9000b818ffdccfc9b8d9ce8d3bebf02d3707ef0c598107b627] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1204 20:08:59.055359       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1204 20:08:59.074919       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.183"]
	E1204 20:08:59.075054       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1204 20:08:59.106971       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1204 20:08:59.107053       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1204 20:08:59.107091       1 server_linux.go:169] "Using iptables Proxier"
	I1204 20:08:59.110117       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1204 20:08:59.110853       1 server.go:483] "Version info" version="v1.31.2"
	I1204 20:08:59.110911       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 20:08:59.113929       1 config.go:328] "Starting node config controller"
	I1204 20:08:59.113988       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1204 20:08:59.114597       1 config.go:199] "Starting service config controller"
	I1204 20:08:59.114621       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1204 20:08:59.114931       1 config.go:105] "Starting endpoint slice config controller"
	I1204 20:08:59.114959       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1204 20:08:59.214563       1 shared_informer.go:320] Caches are synced for node config
	I1204 20:08:59.215004       1 shared_informer.go:320] Caches are synced for service config
	I1204 20:08:59.216196       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [325ac1400e34aa08998a037b7bad43b257bdf9daf9a87fbce57d6eef87a7bef7] <==
	E1204 20:08:51.687075       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 20:08:51.698835       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1204 20:08:51.698950       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1204 20:08:51.756911       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1204 20:08:51.757061       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 20:08:51.761020       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1204 20:08:51.761159       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1204 20:08:54.377656       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1204 20:11:28.468555       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="e79c51d4-80e5-490b-906e-e376195d820e" pod="default/busybox-7dff88458-4zmkp" assumedNode="ha-739930-m02" currentNode="ha-739930-m03"
	E1204 20:11:28.510519       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-4zmkp\": pod busybox-7dff88458-4zmkp is already assigned to node \"ha-739930-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-4zmkp" node="ha-739930-m03"
	E1204 20:11:28.510990       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e79c51d4-80e5-490b-906e-e376195d820e(default/busybox-7dff88458-4zmkp) was assumed on ha-739930-m03 but assigned to ha-739930-m02" pod="default/busybox-7dff88458-4zmkp"
	E1204 20:11:28.511176       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-4zmkp\": pod busybox-7dff88458-4zmkp is already assigned to node \"ha-739930-m02\"" pod="default/busybox-7dff88458-4zmkp"
	I1204 20:11:28.511316       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-4zmkp" node="ha-739930-m02"
	I1204 20:11:28.544933       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="5411c4b8-6cb8-493d-8ce1-adcf557c68bc" pod="default/busybox-7dff88458-b94b5" assumedNode="ha-739930" currentNode="ha-739930-m03"
	E1204 20:11:28.557489       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-b94b5\": pod busybox-7dff88458-b94b5 is already assigned to node \"ha-739930\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-b94b5" node="ha-739930-m03"
	E1204 20:11:28.557560       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 5411c4b8-6cb8-493d-8ce1-adcf557c68bc(default/busybox-7dff88458-b94b5) was assumed on ha-739930-m03 but assigned to ha-739930" pod="default/busybox-7dff88458-b94b5"
	E1204 20:11:28.557587       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-b94b5\": pod busybox-7dff88458-b94b5 is already assigned to node \"ha-739930\"" pod="default/busybox-7dff88458-b94b5"
	I1204 20:11:28.557614       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-b94b5" node="ha-739930"
	E1204 20:11:30.014314       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-gg7dr\": pod busybox-7dff88458-gg7dr is already assigned to node \"ha-739930\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-gg7dr" node="ha-739930"
	E1204 20:11:30.014481       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a1f1ba1f-1720-4b97-a4a1-ab2d0c4cfaa5(default/busybox-7dff88458-gg7dr) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-gg7dr"
	E1204 20:11:30.015337       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-gg7dr\": pod busybox-7dff88458-gg7dr is already assigned to node \"ha-739930\"" pod="default/busybox-7dff88458-gg7dr"
	I1204 20:11:30.015401       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-gg7dr" node="ha-739930"
	E1204 20:12:05.139969       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-kswc6\": pod kindnet-kswc6 is already assigned to node \"ha-739930-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-kswc6" node="ha-739930-m04"
	E1204 20:12:05.140096       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-kswc6\": pod kindnet-kswc6 is already assigned to node \"ha-739930-m04\"" pod="kube-system/kindnet-kswc6"
	I1204 20:12:05.140125       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-kswc6" node="ha-739930-m04"
	
	
	==> kubelet <==
	Dec 04 20:13:53 ha-739930 kubelet[1305]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 04 20:13:53 ha-739930 kubelet[1305]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 04 20:13:53 ha-739930 kubelet[1305]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 04 20:13:53 ha-739930 kubelet[1305]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 04 20:13:53 ha-739930 kubelet[1305]: E1204 20:13:53.462332    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343233462001754,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:13:53 ha-739930 kubelet[1305]: E1204 20:13:53.462375    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343233462001754,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:03 ha-739930 kubelet[1305]: E1204 20:14:03.465094    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343243464625528,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:03 ha-739930 kubelet[1305]: E1204 20:14:03.465133    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343243464625528,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:13 ha-739930 kubelet[1305]: E1204 20:14:13.466702    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343253466412207,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:13 ha-739930 kubelet[1305]: E1204 20:14:13.467091    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343253466412207,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:23 ha-739930 kubelet[1305]: E1204 20:14:23.469001    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343263468683209,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:23 ha-739930 kubelet[1305]: E1204 20:14:23.469280    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343263468683209,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:33 ha-739930 kubelet[1305]: E1204 20:14:33.471311    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343273470919351,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:33 ha-739930 kubelet[1305]: E1204 20:14:33.471582    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343273470919351,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:43 ha-739930 kubelet[1305]: E1204 20:14:43.473913    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343283473338293,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:43 ha-739930 kubelet[1305]: E1204 20:14:43.474005    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343283473338293,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:53 ha-739930 kubelet[1305]: E1204 20:14:53.358128    1305 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 04 20:14:53 ha-739930 kubelet[1305]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 04 20:14:53 ha-739930 kubelet[1305]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 04 20:14:53 ha-739930 kubelet[1305]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 04 20:14:53 ha-739930 kubelet[1305]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 04 20:14:53 ha-739930 kubelet[1305]: E1204 20:14:53.476132    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343293475734296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:53 ha-739930 kubelet[1305]: E1204 20:14:53.476169    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343293475734296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:15:03 ha-739930 kubelet[1305]: E1204 20:15:03.477995    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343303477421901,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:15:03 ha-739930 kubelet[1305]: E1204 20:15:03.478354    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343303477421901,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-739930 -n ha-739930
helpers_test.go:261: (dbg) Run:  kubectl --context ha-739930 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.33s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1204 20:15:10.135714   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/functional-763517/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:392: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.390540711s)
ha_test.go:415: expected profile "ha-739930" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-739930\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-739930\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-739930\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.183\",\"Port\":8443,\"Kube
rnetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.216\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.176\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.230\",\"Port\":0,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubev
irt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker
\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-739930 -n ha-739930
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-739930 logs -n 25: (1.376287562s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-739930 cp ha-739930-m03:/home/docker/cp-test.txt                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1344431772/001/cp-test_ha-739930-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-739930 cp ha-739930-m03:/home/docker/cp-test.txt                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930:/home/docker/cp-test_ha-739930-m03_ha-739930.txt                       |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n ha-739930 sudo cat                                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | /home/docker/cp-test_ha-739930-m03_ha-739930.txt                                 |           |         |         |                     |                     |
	| cp      | ha-739930 cp ha-739930-m03:/home/docker/cp-test.txt                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m02:/home/docker/cp-test_ha-739930-m03_ha-739930-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n ha-739930-m02 sudo cat                                          | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | /home/docker/cp-test_ha-739930-m03_ha-739930-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-739930 cp ha-739930-m03:/home/docker/cp-test.txt                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m04:/home/docker/cp-test_ha-739930-m03_ha-739930-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n ha-739930-m04 sudo cat                                          | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | /home/docker/cp-test_ha-739930-m03_ha-739930-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-739930 cp testdata/cp-test.txt                                                | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-739930 cp ha-739930-m04:/home/docker/cp-test.txt                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1344431772/001/cp-test_ha-739930-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-739930 cp ha-739930-m04:/home/docker/cp-test.txt                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930:/home/docker/cp-test_ha-739930-m04_ha-739930.txt                       |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n ha-739930 sudo cat                                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | /home/docker/cp-test_ha-739930-m04_ha-739930.txt                                 |           |         |         |                     |                     |
	| cp      | ha-739930 cp ha-739930-m04:/home/docker/cp-test.txt                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m02:/home/docker/cp-test_ha-739930-m04_ha-739930-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n ha-739930-m02 sudo cat                                          | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | /home/docker/cp-test_ha-739930-m04_ha-739930-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-739930 cp ha-739930-m04:/home/docker/cp-test.txt                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m03:/home/docker/cp-test_ha-739930-m04_ha-739930-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n ha-739930-m03 sudo cat                                          | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | /home/docker/cp-test_ha-739930-m04_ha-739930-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-739930 node stop m02 -v=7                                                     | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 20:08:11
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 20:08:11.939431   27912 out.go:345] Setting OutFile to fd 1 ...
	I1204 20:08:11.939545   27912 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 20:08:11.939555   27912 out.go:358] Setting ErrFile to fd 2...
	I1204 20:08:11.939562   27912 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 20:08:11.939744   27912 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19985-10581/.minikube/bin
	I1204 20:08:11.940314   27912 out.go:352] Setting JSON to false
	I1204 20:08:11.941189   27912 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3042,"bootTime":1733339850,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 20:08:11.941293   27912 start.go:139] virtualization: kvm guest
	I1204 20:08:11.944336   27912 out.go:177] * [ha-739930] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1204 20:08:11.945852   27912 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 20:08:11.945847   27912 notify.go:220] Checking for updates...
	I1204 20:08:11.948662   27912 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 20:08:11.950105   27912 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 20:08:11.951395   27912 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 20:08:11.952616   27912 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1204 20:08:11.953838   27912 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 20:08:11.955060   27912 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 20:08:11.990494   27912 out.go:177] * Using the kvm2 driver based on user configuration
	I1204 20:08:11.991825   27912 start.go:297] selected driver: kvm2
	I1204 20:08:11.991844   27912 start.go:901] validating driver "kvm2" against <nil>
	I1204 20:08:11.991856   27912 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 20:08:11.992661   27912 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 20:08:11.992744   27912 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19985-10581/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1204 20:08:12.008005   27912 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1204 20:08:12.008178   27912 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 20:08:12.008532   27912 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 20:08:12.008571   27912 cni.go:84] Creating CNI manager for ""
	I1204 20:08:12.008627   27912 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1204 20:08:12.008639   27912 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1204 20:08:12.008710   27912 start.go:340] cluster config:
	{Name:ha-739930 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1204 20:08:12.008840   27912 iso.go:125] acquiring lock: {Name:mk5fb0f3f6da76e6cd812291a551e1592ef2c232 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 20:08:12.010621   27912 out.go:177] * Starting "ha-739930" primary control-plane node in "ha-739930" cluster
	I1204 20:08:12.011905   27912 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 20:08:12.011946   27912 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1204 20:08:12.011958   27912 cache.go:56] Caching tarball of preloaded images
	I1204 20:08:12.012045   27912 preload.go:172] Found /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 20:08:12.012061   27912 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 20:08:12.012439   27912 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/config.json ...
	I1204 20:08:12.012463   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/config.json: {Name:mk7402f769abcec1c18cda99e23fa60ffac7b3dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:08:12.012602   27912 start.go:360] acquireMachinesLock for ha-739930: {Name:mkf124e8b45170ae95981b24944344de6899c5b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 20:08:12.012630   27912 start.go:364] duration metric: took 16.073µs to acquireMachinesLock for "ha-739930"
	I1204 20:08:12.012648   27912 start.go:93] Provisioning new machine with config: &{Name:ha-739930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 20:08:12.012705   27912 start.go:125] createHost starting for "" (driver="kvm2")
	I1204 20:08:12.014265   27912 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 20:08:12.014396   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:08:12.014435   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:08:12.028697   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39229
	I1204 20:08:12.029103   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:08:12.029651   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:08:12.029671   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:08:12.029950   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:08:12.030110   27912 main.go:141] libmachine: (ha-739930) Calling .GetMachineName
	I1204 20:08:12.030242   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:08:12.030391   27912 start.go:159] libmachine.API.Create for "ha-739930" (driver="kvm2")
	I1204 20:08:12.030413   27912 client.go:168] LocalClient.Create starting
	I1204 20:08:12.030437   27912 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem
	I1204 20:08:12.030469   27912 main.go:141] libmachine: Decoding PEM data...
	I1204 20:08:12.030485   27912 main.go:141] libmachine: Parsing certificate...
	I1204 20:08:12.030532   27912 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem
	I1204 20:08:12.030550   27912 main.go:141] libmachine: Decoding PEM data...
	I1204 20:08:12.030563   27912 main.go:141] libmachine: Parsing certificate...
	I1204 20:08:12.030580   27912 main.go:141] libmachine: Running pre-create checks...
	I1204 20:08:12.030594   27912 main.go:141] libmachine: (ha-739930) Calling .PreCreateCheck
	I1204 20:08:12.030896   27912 main.go:141] libmachine: (ha-739930) Calling .GetConfigRaw
	I1204 20:08:12.031303   27912 main.go:141] libmachine: Creating machine...
	I1204 20:08:12.031315   27912 main.go:141] libmachine: (ha-739930) Calling .Create
	I1204 20:08:12.031447   27912 main.go:141] libmachine: (ha-739930) Creating KVM machine...
	I1204 20:08:12.032790   27912 main.go:141] libmachine: (ha-739930) DBG | found existing default KVM network
	I1204 20:08:12.033408   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:12.033271   27935 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b70}
	I1204 20:08:12.033431   27912 main.go:141] libmachine: (ha-739930) DBG | created network xml: 
	I1204 20:08:12.033442   27912 main.go:141] libmachine: (ha-739930) DBG | <network>
	I1204 20:08:12.033450   27912 main.go:141] libmachine: (ha-739930) DBG |   <name>mk-ha-739930</name>
	I1204 20:08:12.033465   27912 main.go:141] libmachine: (ha-739930) DBG |   <dns enable='no'/>
	I1204 20:08:12.033475   27912 main.go:141] libmachine: (ha-739930) DBG |   
	I1204 20:08:12.033484   27912 main.go:141] libmachine: (ha-739930) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1204 20:08:12.033497   27912 main.go:141] libmachine: (ha-739930) DBG |     <dhcp>
	I1204 20:08:12.033526   27912 main.go:141] libmachine: (ha-739930) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1204 20:08:12.033560   27912 main.go:141] libmachine: (ha-739930) DBG |     </dhcp>
	I1204 20:08:12.033571   27912 main.go:141] libmachine: (ha-739930) DBG |   </ip>
	I1204 20:08:12.033582   27912 main.go:141] libmachine: (ha-739930) DBG |   
	I1204 20:08:12.033602   27912 main.go:141] libmachine: (ha-739930) DBG | </network>
	I1204 20:08:12.033619   27912 main.go:141] libmachine: (ha-739930) DBG | 
	I1204 20:08:12.038715   27912 main.go:141] libmachine: (ha-739930) DBG | trying to create private KVM network mk-ha-739930 192.168.39.0/24...
	I1204 20:08:12.104228   27912 main.go:141] libmachine: (ha-739930) Setting up store path in /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930 ...
	I1204 20:08:12.104263   27912 main.go:141] libmachine: (ha-739930) Building disk image from file:///home/jenkins/minikube-integration/19985-10581/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1204 20:08:12.104273   27912 main.go:141] libmachine: (ha-739930) DBG | private KVM network mk-ha-739930 192.168.39.0/24 created
	I1204 20:08:12.104290   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:12.104148   27935 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 20:08:12.104318   27912 main.go:141] libmachine: (ha-739930) Downloading /home/jenkins/minikube-integration/19985-10581/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19985-10581/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1204 20:08:12.357869   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:12.357760   27935 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa...
	I1204 20:08:12.476934   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:12.476798   27935 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/ha-739930.rawdisk...
	I1204 20:08:12.476961   27912 main.go:141] libmachine: (ha-739930) DBG | Writing magic tar header
	I1204 20:08:12.476973   27912 main.go:141] libmachine: (ha-739930) DBG | Writing SSH key tar header
	I1204 20:08:12.476980   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:12.476911   27935 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930 ...
	I1204 20:08:12.476989   27912 main.go:141] libmachine: (ha-739930) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930
	I1204 20:08:12.477071   27912 main.go:141] libmachine: (ha-739930) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube/machines
	I1204 20:08:12.477126   27912 main.go:141] libmachine: (ha-739930) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930 (perms=drwx------)
	I1204 20:08:12.477140   27912 main.go:141] libmachine: (ha-739930) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 20:08:12.477159   27912 main.go:141] libmachine: (ha-739930) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581
	I1204 20:08:12.477173   27912 main.go:141] libmachine: (ha-739930) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1204 20:08:12.477183   27912 main.go:141] libmachine: (ha-739930) DBG | Checking permissions on dir: /home/jenkins
	I1204 20:08:12.477188   27912 main.go:141] libmachine: (ha-739930) DBG | Checking permissions on dir: /home
	I1204 20:08:12.477199   27912 main.go:141] libmachine: (ha-739930) DBG | Skipping /home - not owner
	I1204 20:08:12.477241   27912 main.go:141] libmachine: (ha-739930) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube/machines (perms=drwxr-xr-x)
	I1204 20:08:12.477265   27912 main.go:141] libmachine: (ha-739930) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube (perms=drwxr-xr-x)
	I1204 20:08:12.477280   27912 main.go:141] libmachine: (ha-739930) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581 (perms=drwxrwxr-x)
	I1204 20:08:12.477294   27912 main.go:141] libmachine: (ha-739930) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1204 20:08:12.477311   27912 main.go:141] libmachine: (ha-739930) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1204 20:08:12.477322   27912 main.go:141] libmachine: (ha-739930) Creating domain...
	I1204 20:08:12.478077   27912 main.go:141] libmachine: (ha-739930) define libvirt domain using xml: 
	I1204 20:08:12.478098   27912 main.go:141] libmachine: (ha-739930) <domain type='kvm'>
	I1204 20:08:12.478108   27912 main.go:141] libmachine: (ha-739930)   <name>ha-739930</name>
	I1204 20:08:12.478120   27912 main.go:141] libmachine: (ha-739930)   <memory unit='MiB'>2200</memory>
	I1204 20:08:12.478128   27912 main.go:141] libmachine: (ha-739930)   <vcpu>2</vcpu>
	I1204 20:08:12.478137   27912 main.go:141] libmachine: (ha-739930)   <features>
	I1204 20:08:12.478144   27912 main.go:141] libmachine: (ha-739930)     <acpi/>
	I1204 20:08:12.478153   27912 main.go:141] libmachine: (ha-739930)     <apic/>
	I1204 20:08:12.478159   27912 main.go:141] libmachine: (ha-739930)     <pae/>
	I1204 20:08:12.478166   27912 main.go:141] libmachine: (ha-739930)     
	I1204 20:08:12.478176   27912 main.go:141] libmachine: (ha-739930)   </features>
	I1204 20:08:12.478183   27912 main.go:141] libmachine: (ha-739930)   <cpu mode='host-passthrough'>
	I1204 20:08:12.478254   27912 main.go:141] libmachine: (ha-739930)   
	I1204 20:08:12.478278   27912 main.go:141] libmachine: (ha-739930)   </cpu>
	I1204 20:08:12.478290   27912 main.go:141] libmachine: (ha-739930)   <os>
	I1204 20:08:12.478313   27912 main.go:141] libmachine: (ha-739930)     <type>hvm</type>
	I1204 20:08:12.478326   27912 main.go:141] libmachine: (ha-739930)     <boot dev='cdrom'/>
	I1204 20:08:12.478335   27912 main.go:141] libmachine: (ha-739930)     <boot dev='hd'/>
	I1204 20:08:12.478344   27912 main.go:141] libmachine: (ha-739930)     <bootmenu enable='no'/>
	I1204 20:08:12.478354   27912 main.go:141] libmachine: (ha-739930)   </os>
	I1204 20:08:12.478361   27912 main.go:141] libmachine: (ha-739930)   <devices>
	I1204 20:08:12.478371   27912 main.go:141] libmachine: (ha-739930)     <disk type='file' device='cdrom'>
	I1204 20:08:12.478384   27912 main.go:141] libmachine: (ha-739930)       <source file='/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/boot2docker.iso'/>
	I1204 20:08:12.478394   27912 main.go:141] libmachine: (ha-739930)       <target dev='hdc' bus='scsi'/>
	I1204 20:08:12.478401   27912 main.go:141] libmachine: (ha-739930)       <readonly/>
	I1204 20:08:12.478416   27912 main.go:141] libmachine: (ha-739930)     </disk>
	I1204 20:08:12.478430   27912 main.go:141] libmachine: (ha-739930)     <disk type='file' device='disk'>
	I1204 20:08:12.478442   27912 main.go:141] libmachine: (ha-739930)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1204 20:08:12.478457   27912 main.go:141] libmachine: (ha-739930)       <source file='/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/ha-739930.rawdisk'/>
	I1204 20:08:12.478467   27912 main.go:141] libmachine: (ha-739930)       <target dev='hda' bus='virtio'/>
	I1204 20:08:12.478475   27912 main.go:141] libmachine: (ha-739930)     </disk>
	I1204 20:08:12.478490   27912 main.go:141] libmachine: (ha-739930)     <interface type='network'>
	I1204 20:08:12.478503   27912 main.go:141] libmachine: (ha-739930)       <source network='mk-ha-739930'/>
	I1204 20:08:12.478512   27912 main.go:141] libmachine: (ha-739930)       <model type='virtio'/>
	I1204 20:08:12.478520   27912 main.go:141] libmachine: (ha-739930)     </interface>
	I1204 20:08:12.478530   27912 main.go:141] libmachine: (ha-739930)     <interface type='network'>
	I1204 20:08:12.478542   27912 main.go:141] libmachine: (ha-739930)       <source network='default'/>
	I1204 20:08:12.478552   27912 main.go:141] libmachine: (ha-739930)       <model type='virtio'/>
	I1204 20:08:12.478599   27912 main.go:141] libmachine: (ha-739930)     </interface>
	I1204 20:08:12.478617   27912 main.go:141] libmachine: (ha-739930)     <serial type='pty'>
	I1204 20:08:12.478622   27912 main.go:141] libmachine: (ha-739930)       <target port='0'/>
	I1204 20:08:12.478628   27912 main.go:141] libmachine: (ha-739930)     </serial>
	I1204 20:08:12.478636   27912 main.go:141] libmachine: (ha-739930)     <console type='pty'>
	I1204 20:08:12.478641   27912 main.go:141] libmachine: (ha-739930)       <target type='serial' port='0'/>
	I1204 20:08:12.478650   27912 main.go:141] libmachine: (ha-739930)     </console>
	I1204 20:08:12.478654   27912 main.go:141] libmachine: (ha-739930)     <rng model='virtio'>
	I1204 20:08:12.478660   27912 main.go:141] libmachine: (ha-739930)       <backend model='random'>/dev/random</backend>
	I1204 20:08:12.478666   27912 main.go:141] libmachine: (ha-739930)     </rng>
	I1204 20:08:12.478671   27912 main.go:141] libmachine: (ha-739930)     
	I1204 20:08:12.478674   27912 main.go:141] libmachine: (ha-739930)     
	I1204 20:08:12.478679   27912 main.go:141] libmachine: (ha-739930)   </devices>
	I1204 20:08:12.478685   27912 main.go:141] libmachine: (ha-739930) </domain>
	I1204 20:08:12.478691   27912 main.go:141] libmachine: (ha-739930) 
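	The XML above is the generated libvirt domain definition that the kvm2 driver registers and boots ("define libvirt domain using xml" followed by "Creating domain..."). As a rough, hedged illustration only (this is not minikube's driver code, and the libvirt.org/go/libvirt import path is an assumption), defining and starting such a domain with the libvirt Go bindings looks roughly like this:

	package main

	import (
		"fmt"
		"log"

		libvirt "libvirt.org/go/libvirt" // assumed import path for the libvirt Go bindings
	)

	// defineAndStart registers a domain from XML and boots it, mirroring the
	// "define libvirt domain using xml" and "Creating domain..." steps above.
	func defineAndStart(domainXML string) error {
		conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the profile config
		if err != nil {
			return fmt.Errorf("connect: %w", err)
		}
		defer conn.Close()

		dom, err := conn.DomainDefineXML(domainXML) // register the domain without starting it
		if err != nil {
			return fmt.Errorf("define: %w", err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil { // boot the defined domain
			return fmt.Errorf("start: %w", err)
		}
		return nil
	}

	func main() {
		// The real XML is the <domain type='kvm'>...</domain> document logged above.
		if err := defineAndStart("<domain type='kvm'>...</domain>"); err != nil {
			log.Fatal(err)
		}
	}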
	I1204 20:08:12.482962   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:1f:34:29 in network default
	I1204 20:08:12.483451   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:12.483468   27912 main.go:141] libmachine: (ha-739930) Ensuring networks are active...
	I1204 20:08:12.484073   27912 main.go:141] libmachine: (ha-739930) Ensuring network default is active
	I1204 20:08:12.484443   27912 main.go:141] libmachine: (ha-739930) Ensuring network mk-ha-739930 is active
	I1204 20:08:12.485051   27912 main.go:141] libmachine: (ha-739930) Getting domain xml...
	I1204 20:08:12.485709   27912 main.go:141] libmachine: (ha-739930) Creating domain...
	I1204 20:08:13.663232   27912 main.go:141] libmachine: (ha-739930) Waiting to get IP...
	I1204 20:08:13.663928   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:13.664244   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:13.664289   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:13.664239   27935 retry.go:31] will retry after 311.107761ms: waiting for machine to come up
	I1204 20:08:13.976518   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:13.976875   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:13.976897   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:13.976832   27935 retry.go:31] will retry after 302.848525ms: waiting for machine to come up
	I1204 20:08:14.281431   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:14.281818   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:14.281846   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:14.281773   27935 retry.go:31] will retry after 460.768304ms: waiting for machine to come up
	I1204 20:08:14.744364   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:14.744813   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:14.744835   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:14.744754   27935 retry.go:31] will retry after 399.590847ms: waiting for machine to come up
	I1204 20:08:15.146387   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:15.146887   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:15.146911   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:15.146850   27935 retry.go:31] will retry after 733.547268ms: waiting for machine to come up
	I1204 20:08:15.882052   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:15.882481   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:15.882509   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:15.882450   27935 retry.go:31] will retry after 598.816129ms: waiting for machine to come up
	I1204 20:08:16.483323   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:16.483724   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:16.483766   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:16.483669   27935 retry.go:31] will retry after 816.886511ms: waiting for machine to come up
	I1204 20:08:17.302385   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:17.302850   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:17.303157   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:17.303086   27935 retry.go:31] will retry after 1.092347228s: waiting for machine to come up
	I1204 20:08:18.397513   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:18.397955   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:18.397979   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:18.397908   27935 retry.go:31] will retry after 1.349280463s: waiting for machine to come up
	I1204 20:08:19.748591   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:19.749086   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:19.749107   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:19.749051   27935 retry.go:31] will retry after 1.929176971s: waiting for machine to come up
	I1204 20:08:21.681322   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:21.681787   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:21.681821   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:21.681719   27935 retry.go:31] will retry after 2.034104658s: waiting for machine to come up
	I1204 20:08:23.717496   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:23.717880   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:23.717910   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:23.717836   27935 retry.go:31] will retry after 2.982891394s: waiting for machine to come up
	I1204 20:08:26.703937   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:26.704406   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:26.704442   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:26.704358   27935 retry.go:31] will retry after 2.968408416s: waiting for machine to come up
	I1204 20:08:29.675768   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:29.676304   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:29.676332   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:29.676260   27935 retry.go:31] will retry after 5.520024319s: waiting for machine to come up
	I1204 20:08:35.199569   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.200041   27912 main.go:141] libmachine: (ha-739930) Found IP for machine: 192.168.39.183
	I1204 20:08:35.200065   27912 main.go:141] libmachine: (ha-739930) Reserving static IP address...
	I1204 20:08:35.200092   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has current primary IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.200437   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find host DHCP lease matching {name: "ha-739930", mac: "52:54:00:b9:91:f7", ip: "192.168.39.183"} in network mk-ha-739930
	I1204 20:08:35.268817   27912 main.go:141] libmachine: (ha-739930) Reserved static IP address: 192.168.39.183
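	The repeated "will retry after ...: waiting for machine to come up" lines above come from a backoff loop that polls the network's DHCP leases until the new domain has an address. A minimal sketch of that pattern follows; it is not the actual retry.go implementation, and the starting delay, growth factor and jitter are assumptions chosen to resemble the logged 311ms..5.5s intervals.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls lookup with growing, jittered delays until it returns an
	// address or the timeout expires.
	func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 300 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookup(); err == nil && ip != "" {
				return ip, nil
			}
			sleep := delay + time.Duration(rand.Int63n(int64(delay/2))) // add jitter
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			if delay < 5*time.Second { // grow the delay up to a cap
				delay = delay * 3 / 2
			}
		}
		return "", errors.New("timed out waiting for machine IP")
	}

	func main() {
		attempts := 0
		ip, err := waitForIP(func() (string, error) {
			attempts++
			if attempts < 4 {
				return "", errors.New("no DHCP lease yet")
			}
			return "192.168.39.183", nil
		}, 30*time.Second)
		fmt.Println(ip, err)
	}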
	I1204 20:08:35.268847   27912 main.go:141] libmachine: (ha-739930) Waiting for SSH to be available...
	I1204 20:08:35.268856   27912 main.go:141] libmachine: (ha-739930) DBG | Getting to WaitForSSH function...
	I1204 20:08:35.271480   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.271869   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:35.271895   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.271987   27912 main.go:141] libmachine: (ha-739930) DBG | Using SSH client type: external
	I1204 20:08:35.272004   27912 main.go:141] libmachine: (ha-739930) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa (-rw-------)
	I1204 20:08:35.272069   27912 main.go:141] libmachine: (ha-739930) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.183 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 20:08:35.272087   27912 main.go:141] libmachine: (ha-739930) DBG | About to run SSH command:
	I1204 20:08:35.272103   27912 main.go:141] libmachine: (ha-739930) DBG | exit 0
	I1204 20:08:35.395351   27912 main.go:141] libmachine: (ha-739930) DBG | SSH cmd err, output: <nil>: 
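	The WaitForSSH probe above runs "exit 0" through the system ssh binary with the flags shown in the log and treats a zero exit status as "SSH is available". A self-contained sketch of the same readiness check (flags, key path and address copied from the logged command line; this is an illustration, not the driver's code):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "LogLevel=quiet",
			"-o", "PasswordAuthentication=no",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", "/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa",
			"-p", "22",
			"docker@192.168.39.183",
			"exit 0",
		}
		// A zero exit status means sshd is reachable and key auth works.
		if err := exec.Command("ssh", args...).Run(); err != nil {
			log.Fatalf("SSH not available yet: %v", err)
		}
		log.Println("SSH is available")
	}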
	I1204 20:08:35.395650   27912 main.go:141] libmachine: (ha-739930) KVM machine creation complete!
	I1204 20:08:35.395986   27912 main.go:141] libmachine: (ha-739930) Calling .GetConfigRaw
	I1204 20:08:35.396534   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:08:35.396731   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:08:35.396857   27912 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1204 20:08:35.396871   27912 main.go:141] libmachine: (ha-739930) Calling .GetState
	I1204 20:08:35.398039   27912 main.go:141] libmachine: Detecting operating system of created instance...
	I1204 20:08:35.398051   27912 main.go:141] libmachine: Waiting for SSH to be available...
	I1204 20:08:35.398055   27912 main.go:141] libmachine: Getting to WaitForSSH function...
	I1204 20:08:35.398060   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:35.400170   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.400525   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:35.400571   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.400650   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:35.400812   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:35.400979   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:35.401117   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:35.401289   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:08:35.401492   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1204 20:08:35.401507   27912 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1204 20:08:35.502303   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 20:08:35.502340   27912 main.go:141] libmachine: Detecting the provisioner...
	I1204 20:08:35.502352   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:35.504752   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.505142   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:35.505165   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.505360   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:35.505545   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:35.505676   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:35.505789   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:35.505915   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:08:35.506073   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1204 20:08:35.506082   27912 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1204 20:08:35.608173   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1204 20:08:35.608233   27912 main.go:141] libmachine: found compatible host: buildroot
	I1204 20:08:35.608240   27912 main.go:141] libmachine: Provisioning with buildroot...
	I1204 20:08:35.608247   27912 main.go:141] libmachine: (ha-739930) Calling .GetMachineName
	I1204 20:08:35.608464   27912 buildroot.go:166] provisioning hostname "ha-739930"
	I1204 20:08:35.608480   27912 main.go:141] libmachine: (ha-739930) Calling .GetMachineName
	I1204 20:08:35.608679   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:35.611354   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.611746   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:35.611772   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.611904   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:35.612062   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:35.612200   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:35.612312   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:35.612460   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:08:35.612630   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1204 20:08:35.612642   27912 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-739930 && echo "ha-739930" | sudo tee /etc/hostname
	I1204 20:08:35.730422   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-739930
	
	I1204 20:08:35.730456   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:35.732817   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.733139   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:35.733168   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.733310   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:35.733480   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:35.733651   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:35.733802   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:35.733983   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:08:35.734154   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1204 20:08:35.734171   27912 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-739930' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-739930/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-739930' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 20:08:35.843780   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 20:08:35.843821   27912 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 20:08:35.843865   27912 buildroot.go:174] setting up certificates
	I1204 20:08:35.843880   27912 provision.go:84] configureAuth start
	I1204 20:08:35.843894   27912 main.go:141] libmachine: (ha-739930) Calling .GetMachineName
	I1204 20:08:35.844232   27912 main.go:141] libmachine: (ha-739930) Calling .GetIP
	I1204 20:08:35.847046   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.847366   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:35.847411   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.847570   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:35.849830   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.850112   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:35.850131   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.850320   27912 provision.go:143] copyHostCerts
	I1204 20:08:35.850348   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 20:08:35.850382   27912 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 20:08:35.850391   27912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 20:08:35.850460   27912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 20:08:35.850567   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 20:08:35.850595   27912 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 20:08:35.850604   27912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 20:08:35.850645   27912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 20:08:35.850723   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 20:08:35.850741   27912 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 20:08:35.850748   27912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 20:08:35.850772   27912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 20:08:35.850823   27912 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.ha-739930 san=[127.0.0.1 192.168.39.183 ha-739930 localhost minikube]
	I1204 20:08:35.983720   27912 provision.go:177] copyRemoteCerts
	I1204 20:08:35.983786   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 20:08:35.983810   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:35.986241   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.986583   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:35.986614   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.986772   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:35.986960   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:35.987093   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:35.987240   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:08:36.068879   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1204 20:08:36.068950   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1204 20:08:36.091202   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1204 20:08:36.091259   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 20:08:36.112918   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1204 20:08:36.112998   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 20:08:36.134856   27912 provision.go:87] duration metric: took 290.963844ms to configureAuth
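	The configureAuth step above copies the CA material to the host and generates a server certificate with SANs [127.0.0.1 192.168.39.183 ha-739930 localhost minikube] before scp'ing server.pem, server-key.pem and ca.pem into /etc/docker. A hedged, self-contained sketch of issuing such a certificate with Go's crypto/x509 follows; key sizes, validity and serial numbers are assumptions, and the real flow loads the existing ca.pem/ca-key.pem rather than creating a throwaway CA.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func must(err error) {
		if err != nil {
			panic(err)
		}
	}

	func main() {
		// Throwaway CA so the sketch is self-contained.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		must(err)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(3, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		must(err)
		caCert, err := x509.ParseCertificate(caDER)
		must(err)

		// Server certificate with the SANs listed in the log above.
		srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
		must(err)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-739930"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.183")},
			DNSNames:     []string{"ha-739930", "localhost", "minikube"},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		must(err)
		must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
	}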
	I1204 20:08:36.134887   27912 buildroot.go:189] setting minikube options for container-runtime
	I1204 20:08:36.135063   27912 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:08:36.135153   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:36.137760   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.138113   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:36.138138   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.138342   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:36.138505   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:36.138658   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:36.138779   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:36.138924   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:08:36.139114   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1204 20:08:36.139131   27912 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 20:08:36.346218   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 20:08:36.346255   27912 main.go:141] libmachine: Checking connection to Docker...
	I1204 20:08:36.346283   27912 main.go:141] libmachine: (ha-739930) Calling .GetURL
	I1204 20:08:36.347448   27912 main.go:141] libmachine: (ha-739930) DBG | Using libvirt version 6000000
	I1204 20:08:36.349418   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.349723   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:36.349742   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.349920   27912 main.go:141] libmachine: Docker is up and running!
	I1204 20:08:36.349936   27912 main.go:141] libmachine: Reticulating splines...
	I1204 20:08:36.349943   27912 client.go:171] duration metric: took 24.3195237s to LocalClient.Create
	I1204 20:08:36.349963   27912 start.go:167] duration metric: took 24.319574814s to libmachine.API.Create "ha-739930"
	I1204 20:08:36.349976   27912 start.go:293] postStartSetup for "ha-739930" (driver="kvm2")
	I1204 20:08:36.349991   27912 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 20:08:36.350013   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:08:36.350205   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 20:08:36.350228   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:36.351979   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.352286   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:36.352313   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.352437   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:36.352594   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:36.352706   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:36.352816   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:08:36.432460   27912 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 20:08:36.436012   27912 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 20:08:36.436028   27912 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 20:08:36.436089   27912 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 20:08:36.436188   27912 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 20:08:36.436201   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> /etc/ssl/certs/177432.pem
	I1204 20:08:36.436304   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 20:08:36.444678   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 20:08:36.467397   27912 start.go:296] duration metric: took 117.407014ms for postStartSetup
	I1204 20:08:36.467437   27912 main.go:141] libmachine: (ha-739930) Calling .GetConfigRaw
	I1204 20:08:36.467977   27912 main.go:141] libmachine: (ha-739930) Calling .GetIP
	I1204 20:08:36.470186   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.470558   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:36.470586   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.470798   27912 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/config.json ...
	I1204 20:08:36.470974   27912 start.go:128] duration metric: took 24.458260215s to createHost
	I1204 20:08:36.470996   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:36.472973   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.473263   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:36.473284   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.473418   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:36.473574   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:36.473716   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:36.473887   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:36.474035   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:08:36.474202   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1204 20:08:36.474217   27912 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 20:08:36.575008   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733342916.551867748
	
	I1204 20:08:36.575023   27912 fix.go:216] guest clock: 1733342916.551867748
	I1204 20:08:36.575030   27912 fix.go:229] Guest: 2024-12-04 20:08:36.551867748 +0000 UTC Remote: 2024-12-04 20:08:36.470986638 +0000 UTC m=+24.568358011 (delta=80.88111ms)
	I1204 20:08:36.575056   27912 fix.go:200] guest clock delta is within tolerance: 80.88111ms
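	The fix.go lines above run `date +%s.%N` on the guest, compare the result with the host-side timestamp, and accept the machine because the ~80ms delta is within tolerance. A small sketch of that comparison, using the values from the log (the tolerance constant here is an assumption, not minikube's actual threshold):

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	// clockDelta parses the guest's `date +%s.%N` output and returns guest minus host.
	func clockDelta(guestDateOutput string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(guestDateOutput, 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(host), nil
	}

	func main() {
		// Host-side timestamp and guest clock string taken from the log above.
		host := time.Date(2024, 12, 4, 20, 8, 36, 470986638, time.UTC)
		delta, err := clockDelta("1733342916.551867748", host)
		if err != nil {
			panic(err)
		}
		if delta < 0 {
			delta = -delta
		}
		const tolerance = time.Second // assumed tolerance
		fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta <= tolerance)
	}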
	I1204 20:08:36.575080   27912 start.go:83] releasing machines lock for "ha-739930", held for 24.56242194s
	I1204 20:08:36.575103   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:08:36.575310   27912 main.go:141] libmachine: (ha-739930) Calling .GetIP
	I1204 20:08:36.577787   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.578087   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:36.578125   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.578233   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:08:36.578645   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:08:36.578807   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:08:36.578883   27912 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 20:08:36.578924   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:36.579001   27912 ssh_runner.go:195] Run: cat /version.json
	I1204 20:08:36.579018   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:36.581456   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.581787   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:36.581809   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.581864   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.581930   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:36.582100   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:36.582239   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:36.582276   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:36.582299   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.582396   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:08:36.582566   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:36.582713   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:36.582863   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:36.582989   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:08:36.675618   27912 ssh_runner.go:195] Run: systemctl --version
	I1204 20:08:36.681185   27912 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 20:08:36.833908   27912 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 20:08:36.839964   27912 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 20:08:36.840024   27912 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 20:08:36.855758   27912 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 20:08:36.855780   27912 start.go:495] detecting cgroup driver to use...
	I1204 20:08:36.855848   27912 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 20:08:36.870692   27912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 20:08:36.883541   27912 docker.go:217] disabling cri-docker service (if available) ...
	I1204 20:08:36.883596   27912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 20:08:36.896118   27912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 20:08:36.908920   27912 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 20:08:37.025056   27912 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 20:08:37.187310   27912 docker.go:233] disabling docker service ...
	I1204 20:08:37.187365   27912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 20:08:37.200934   27912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 20:08:37.212871   27912 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 20:08:37.332646   27912 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 20:08:37.440309   27912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 20:08:37.453353   27912 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 20:08:37.470970   27912 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 20:08:37.471030   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:08:37.480927   27912 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 20:08:37.481009   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:08:37.491149   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:08:37.500802   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:08:37.510374   27912 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 20:08:37.520079   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:08:37.529955   27912 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:08:37.545993   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:08:37.555622   27912 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 20:08:37.564180   27912 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 20:08:37.564228   27912 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 20:08:37.576296   27912 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 20:08:37.585144   27912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 20:08:37.693931   27912 ssh_runner.go:195] Run: sudo systemctl restart crio
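	The sequence above writes /etc/crictl.yaml and patches /etc/crio/crio.conf.d/02-crio.conf with sed before reloading systemd and restarting CRI-O. Piecing the logged commands together, the touched settings should end up roughly as follows; this is inferred from the sed expressions, not read back from the host, and the surrounding layout of the drop-in file is an assumption:

	# /etc/crictl.yaml
	runtime-endpoint: unix:///var/run/crio/crio.sock

	# /etc/crio/crio.conf.d/02-crio.conf (relevant keys only)
	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]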
	I1204 20:08:37.777449   27912 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 20:08:37.777509   27912 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 20:08:37.781553   27912 start.go:563] Will wait 60s for crictl version
	I1204 20:08:37.781604   27912 ssh_runner.go:195] Run: which crictl
	I1204 20:08:37.784811   27912 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 20:08:37.822634   27912 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 20:08:37.822702   27912 ssh_runner.go:195] Run: crio --version
	I1204 20:08:37.848190   27912 ssh_runner.go:195] Run: crio --version
	I1204 20:08:37.873431   27912 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 20:08:37.874606   27912 main.go:141] libmachine: (ha-739930) Calling .GetIP
	I1204 20:08:37.877259   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:37.877590   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:37.877619   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:37.877786   27912 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1204 20:08:37.881175   27912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 20:08:37.892903   27912 kubeadm.go:883] updating cluster {Name:ha-739930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 20:08:37.892996   27912 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 20:08:37.893068   27912 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 20:08:37.926070   27912 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1204 20:08:37.926123   27912 ssh_runner.go:195] Run: which lz4
	I1204 20:08:37.929507   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1204 20:08:37.929636   27912 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1204 20:08:37.933391   27912 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1204 20:08:37.933415   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1204 20:08:39.139354   27912 crio.go:462] duration metric: took 1.209791733s to copy over tarball
	I1204 20:08:39.139460   27912 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1204 20:08:41.096167   27912 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.956678939s)
	I1204 20:08:41.096191   27912 crio.go:469] duration metric: took 1.956790325s to extract the tarball
	I1204 20:08:41.096199   27912 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1204 20:08:41.132019   27912 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 20:08:41.174932   27912 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 20:08:41.174955   27912 cache_images.go:84] Images are preloaded, skipping loading
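	Above, the driver decides whether the preload tarball is needed by listing images through crictl and looking for the expected kube-apiserver tag, then re-checks after copying and extracting preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4. A hedged sketch of that check; the JSON field names follow the CRI ListImagesResponse shape and are assumptions here, not taken from minikube's code:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type criImages struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		var imgs criImages
		if err := json.Unmarshal(out, &imgs); err != nil {
			fmt.Println("parse failed:", err)
			return
		}
		want := "registry.k8s.io/kube-apiserver:v1.31.2"
		for _, img := range imgs.Images {
			for _, tag := range img.RepoTags {
				if tag == want {
					fmt.Println("all images are preloaded for cri-o runtime")
					return
				}
			}
		}
		fmt.Println("assuming images are not preloaded")
	}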
	I1204 20:08:41.174962   27912 kubeadm.go:934] updating node { 192.168.39.183 8443 v1.31.2 crio true true} ...
	I1204 20:08:41.175056   27912 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-739930 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.183
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 20:08:41.175118   27912 ssh_runner.go:195] Run: crio config
	I1204 20:08:41.217894   27912 cni.go:84] Creating CNI manager for ""
	I1204 20:08:41.217917   27912 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1204 20:08:41.217927   27912 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 20:08:41.217952   27912 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.183 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-739930 NodeName:ha-739930 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.183"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.183 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 20:08:41.218081   27912 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.183
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-739930"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.183"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.183"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
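minikube renders the kubeadm.yaml shown above from Go text/template definitions, substituting the node IP, node name, subnets and Kubernetes version. A small self-contained sketch of that rendering step (the template text and struct fields are illustrative, not minikube's real templates):

```go
package main

import (
	"os"
	"text/template"
)

// kubeadmValues is an illustrative subset of the values substituted into the config above.
type kubeadmValues struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
	ServiceSubnet    string
	K8sVersion       string
}

const initConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	tmpl := template.Must(template.New("kubeadm").Parse(initConfigTmpl))
	vals := kubeadmValues{
		AdvertiseAddress: "192.168.39.183",
		BindPort:         8443,
		NodeName:         "ha-739930",
		PodSubnet:        "10.244.0.0/16",
		ServiceSubnet:    "10.96.0.0/12",
		K8sVersion:       "v1.31.2",
	}
	// Write the rendered manifest to stdout; minikube scp's the equivalent to /var/tmp/minikube/kubeadm.yaml.new.
	if err := tmpl.Execute(os.Stdout, vals); err != nil {
		panic(err)
	}
}
```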
	
	I1204 20:08:41.218111   27912 kube-vip.go:115] generating kube-vip config ...
	I1204 20:08:41.218165   27912 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1204 20:08:41.233083   27912 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1204 20:08:41.233174   27912 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
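Per the environment above, kube-vip runs a leader election (lease duration 5s, renew deadline 3s, retry period 1s) and the winner binds the virtual IP 192.168.39.254 on eth0, so the API server stays reachable at one address across control-plane nodes. A minimal sketch that probes whether the VIP is answering once the control plane is up (address and port taken from the config above):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The HA virtual IP and API server port from the kube-vip config above.
	addr := net.JoinHostPort("192.168.39.254", "8443")
	for i := 0; i < 10; i++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("VIP is answering on", addr)
			return
		}
		fmt.Println("VIP not ready yet:", err)
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for", addr)
}
```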
	I1204 20:08:41.233229   27912 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 20:08:41.242410   27912 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 20:08:41.242479   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1204 20:08:41.251172   27912 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1204 20:08:41.266346   27912 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 20:08:41.281669   27912 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1204 20:08:41.296753   27912 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1204 20:08:41.311501   27912 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1204 20:08:41.314975   27912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 20:08:41.325862   27912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 20:08:41.458198   27912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 20:08:41.473798   27912 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930 for IP: 192.168.39.183
	I1204 20:08:41.473814   27912 certs.go:194] generating shared ca certs ...
	I1204 20:08:41.473829   27912 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:08:41.473951   27912 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 20:08:41.473998   27912 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 20:08:41.474012   27912 certs.go:256] generating profile certs ...
	I1204 20:08:41.474071   27912 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.key
	I1204 20:08:41.474104   27912 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.crt with IP's: []
	I1204 20:08:41.679553   27912 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.crt ...
	I1204 20:08:41.679577   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.crt: {Name:mk3cb32626a63b25e9bcb53dbf57982e8c59176a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:08:41.679756   27912 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.key ...
	I1204 20:08:41.679770   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.key: {Name:mk5952f9a719bbb3868bb675769b7b60346c6fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:08:41.679866   27912 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.84e45395
	I1204 20:08:41.679888   27912 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.84e45395 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.183 192.168.39.254]
	I1204 20:08:42.002083   27912 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.84e45395 ...
	I1204 20:08:42.002109   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.84e45395: {Name:mk5f9c87f1a9d17c216fb1ba76a871a4d200a2f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:08:42.002298   27912 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.84e45395 ...
	I1204 20:08:42.002314   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.84e45395: {Name:mkbc19c0135d212682268a777ef3380b2e19b0ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:08:42.002409   27912 certs.go:381] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.84e45395 -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt
	I1204 20:08:42.002519   27912 certs.go:385] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.84e45395 -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key
	I1204 20:08:42.002573   27912 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key
	I1204 20:08:42.002587   27912 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.crt with IP's: []
	I1204 20:08:42.211018   27912 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.crt ...
	I1204 20:08:42.211049   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.crt: {Name:mkf1a9add2f9343bc4f70a7fa70f135cc4d00f4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:08:42.211250   27912 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key ...
	I1204 20:08:42.211265   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key: {Name:mkb8fc6229780db95a674383629b517d0cfa035d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
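The apiserver certificate above is issued for the kubernetes service IP, loopback, the node IP and the HA virtual IP. A minimal, self-contained sketch of issuing a certificate with those IP SANs using crypto/x509 (a self-signed stand-in rather than minikube's own crypto helpers; the 3-year lifetime matches the CertExpiration of 26280h0m0s seen in the cluster config later in this log):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // ~3 years
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs matching the log above: service IP, loopback, node IP, HA VIP.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.183"),
			net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```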
	I1204 20:08:42.211361   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1204 20:08:42.211400   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1204 20:08:42.211422   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1204 20:08:42.211442   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1204 20:08:42.211459   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1204 20:08:42.211477   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1204 20:08:42.211491   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1204 20:08:42.211508   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1204 20:08:42.211575   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 20:08:42.211622   27912 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 20:08:42.211635   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 20:08:42.211671   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 20:08:42.211703   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 20:08:42.211734   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 20:08:42.211789   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 20:08:42.211826   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem -> /usr/share/ca-certificates/17743.pem
	I1204 20:08:42.211847   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> /usr/share/ca-certificates/177432.pem
	I1204 20:08:42.211866   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:08:42.212397   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 20:08:42.248354   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 20:08:42.283210   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 20:08:42.315759   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 20:08:42.337377   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1204 20:08:42.359236   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1204 20:08:42.380567   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 20:08:42.402068   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1204 20:08:42.423840   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 20:08:42.445088   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 20:08:42.466154   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 20:08:42.487261   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 20:08:42.502237   27912 ssh_runner.go:195] Run: openssl version
	I1204 20:08:42.507399   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 20:08:42.517386   27912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:08:42.521412   27912 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:08:42.521456   27912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:08:42.526682   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 20:08:42.536595   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 20:08:42.546422   27912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 20:08:42.550778   27912 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 20:08:42.550834   27912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 20:08:42.556366   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 20:08:42.567110   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 20:08:42.577648   27912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 20:08:42.581927   27912 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 20:08:42.581970   27912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 20:08:42.587418   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
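The openssl/ln steps above install each CA into the host trust store under its OpenSSL subject-hash name (for example b5213941.0 for minikubeCA.pem). A minimal sketch of the same two steps, shelling out to openssl for the hash (paths copied from the log):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA links a CA certificate into /etc/ssl/certs under its subject-hash name,
// mirroring the `openssl x509 -hash -noout` plus `ln -fs` steps in the log above.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link, as `ln -fs` would
	return os.Symlink(pemPath, link)
}

func main() {
	for _, p := range []string{
		"/usr/share/ca-certificates/minikubeCA.pem",
		"/usr/share/ca-certificates/17743.pem",
		"/usr/share/ca-certificates/177432.pem",
	} {
		if err := installCA(p); err != nil {
			fmt.Println(err)
		}
	}
}
```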
	I1204 20:08:42.598017   27912 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 20:08:42.601905   27912 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1204 20:08:42.601960   27912 kubeadm.go:392] StartCluster: {Name:ha-739930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-739930 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 20:08:42.602029   27912 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 20:08:42.602067   27912 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 20:08:42.638904   27912 cri.go:89] found id: ""
	I1204 20:08:42.638964   27912 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 20:08:42.648459   27912 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 20:08:42.657551   27912 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 20:08:42.666519   27912 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 20:08:42.666536   27912 kubeadm.go:157] found existing configuration files:
	
	I1204 20:08:42.666571   27912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 20:08:42.675036   27912 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 20:08:42.675086   27912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 20:08:42.683928   27912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 20:08:42.692253   27912 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 20:08:42.692304   27912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 20:08:42.701014   27912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 20:08:42.709166   27912 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 20:08:42.709204   27912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 20:08:42.718070   27912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 20:08:42.726526   27912 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 20:08:42.726584   27912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 20:08:42.735312   27912 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 20:08:42.947971   27912 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 20:08:54.006500   27912 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1204 20:08:54.006550   27912 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 20:08:54.006630   27912 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 20:08:54.006748   27912 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 20:08:54.006901   27912 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1204 20:08:54.006999   27912 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 20:08:54.008316   27912 out.go:235]   - Generating certificates and keys ...
	I1204 20:08:54.008397   27912 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 20:08:54.008459   27912 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 20:08:54.008548   27912 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1204 20:08:54.008635   27912 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1204 20:08:54.008695   27912 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1204 20:08:54.008737   27912 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1204 20:08:54.008784   27912 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1204 20:08:54.008879   27912 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-739930 localhost] and IPs [192.168.39.183 127.0.0.1 ::1]
	I1204 20:08:54.008924   27912 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1204 20:08:54.009023   27912 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-739930 localhost] and IPs [192.168.39.183 127.0.0.1 ::1]
	I1204 20:08:54.009133   27912 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1204 20:08:54.009245   27912 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1204 20:08:54.009321   27912 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1204 20:08:54.009403   27912 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 20:08:54.009487   27912 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 20:08:54.009570   27912 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1204 20:08:54.009644   27912 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 20:08:54.009733   27912 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 20:08:54.009810   27912 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 20:08:54.009903   27912 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 20:08:54.009962   27912 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 20:08:54.011358   27912 out.go:235]   - Booting up control plane ...
	I1204 20:08:54.011484   27912 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 20:08:54.011569   27912 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 20:08:54.011635   27912 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 20:08:54.011728   27912 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 20:08:54.011808   27912 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 20:08:54.011842   27912 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 20:08:54.011948   27912 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1204 20:08:54.012038   27912 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1204 20:08:54.012094   27912 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001462808s
	I1204 20:08:54.012172   27912 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1204 20:08:54.012262   27912 kubeadm.go:310] [api-check] The API server is healthy after 6.02019816s
	I1204 20:08:54.012392   27912 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1204 20:08:54.012536   27912 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1204 20:08:54.012619   27912 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1204 20:08:54.012799   27912 kubeadm.go:310] [mark-control-plane] Marking the node ha-739930 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1204 20:08:54.012886   27912 kubeadm.go:310] [bootstrap-token] Using token: borrl1.p9d68mzgpldkynyz
	I1204 20:08:54.013953   27912 out.go:235]   - Configuring RBAC rules ...
	I1204 20:08:54.014046   27912 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1204 20:08:54.014140   27912 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1204 20:08:54.014307   27912 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1204 20:08:54.014473   27912 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1204 20:08:54.014571   27912 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1204 20:08:54.014670   27912 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1204 20:08:54.014826   27912 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1204 20:08:54.014865   27912 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1204 20:08:54.014923   27912 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1204 20:08:54.014933   27912 kubeadm.go:310] 
	I1204 20:08:54.015010   27912 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1204 20:08:54.015019   27912 kubeadm.go:310] 
	I1204 20:08:54.015144   27912 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1204 20:08:54.015156   27912 kubeadm.go:310] 
	I1204 20:08:54.015195   27912 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1204 20:08:54.015270   27912 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1204 20:08:54.015320   27912 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1204 20:08:54.015326   27912 kubeadm.go:310] 
	I1204 20:08:54.015392   27912 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1204 20:08:54.015402   27912 kubeadm.go:310] 
	I1204 20:08:54.015442   27912 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1204 20:08:54.015451   27912 kubeadm.go:310] 
	I1204 20:08:54.015493   27912 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1204 20:08:54.015582   27912 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1204 20:08:54.015675   27912 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1204 20:08:54.015684   27912 kubeadm.go:310] 
	I1204 20:08:54.015786   27912 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1204 20:08:54.015895   27912 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1204 20:08:54.015905   27912 kubeadm.go:310] 
	I1204 20:08:54.016003   27912 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token borrl1.p9d68mzgpldkynyz \
	I1204 20:08:54.016093   27912 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 \
	I1204 20:08:54.016113   27912 kubeadm.go:310] 	--control-plane 
	I1204 20:08:54.016117   27912 kubeadm.go:310] 
	I1204 20:08:54.016205   27912 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1204 20:08:54.016217   27912 kubeadm.go:310] 
	I1204 20:08:54.016293   27912 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token borrl1.p9d68mzgpldkynyz \
	I1204 20:08:54.016397   27912 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 
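The --discovery-token-ca-cert-hash printed by kubeadm above is the SHA-256 of the cluster CA's Subject Public Key Info, which joining nodes use to pin the CA. A minimal sketch that recomputes it from the CA certificate (the path is the one used throughout this log):

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// CA certificate path used throughout the log above.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm's discovery hash is the SHA-256 of the CA's SubjectPublicKeyInfo.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}
```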
	I1204 20:08:54.016411   27912 cni.go:84] Creating CNI manager for ""
	I1204 20:08:54.016416   27912 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1204 20:08:54.017939   27912 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1204 20:08:54.019064   27912 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1204 20:08:54.023950   27912 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1204 20:08:54.023967   27912 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1204 20:08:54.041186   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1204 20:08:54.359013   27912 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 20:08:54.359083   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 20:08:54.359121   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-739930 minikube.k8s.io/updated_at=2024_12_04T20_08_54_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59 minikube.k8s.io/name=ha-739930 minikube.k8s.io/primary=true
	I1204 20:08:54.395990   27912 ops.go:34] apiserver oom_adj: -16
	I1204 20:08:54.548524   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 20:08:55.049558   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 20:08:55.548661   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 20:08:56.048619   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 20:08:56.549070   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 20:08:57.048848   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 20:08:57.549554   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 20:08:58.048830   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 20:08:58.161390   27912 kubeadm.go:1113] duration metric: took 3.80235484s to wait for elevateKubeSystemPrivileges
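The burst of "kubectl get sa default" runs above is a readiness poll: minikube waits for the default service account to appear (it only exists once the controller-manager is running) before the cluster-admin binding takes effect. A minimal stdlib sketch of the same loop (binary path and kubeconfig copied from the log; the interval and timeout are illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.31.2/kubectl"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Exit status 0 means the default service account exists.
		err := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			fmt.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for default service account")
}
```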
	I1204 20:08:58.161423   27912 kubeadm.go:394] duration metric: took 15.559467425s to StartCluster
	I1204 20:08:58.161444   27912 settings.go:142] acquiring lock: {Name:mk51df5708ef0b8fe125ead566b8d3e857234e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:08:58.161514   27912 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 20:08:58.162310   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/kubeconfig: {Name:mk338cb7deb77a607d0c199d94a556bdfd19bef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:08:58.162533   27912 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 20:08:58.162562   27912 start.go:241] waiting for startup goroutines ...
	I1204 20:08:58.162544   27912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1204 20:08:58.162557   27912 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 20:08:58.162652   27912 addons.go:69] Setting storage-provisioner=true in profile "ha-739930"
	I1204 20:08:58.162661   27912 addons.go:69] Setting default-storageclass=true in profile "ha-739930"
	I1204 20:08:58.162674   27912 addons.go:234] Setting addon storage-provisioner=true in "ha-739930"
	I1204 20:08:58.162693   27912 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-739930"
	I1204 20:08:58.162706   27912 host.go:66] Checking if "ha-739930" exists ...
	I1204 20:08:58.162718   27912 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:08:58.163133   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:08:58.163137   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:08:58.163158   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:08:58.163161   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:08:58.177830   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45307
	I1204 20:08:58.177986   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38189
	I1204 20:08:58.178299   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:08:58.178427   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:08:58.178779   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:08:58.178807   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:08:58.178981   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:08:58.179001   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:08:58.179143   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:08:58.179321   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:08:58.179506   27912 main.go:141] libmachine: (ha-739930) Calling .GetState
	I1204 20:08:58.179650   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:08:58.179676   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:08:58.181633   27912 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 20:08:58.181895   27912 kapi.go:59] client config for ha-739930: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.crt", KeyFile:"/home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.key", CAFile:"/home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1204 20:08:58.182308   27912 cert_rotation.go:140] Starting client certificate rotation controller
	I1204 20:08:58.182493   27912 addons.go:234] Setting addon default-storageclass=true in "ha-739930"
	I1204 20:08:58.182532   27912 host.go:66] Checking if "ha-739930" exists ...
	I1204 20:08:58.182790   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:08:58.182824   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:08:58.194517   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40647
	I1204 20:08:58.194972   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:08:58.195484   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:08:58.195512   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:08:58.195872   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:08:58.196070   27912 main.go:141] libmachine: (ha-739930) Calling .GetState
	I1204 20:08:58.197298   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45747
	I1204 20:08:58.197610   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:08:58.197777   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:08:58.198114   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:08:58.198138   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:08:58.198429   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:08:58.198834   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:08:58.198862   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:08:58.199309   27912 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 20:08:58.200430   27912 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 20:08:58.200452   27912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 20:08:58.200469   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:58.203367   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:58.203781   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:58.203808   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:58.203943   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:58.204099   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:58.204233   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:58.204358   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:08:58.213101   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33355
	I1204 20:08:58.213504   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:08:58.214031   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:08:58.214059   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:08:58.214380   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:08:58.214549   27912 main.go:141] libmachine: (ha-739930) Calling .GetState
	I1204 20:08:58.216016   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:08:58.216199   27912 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 20:08:58.216211   27912 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 20:08:58.216223   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:58.218960   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:58.219280   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:58.219317   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:58.219479   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:58.219661   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:58.219835   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:58.219997   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:08:58.277316   27912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1204 20:08:58.357820   27912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 20:08:58.374108   27912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 20:08:58.721001   27912 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
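The long sed pipeline above splices a hosts block into the CoreDNS Corefile so that host.minikube.internal resolves to the gateway 192.168.39.1. A minimal Go sketch of the same string surgery (the Corefile fragment is illustrative; the real one lives in the kube-system/coredns ConfigMap):

```go
package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts{} stanza before the `forward . /etc/resolv.conf`
// line of a Corefile, mirroring what the sed pipeline in the log does.
func injectHostRecord(corefile, hostIP string) string {
	stanza := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var b strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			b.WriteString(stanza)
		}
		b.WriteString(line)
	}
	return b.String()
}

func main() {
	// Illustrative Corefile fragment only.
	corefile := `.:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
}
`
	fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
}
```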
	I1204 20:08:59.051895   27912 main.go:141] libmachine: Making call to close driver server
	I1204 20:08:59.051921   27912 main.go:141] libmachine: (ha-739930) Calling .Close
	I1204 20:08:59.051951   27912 main.go:141] libmachine: Making call to close driver server
	I1204 20:08:59.051972   27912 main.go:141] libmachine: (ha-739930) Calling .Close
	I1204 20:08:59.052204   27912 main.go:141] libmachine: Successfully made call to close driver server
	I1204 20:08:59.052222   27912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 20:08:59.052231   27912 main.go:141] libmachine: Making call to close driver server
	I1204 20:08:59.052241   27912 main.go:141] libmachine: (ha-739930) Calling .Close
	I1204 20:08:59.052293   27912 main.go:141] libmachine: Successfully made call to close driver server
	I1204 20:08:59.052317   27912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 20:08:59.052325   27912 main.go:141] libmachine: Making call to close driver server
	I1204 20:08:59.052322   27912 main.go:141] libmachine: (ha-739930) DBG | Closing plugin on server side
	I1204 20:08:59.052332   27912 main.go:141] libmachine: (ha-739930) Calling .Close
	I1204 20:08:59.052462   27912 main.go:141] libmachine: Successfully made call to close driver server
	I1204 20:08:59.052473   27912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 20:08:59.053776   27912 main.go:141] libmachine: (ha-739930) DBG | Closing plugin on server side
	I1204 20:08:59.053794   27912 main.go:141] libmachine: Successfully made call to close driver server
	I1204 20:08:59.053805   27912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 20:08:59.053870   27912 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1204 20:08:59.053894   27912 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1204 20:08:59.053992   27912 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1204 20:08:59.054003   27912 round_trippers.go:469] Request Headers:
	I1204 20:08:59.054010   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:08:59.054014   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:08:59.064602   27912 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1204 20:08:59.065317   27912 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1204 20:08:59.065335   27912 round_trippers.go:469] Request Headers:
	I1204 20:08:59.065347   27912 round_trippers.go:473]     Content-Type: application/json
	I1204 20:08:59.065354   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:08:59.065359   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:08:59.068638   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:08:59.068754   27912 main.go:141] libmachine: Making call to close driver server
	I1204 20:08:59.068772   27912 main.go:141] libmachine: (ha-739930) Calling .Close
	I1204 20:08:59.068971   27912 main.go:141] libmachine: Successfully made call to close driver server
	I1204 20:08:59.068989   27912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 20:08:59.069005   27912 main.go:141] libmachine: (ha-739930) DBG | Closing plugin on server side
	I1204 20:08:59.071139   27912 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1204 20:08:59.072109   27912 addons.go:510] duration metric: took 909.550558ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1204 20:08:59.072142   27912 start.go:246] waiting for cluster config update ...
	I1204 20:08:59.072151   27912 start.go:255] writing updated cluster config ...
	I1204 20:08:59.073463   27912 out.go:201] 
	I1204 20:08:59.074725   27912 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:08:59.074813   27912 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/config.json ...
	I1204 20:08:59.076300   27912 out.go:177] * Starting "ha-739930-m02" control-plane node in "ha-739930" cluster
	I1204 20:08:59.077339   27912 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 20:08:59.077359   27912 cache.go:56] Caching tarball of preloaded images
	I1204 20:08:59.077447   27912 preload.go:172] Found /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 20:08:59.077461   27912 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 20:08:59.077541   27912 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/config.json ...
	I1204 20:08:59.077723   27912 start.go:360] acquireMachinesLock for ha-739930-m02: {Name:mkf124e8b45170ae95981b24944344de6899c5b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 20:08:59.077776   27912 start.go:364] duration metric: took 30.982µs to acquireMachinesLock for "ha-739930-m02"
	I1204 20:08:59.077798   27912 start.go:93] Provisioning new machine with config: &{Name:ha-739930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 20:08:59.077880   27912 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1204 20:08:59.079261   27912 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 20:08:59.079340   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:08:59.079368   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:08:59.093684   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44915
	I1204 20:08:59.094078   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:08:59.094558   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:08:59.094579   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:08:59.094913   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:08:59.095089   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetMachineName
	I1204 20:08:59.095236   27912 main.go:141] libmachine: (ha-739930-m02) Calling .DriverName
	I1204 20:08:59.095406   27912 start.go:159] libmachine.API.Create for "ha-739930" (driver="kvm2")
	I1204 20:08:59.095437   27912 client.go:168] LocalClient.Create starting
	I1204 20:08:59.095465   27912 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem
	I1204 20:08:59.095493   27912 main.go:141] libmachine: Decoding PEM data...
	I1204 20:08:59.095505   27912 main.go:141] libmachine: Parsing certificate...
	I1204 20:08:59.095551   27912 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem
	I1204 20:08:59.095568   27912 main.go:141] libmachine: Decoding PEM data...
	I1204 20:08:59.095579   27912 main.go:141] libmachine: Parsing certificate...
	I1204 20:08:59.095595   27912 main.go:141] libmachine: Running pre-create checks...
	I1204 20:08:59.095602   27912 main.go:141] libmachine: (ha-739930-m02) Calling .PreCreateCheck
	I1204 20:08:59.095756   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetConfigRaw
	I1204 20:08:59.096074   27912 main.go:141] libmachine: Creating machine...
	I1204 20:08:59.096086   27912 main.go:141] libmachine: (ha-739930-m02) Calling .Create
	I1204 20:08:59.096214   27912 main.go:141] libmachine: (ha-739930-m02) Creating KVM machine...
	I1204 20:08:59.097249   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found existing default KVM network
	I1204 20:08:59.097426   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found existing private KVM network mk-ha-739930
	I1204 20:08:59.097515   27912 main.go:141] libmachine: (ha-739930-m02) Setting up store path in /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02 ...
	I1204 20:08:59.097549   27912 main.go:141] libmachine: (ha-739930-m02) Building disk image from file:///home/jenkins/minikube-integration/19985-10581/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1204 20:08:59.097603   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:08:59.097507   28291 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 20:08:59.097713   27912 main.go:141] libmachine: (ha-739930-m02) Downloading /home/jenkins/minikube-integration/19985-10581/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19985-10581/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1204 20:08:59.334730   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:08:59.334621   28291 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02/id_rsa...
	I1204 20:08:59.653553   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:08:59.653411   28291 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02/ha-739930-m02.rawdisk...
	I1204 20:08:59.653587   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Writing magic tar header
	I1204 20:08:59.653647   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Writing SSH key tar header
	I1204 20:08:59.653678   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:08:59.653561   28291 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02 ...
	I1204 20:08:59.653704   27912 main.go:141] libmachine: (ha-739930-m02) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02 (perms=drwx------)
	I1204 20:08:59.653726   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02
	I1204 20:08:59.653737   27912 main.go:141] libmachine: (ha-739930-m02) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube/machines (perms=drwxr-xr-x)
	I1204 20:08:59.653758   27912 main.go:141] libmachine: (ha-739930-m02) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube (perms=drwxr-xr-x)
	I1204 20:08:59.653773   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube/machines
	I1204 20:08:59.653785   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 20:08:59.653796   27912 main.go:141] libmachine: (ha-739930-m02) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581 (perms=drwxrwxr-x)
	I1204 20:08:59.653813   27912 main.go:141] libmachine: (ha-739930-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1204 20:08:59.653825   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581
	I1204 20:08:59.653838   27912 main.go:141] libmachine: (ha-739930-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1204 20:08:59.653850   27912 main.go:141] libmachine: (ha-739930-m02) Creating domain...
	I1204 20:08:59.653865   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1204 20:08:59.653875   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Checking permissions on dir: /home/jenkins
	I1204 20:08:59.653889   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Checking permissions on dir: /home
	I1204 20:08:59.653903   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Skipping /home - not owner
	I1204 20:08:59.654725   27912 main.go:141] libmachine: (ha-739930-m02) define libvirt domain using xml: 
	I1204 20:08:59.654740   27912 main.go:141] libmachine: (ha-739930-m02) <domain type='kvm'>
	I1204 20:08:59.654751   27912 main.go:141] libmachine: (ha-739930-m02)   <name>ha-739930-m02</name>
	I1204 20:08:59.654763   27912 main.go:141] libmachine: (ha-739930-m02)   <memory unit='MiB'>2200</memory>
	I1204 20:08:59.654775   27912 main.go:141] libmachine: (ha-739930-m02)   <vcpu>2</vcpu>
	I1204 20:08:59.654788   27912 main.go:141] libmachine: (ha-739930-m02)   <features>
	I1204 20:08:59.654796   27912 main.go:141] libmachine: (ha-739930-m02)     <acpi/>
	I1204 20:08:59.654806   27912 main.go:141] libmachine: (ha-739930-m02)     <apic/>
	I1204 20:08:59.654818   27912 main.go:141] libmachine: (ha-739930-m02)     <pae/>
	I1204 20:08:59.654837   27912 main.go:141] libmachine: (ha-739930-m02)     
	I1204 20:08:59.654847   27912 main.go:141] libmachine: (ha-739930-m02)   </features>
	I1204 20:08:59.654851   27912 main.go:141] libmachine: (ha-739930-m02)   <cpu mode='host-passthrough'>
	I1204 20:08:59.654858   27912 main.go:141] libmachine: (ha-739930-m02)   
	I1204 20:08:59.654862   27912 main.go:141] libmachine: (ha-739930-m02)   </cpu>
	I1204 20:08:59.654870   27912 main.go:141] libmachine: (ha-739930-m02)   <os>
	I1204 20:08:59.654874   27912 main.go:141] libmachine: (ha-739930-m02)     <type>hvm</type>
	I1204 20:08:59.654882   27912 main.go:141] libmachine: (ha-739930-m02)     <boot dev='cdrom'/>
	I1204 20:08:59.654892   27912 main.go:141] libmachine: (ha-739930-m02)     <boot dev='hd'/>
	I1204 20:08:59.654905   27912 main.go:141] libmachine: (ha-739930-m02)     <bootmenu enable='no'/>
	I1204 20:08:59.654916   27912 main.go:141] libmachine: (ha-739930-m02)   </os>
	I1204 20:08:59.654941   27912 main.go:141] libmachine: (ha-739930-m02)   <devices>
	I1204 20:08:59.654966   27912 main.go:141] libmachine: (ha-739930-m02)     <disk type='file' device='cdrom'>
	I1204 20:08:59.654982   27912 main.go:141] libmachine: (ha-739930-m02)       <source file='/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02/boot2docker.iso'/>
	I1204 20:08:59.654997   27912 main.go:141] libmachine: (ha-739930-m02)       <target dev='hdc' bus='scsi'/>
	I1204 20:08:59.655013   27912 main.go:141] libmachine: (ha-739930-m02)       <readonly/>
	I1204 20:08:59.655023   27912 main.go:141] libmachine: (ha-739930-m02)     </disk>
	I1204 20:08:59.655035   27912 main.go:141] libmachine: (ha-739930-m02)     <disk type='file' device='disk'>
	I1204 20:08:59.655049   27912 main.go:141] libmachine: (ha-739930-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1204 20:08:59.655067   27912 main.go:141] libmachine: (ha-739930-m02)       <source file='/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02/ha-739930-m02.rawdisk'/>
	I1204 20:08:59.655083   27912 main.go:141] libmachine: (ha-739930-m02)       <target dev='hda' bus='virtio'/>
	I1204 20:08:59.655095   27912 main.go:141] libmachine: (ha-739930-m02)     </disk>
	I1204 20:08:59.655104   27912 main.go:141] libmachine: (ha-739930-m02)     <interface type='network'>
	I1204 20:08:59.655117   27912 main.go:141] libmachine: (ha-739930-m02)       <source network='mk-ha-739930'/>
	I1204 20:08:59.655129   27912 main.go:141] libmachine: (ha-739930-m02)       <model type='virtio'/>
	I1204 20:08:59.655141   27912 main.go:141] libmachine: (ha-739930-m02)     </interface>
	I1204 20:08:59.655157   27912 main.go:141] libmachine: (ha-739930-m02)     <interface type='network'>
	I1204 20:08:59.655176   27912 main.go:141] libmachine: (ha-739930-m02)       <source network='default'/>
	I1204 20:08:59.655187   27912 main.go:141] libmachine: (ha-739930-m02)       <model type='virtio'/>
	I1204 20:08:59.655199   27912 main.go:141] libmachine: (ha-739930-m02)     </interface>
	I1204 20:08:59.655208   27912 main.go:141] libmachine: (ha-739930-m02)     <serial type='pty'>
	I1204 20:08:59.655231   27912 main.go:141] libmachine: (ha-739930-m02)       <target port='0'/>
	I1204 20:08:59.655250   27912 main.go:141] libmachine: (ha-739930-m02)     </serial>
	I1204 20:08:59.655268   27912 main.go:141] libmachine: (ha-739930-m02)     <console type='pty'>
	I1204 20:08:59.655284   27912 main.go:141] libmachine: (ha-739930-m02)       <target type='serial' port='0'/>
	I1204 20:08:59.655295   27912 main.go:141] libmachine: (ha-739930-m02)     </console>
	I1204 20:08:59.655302   27912 main.go:141] libmachine: (ha-739930-m02)     <rng model='virtio'>
	I1204 20:08:59.655315   27912 main.go:141] libmachine: (ha-739930-m02)       <backend model='random'>/dev/random</backend>
	I1204 20:08:59.655321   27912 main.go:141] libmachine: (ha-739930-m02)     </rng>
	I1204 20:08:59.655329   27912 main.go:141] libmachine: (ha-739930-m02)     
	I1204 20:08:59.655333   27912 main.go:141] libmachine: (ha-739930-m02)     
	I1204 20:08:59.655340   27912 main.go:141] libmachine: (ha-739930-m02)   </devices>
	I1204 20:08:59.655345   27912 main.go:141] libmachine: (ha-739930-m02) </domain>
	I1204 20:08:59.655362   27912 main.go:141] libmachine: (ha-739930-m02) 
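
Note: the libvirt domain definition logged above can be sanity-checked offline before it is handed to the hypervisor. Below is a minimal sketch using only Go's standard encoding/xml package; the struct and file paths are illustrative and are not minikube's own types.

package main

import (
	"encoding/xml"
	"fmt"
	"log"
)

// domain mirrors just the fields visible in the logged libvirt XML.
type domain struct {
	Name   string `xml:"name"`
	Memory string `xml:"memory"`
	VCPU   int    `xml:"vcpu"`
	Disks  []struct {
		Device string `xml:"device,attr"`
	} `xml:"devices>disk"`
	Interfaces []struct {
		Source struct {
			Network string `xml:"network,attr"`
		} `xml:"source"`
	} `xml:"devices>interface"`
}

func main() {
	raw := `<domain type='kvm'>
  <name>ha-739930-m02</name>
  <memory unit='MiB'>2200</memory>
  <vcpu>2</vcpu>
  <devices>
    <disk type='file' device='cdrom'><source file='/tmp/boot2docker.iso'/></disk>
    <disk type='file' device='disk'><source file='/tmp/ha-739930-m02.rawdisk'/></disk>
    <interface type='network'><source network='mk-ha-739930'/></interface>
    <interface type='network'><source network='default'/></interface>
  </devices>
</domain>`
	var d domain
	if err := xml.Unmarshal([]byte(raw), &d); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s: %s MiB, %d vCPU, %d disks, NICs on %q and %q\n",
		d.Name, d.Memory, d.VCPU, len(d.Disks),
		d.Interfaces[0].Source.Network, d.Interfaces[1].Source.Network)
}
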
	I1204 20:08:59.661230   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:69:55:bb in network default
	I1204 20:08:59.661784   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:08:59.661806   27912 main.go:141] libmachine: (ha-739930-m02) Ensuring networks are active...
	I1204 20:08:59.662333   27912 main.go:141] libmachine: (ha-739930-m02) Ensuring network default is active
	I1204 20:08:59.662568   27912 main.go:141] libmachine: (ha-739930-m02) Ensuring network mk-ha-739930 is active
	I1204 20:08:59.662825   27912 main.go:141] libmachine: (ha-739930-m02) Getting domain xml...
	I1204 20:08:59.663438   27912 main.go:141] libmachine: (ha-739930-m02) Creating domain...
	I1204 20:09:00.864454   27912 main.go:141] libmachine: (ha-739930-m02) Waiting to get IP...
	I1204 20:09:00.865262   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:00.865678   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:00.865706   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:00.865644   28291 retry.go:31] will retry after 202.440812ms: waiting for machine to come up
	I1204 20:09:01.070038   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:01.070521   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:01.070539   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:01.070483   28291 retry.go:31] will retry after 379.96661ms: waiting for machine to come up
	I1204 20:09:01.452279   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:01.452670   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:01.452703   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:01.452620   28291 retry.go:31] will retry after 448.23669ms: waiting for machine to come up
	I1204 20:09:01.902848   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:01.903274   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:01.903301   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:01.903230   28291 retry.go:31] will retry after 590.399252ms: waiting for machine to come up
	I1204 20:09:02.495129   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:02.495572   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:02.495602   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:02.495522   28291 retry.go:31] will retry after 535.882434ms: waiting for machine to come up
	I1204 20:09:03.033125   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:03.033552   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:03.033572   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:03.033531   28291 retry.go:31] will retry after 698.598885ms: waiting for machine to come up
	I1204 20:09:03.733894   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:03.734321   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:03.734351   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:03.734276   28291 retry.go:31] will retry after 1.177854854s: waiting for machine to come up
	I1204 20:09:04.914541   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:04.914975   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:04.915005   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:04.914934   28291 retry.go:31] will retry after 1.093246259s: waiting for machine to come up
	I1204 20:09:06.010091   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:06.010517   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:06.010543   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:06.010478   28291 retry.go:31] will retry after 1.613080477s: waiting for machine to come up
	I1204 20:09:07.624874   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:07.625335   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:07.625364   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:07.625313   28291 retry.go:31] will retry after 2.249296346s: waiting for machine to come up
	I1204 20:09:09.875662   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:09.876187   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:09.876218   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:09.876124   28291 retry.go:31] will retry after 2.42642151s: waiting for machine to come up
	I1204 20:09:12.305633   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:12.306060   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:12.306085   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:12.306030   28291 retry.go:31] will retry after 2.221078432s: waiting for machine to come up
	I1204 20:09:14.529048   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:14.529558   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:14.529585   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:14.529522   28291 retry.go:31] will retry after 2.966790247s: waiting for machine to come up
	I1204 20:09:17.499601   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:17.500108   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:17.500137   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:17.500054   28291 retry.go:31] will retry after 4.394406199s: waiting for machine to come up
	I1204 20:09:21.898072   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:21.898515   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has current primary IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:21.898531   27912 main.go:141] libmachine: (ha-739930-m02) Found IP for machine: 192.168.39.216
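
Note: the retry.go lines above show the driver polling the network's DHCP leases with a growing, slightly randomized delay until the guest picks up an address. A minimal sketch of that shape, assuming a hypothetical lookup callback in place of the libvirt lease query (this is not minikube's API):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP keeps calling lookup until it yields an address or the deadline
// passes, sleeping a jittered, doubling delay between attempts.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		if delay < 4*time.Second {
			delay *= 2
		}
	}
	return "", errors.New("timed out waiting for an IP")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 {
			return "", errors.New("no lease yet")
		}
		return "192.168.39.216", nil
	}, 30*time.Second)
	fmt.Println(ip, err)
}
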
	I1204 20:09:21.898543   27912 main.go:141] libmachine: (ha-739930-m02) Reserving static IP address...
	I1204 20:09:21.899016   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find host DHCP lease matching {name: "ha-739930-m02", mac: "52:54:00:91:b2:c1", ip: "192.168.39.216"} in network mk-ha-739930
	I1204 20:09:21.970499   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Getting to WaitForSSH function...
	I1204 20:09:21.970531   27912 main.go:141] libmachine: (ha-739930-m02) Reserved static IP address: 192.168.39.216
	I1204 20:09:21.970544   27912 main.go:141] libmachine: (ha-739930-m02) Waiting for SSH to be available...
	I1204 20:09:21.972885   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:21.973270   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:minikube Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:21.973299   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:21.973444   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Using SSH client type: external
	I1204 20:09:21.973472   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02/id_rsa (-rw-------)
	I1204 20:09:21.973507   27912 main.go:141] libmachine: (ha-739930-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.216 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 20:09:21.973526   27912 main.go:141] libmachine: (ha-739930-m02) DBG | About to run SSH command:
	I1204 20:09:21.973534   27912 main.go:141] libmachine: (ha-739930-m02) DBG | exit 0
	I1204 20:09:22.099805   27912 main.go:141] libmachine: (ha-739930-m02) DBG | SSH cmd err, output: <nil>: 
	I1204 20:09:22.100058   27912 main.go:141] libmachine: (ha-739930-m02) KVM machine creation complete!
	I1204 20:09:22.100415   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetConfigRaw
	I1204 20:09:22.101293   27912 main.go:141] libmachine: (ha-739930-m02) Calling .DriverName
	I1204 20:09:22.101487   27912 main.go:141] libmachine: (ha-739930-m02) Calling .DriverName
	I1204 20:09:22.101644   27912 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1204 20:09:22.101669   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetState
	I1204 20:09:22.102974   27912 main.go:141] libmachine: Detecting operating system of created instance...
	I1204 20:09:22.102992   27912 main.go:141] libmachine: Waiting for SSH to be available...
	I1204 20:09:22.103000   27912 main.go:141] libmachine: Getting to WaitForSSH function...
	I1204 20:09:22.103008   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHHostname
	I1204 20:09:22.105264   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.105562   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:22.105595   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.105759   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHPort
	I1204 20:09:22.105924   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:22.106031   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:22.106146   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHUsername
	I1204 20:09:22.106307   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:09:22.106556   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1204 20:09:22.106582   27912 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1204 20:09:22.210652   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 20:09:22.210674   27912 main.go:141] libmachine: Detecting the provisioner...
	I1204 20:09:22.210689   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHHostname
	I1204 20:09:22.213316   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.213633   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:22.213662   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.213775   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHPort
	I1204 20:09:22.213923   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:22.214102   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:22.214252   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHUsername
	I1204 20:09:22.214405   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:09:22.214561   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1204 20:09:22.214571   27912 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1204 20:09:22.320078   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1204 20:09:22.320145   27912 main.go:141] libmachine: found compatible host: buildroot
	I1204 20:09:22.320155   27912 main.go:141] libmachine: Provisioning with buildroot...
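
Note: provisioner detection above is keyed off the ID field of the /etc/os-release payload returned by `cat /etc/os-release`. A small sketch of parsing that KEY=value format in Go, for illustration only (not the libmachine implementation):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns the KEY=value lines of /etc/os-release into a map,
// stripping the surrounding quotes seen on PRETTY_NAME above.
func parseOSRelease(s string) map[string]string {
	out := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(s))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || !strings.Contains(line, "=") {
			continue
		}
		kv := strings.SplitN(line, "=", 2)
		out[kv[0]] = strings.Trim(kv[1], `"`)
	}
	return out
}

func main() {
	osRelease := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	m := parseOSRelease(osRelease)
	fmt.Println(m["ID"], m["VERSION_ID"]) // buildroot 2023.02.9
}
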
	I1204 20:09:22.320176   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetMachineName
	I1204 20:09:22.320420   27912 buildroot.go:166] provisioning hostname "ha-739930-m02"
	I1204 20:09:22.320451   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetMachineName
	I1204 20:09:22.320599   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHHostname
	I1204 20:09:22.322962   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.323306   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:22.323331   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.323525   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHPort
	I1204 20:09:22.323704   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:22.323837   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:22.323937   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHUsername
	I1204 20:09:22.324095   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:09:22.324248   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1204 20:09:22.324260   27912 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-739930-m02 && echo "ha-739930-m02" | sudo tee /etc/hostname
	I1204 20:09:22.442684   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-739930-m02
	
	I1204 20:09:22.442712   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHHostname
	I1204 20:09:22.445503   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.445841   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:22.445866   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.446028   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHPort
	I1204 20:09:22.446227   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:22.446390   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:22.446547   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHUsername
	I1204 20:09:22.446707   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:09:22.446886   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1204 20:09:22.446908   27912 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-739930-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-739930-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-739930-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 20:09:22.560132   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 20:09:22.560177   27912 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 20:09:22.560210   27912 buildroot.go:174] setting up certificates
	I1204 20:09:22.560227   27912 provision.go:84] configureAuth start
	I1204 20:09:22.560246   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetMachineName
	I1204 20:09:22.560519   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetIP
	I1204 20:09:22.563054   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.563443   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:22.563470   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.563600   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHHostname
	I1204 20:09:22.565613   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.565936   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:22.565961   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.566074   27912 provision.go:143] copyHostCerts
	I1204 20:09:22.566103   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 20:09:22.566138   27912 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 20:09:22.566151   27912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 20:09:22.566226   27912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 20:09:22.566301   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 20:09:22.566318   27912 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 20:09:22.566325   27912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 20:09:22.566349   27912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 20:09:22.566391   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 20:09:22.566409   27912 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 20:09:22.566415   27912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 20:09:22.566442   27912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 20:09:22.566488   27912 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.ha-739930-m02 san=[127.0.0.1 192.168.39.216 ha-739930-m02 localhost minikube]
	I1204 20:09:22.637792   27912 provision.go:177] copyRemoteCerts
	I1204 20:09:22.637844   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 20:09:22.637865   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHHostname
	I1204 20:09:22.640451   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.640844   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:22.640870   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.641017   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHPort
	I1204 20:09:22.641198   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:22.641358   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHUsername
	I1204 20:09:22.641490   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02/id_rsa Username:docker}
	I1204 20:09:22.721358   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1204 20:09:22.721454   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 20:09:22.745038   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1204 20:09:22.745117   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1204 20:09:22.767198   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1204 20:09:22.767272   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 20:09:22.788710   27912 provision.go:87] duration metric: took 228.465669ms to configureAuth
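
Note: configureAuth above copies the host CA material and then generates a per-machine server certificate whose SAN list matches the `san=[...]` entry in the log. A self-contained sketch of producing a certificate with that SAN set using Go's crypto/x509; it is self-signed purely for brevity, whereas minikube signs the server certificate against its CA key.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-739930-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
		DNSNames:     []string{"ha-739930-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.216")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
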
	I1204 20:09:22.788740   27912 buildroot.go:189] setting minikube options for container-runtime
	I1204 20:09:22.788919   27912 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:09:22.788987   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHHostname
	I1204 20:09:22.791733   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.792076   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:22.792099   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.792317   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHPort
	I1204 20:09:22.792506   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:22.792661   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:22.792775   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHUsername
	I1204 20:09:22.792909   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:09:22.793086   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1204 20:09:22.793106   27912 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 20:09:23.010014   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 20:09:23.010040   27912 main.go:141] libmachine: Checking connection to Docker...
	I1204 20:09:23.010051   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetURL
	I1204 20:09:23.011214   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Using libvirt version 6000000
	I1204 20:09:23.013200   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.013524   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:23.013554   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.013737   27912 main.go:141] libmachine: Docker is up and running!
	I1204 20:09:23.013756   27912 main.go:141] libmachine: Reticulating splines...
	I1204 20:09:23.013764   27912 client.go:171] duration metric: took 23.918317311s to LocalClient.Create
	I1204 20:09:23.013791   27912 start.go:167] duration metric: took 23.918385611s to libmachine.API.Create "ha-739930"
	I1204 20:09:23.013802   27912 start.go:293] postStartSetup for "ha-739930-m02" (driver="kvm2")
	I1204 20:09:23.013810   27912 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 20:09:23.013826   27912 main.go:141] libmachine: (ha-739930-m02) Calling .DriverName
	I1204 20:09:23.014037   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 20:09:23.014061   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHHostname
	I1204 20:09:23.016336   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.016674   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:23.016696   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.016826   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHPort
	I1204 20:09:23.017001   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:23.017147   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHUsername
	I1204 20:09:23.017302   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02/id_rsa Username:docker}
	I1204 20:09:23.098690   27912 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 20:09:23.102672   27912 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 20:09:23.102692   27912 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 20:09:23.102751   27912 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 20:09:23.102837   27912 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 20:09:23.102850   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> /etc/ssl/certs/177432.pem
	I1204 20:09:23.102957   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 20:09:23.113316   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 20:09:23.137226   27912 start.go:296] duration metric: took 123.412538ms for postStartSetup
	I1204 20:09:23.137272   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetConfigRaw
	I1204 20:09:23.137827   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetIP
	I1204 20:09:23.140225   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.140510   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:23.140539   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.140708   27912 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/config.json ...
	I1204 20:09:23.140912   27912 start.go:128] duration metric: took 24.063021139s to createHost
	I1204 20:09:23.140935   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHHostname
	I1204 20:09:23.143463   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.143769   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:23.143788   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.143935   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHPort
	I1204 20:09:23.144107   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:23.144264   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:23.144405   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHUsername
	I1204 20:09:23.144585   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:09:23.144731   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1204 20:09:23.144740   27912 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 20:09:23.251984   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733342963.229753214
	
	I1204 20:09:23.252009   27912 fix.go:216] guest clock: 1733342963.229753214
	I1204 20:09:23.252019   27912 fix.go:229] Guest: 2024-12-04 20:09:23.229753214 +0000 UTC Remote: 2024-12-04 20:09:23.140925676 +0000 UTC m=+71.238297049 (delta=88.827538ms)
	I1204 20:09:23.252039   27912 fix.go:200] guest clock delta is within tolerance: 88.827538ms
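
Note: the fix.go lines above compare the guest's `date +%s.%N` against the host clock and only resynchronize when the skew exceeds a tolerance. Recomputing the logged delta from the two timestamps above (the one-second tolerance used here is an assumption for illustration, not minikube's configured value):

package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Unix(1733342963, 229753214) // date +%s.%N reported by the VM
	remote := time.Date(2024, 12, 4, 20, 9, 23, 140925676, time.UTC)
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("delta=%v, within 1s tolerance: %v\n", delta, delta < time.Second)
	// prints delta=88.827538ms, within 1s tolerance: true
}
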
	I1204 20:09:23.252046   27912 start.go:83] releasing machines lock for "ha-739930-m02", held for 24.174259167s
	I1204 20:09:23.252070   27912 main.go:141] libmachine: (ha-739930-m02) Calling .DriverName
	I1204 20:09:23.252303   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetIP
	I1204 20:09:23.254849   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.255234   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:23.255263   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.257539   27912 out.go:177] * Found network options:
	I1204 20:09:23.258745   27912 out.go:177]   - NO_PROXY=192.168.39.183
	W1204 20:09:23.259924   27912 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 20:09:23.259962   27912 main.go:141] libmachine: (ha-739930-m02) Calling .DriverName
	I1204 20:09:23.260454   27912 main.go:141] libmachine: (ha-739930-m02) Calling .DriverName
	I1204 20:09:23.260610   27912 main.go:141] libmachine: (ha-739930-m02) Calling .DriverName
	I1204 20:09:23.260694   27912 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 20:09:23.260738   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHHostname
	W1204 20:09:23.260771   27912 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 20:09:23.260841   27912 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 20:09:23.260863   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHHostname
	I1204 20:09:23.263151   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.263477   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:23.263505   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.263524   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.263671   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHPort
	I1204 20:09:23.263841   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:23.263988   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHUsername
	I1204 20:09:23.263998   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:23.264025   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.264114   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02/id_rsa Username:docker}
	I1204 20:09:23.264181   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHPort
	I1204 20:09:23.264329   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:23.264459   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHUsername
	I1204 20:09:23.264614   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02/id_rsa Username:docker}
	I1204 20:09:23.488607   27912 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 20:09:23.493980   27912 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 20:09:23.494034   27912 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 20:09:23.509548   27912 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 20:09:23.509575   27912 start.go:495] detecting cgroup driver to use...
	I1204 20:09:23.509645   27912 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 20:09:23.525800   27912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 20:09:23.539440   27912 docker.go:217] disabling cri-docker service (if available) ...
	I1204 20:09:23.539502   27912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 20:09:23.552521   27912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 20:09:23.565606   27912 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 20:09:23.684851   27912 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 20:09:23.845149   27912 docker.go:233] disabling docker service ...
	I1204 20:09:23.845231   27912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 20:09:23.859120   27912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 20:09:23.871561   27912 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 20:09:23.987397   27912 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 20:09:24.126711   27912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 20:09:24.141506   27912 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 20:09:24.159151   27912 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 20:09:24.159228   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:09:24.170226   27912 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 20:09:24.170291   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:09:24.182530   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:09:24.192731   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:09:24.202617   27912 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 20:09:24.213736   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:09:24.224231   27912 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:09:24.240767   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
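	(The sed commands above are how the new node's cri-o is pointed at the registry.k8s.io/pause:3.10 pause image, the cgroupfs cgroup manager, and the unprivileged-port sysctl: each edit is one command run over SSH. Below is a minimal sketch of issuing such a remote edit with golang.org/x/crypto/ssh; the helper name, key path, and address are illustrative assumptions, not minikube's actual ssh_runner.)

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// runRemote opens an SSH session to the node and runs a single command,
	// roughly what the ssh_runner.go lines in the log are doing.
	func runRemote(addr string, cfg *ssh.ClientConfig, cmd string) error {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return err
		}
		defer client.Close()

		session, err := client.NewSession()
		if err != nil {
			return err
		}
		defer session.Close()

		session.Stdout = os.Stdout
		session.Stderr = os.Stderr
		return session.Run(cmd)
	}

	func main() {
		key, err := os.ReadFile("/path/to/id_rsa") // placeholder key path
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
		}
		// Same style of edit as the log: rewrite the pause_image line in 02-crio.conf.
		cmd := `sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`
		if err := runRemote("192.168.39.216:22", cfg, cmd); err != nil {
			log.Fatal(err)
		}
		fmt.Println("cri-o pause image updated")
	}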
	I1204 20:09:24.251003   27912 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 20:09:24.260142   27912 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 20:09:24.260204   27912 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 20:09:24.272434   27912 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 20:09:24.282354   27912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 20:09:24.398398   27912 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 20:09:24.487789   27912 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 20:09:24.487861   27912 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 20:09:24.492488   27912 start.go:563] Will wait 60s for crictl version
	I1204 20:09:24.492560   27912 ssh_runner.go:195] Run: which crictl
	I1204 20:09:24.496257   27912 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 20:09:24.535274   27912 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 20:09:24.535361   27912 ssh_runner.go:195] Run: crio --version
	I1204 20:09:24.562604   27912 ssh_runner.go:195] Run: crio --version
	I1204 20:09:24.590689   27912 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 20:09:24.591986   27912 out.go:177]   - env NO_PROXY=192.168.39.183
	I1204 20:09:24.593151   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetIP
	I1204 20:09:24.595599   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:24.595887   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:24.595916   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:24.596077   27912 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1204 20:09:24.600001   27912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 20:09:24.611463   27912 mustload.go:65] Loading cluster: ha-739930
	I1204 20:09:24.611643   27912 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:09:24.611877   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:09:24.611903   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:09:24.627049   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34019
	I1204 20:09:24.627459   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:09:24.627903   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:09:24.627928   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:09:24.628257   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:09:24.628473   27912 main.go:141] libmachine: (ha-739930) Calling .GetState
	I1204 20:09:24.629895   27912 host.go:66] Checking if "ha-739930" exists ...
	I1204 20:09:24.630233   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:09:24.630265   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:09:24.644758   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46383
	I1204 20:09:24.645209   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:09:24.645667   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:09:24.645685   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:09:24.645969   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:09:24.646125   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:09:24.646291   27912 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930 for IP: 192.168.39.216
	I1204 20:09:24.646303   27912 certs.go:194] generating shared ca certs ...
	I1204 20:09:24.646316   27912 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:09:24.646428   27912 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 20:09:24.646465   27912 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 20:09:24.646474   27912 certs.go:256] generating profile certs ...
	I1204 20:09:24.646544   27912 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.key
	I1204 20:09:24.646568   27912 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.5b3a3f8e
	I1204 20:09:24.646583   27912 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.5b3a3f8e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.183 192.168.39.216 192.168.39.254]
	I1204 20:09:24.766401   27912 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.5b3a3f8e ...
	I1204 20:09:24.766431   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.5b3a3f8e: {Name:mkc714ddc3cd4c136e7a763dd7561d567af3f099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:09:24.766597   27912 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.5b3a3f8e ...
	I1204 20:09:24.766610   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.5b3a3f8e: {Name:mk0a2c7e9c0190313579e96374b5ec6b927ba043 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:09:24.766678   27912 certs.go:381] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.5b3a3f8e -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt
	I1204 20:09:24.766802   27912 certs.go:385] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.5b3a3f8e -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key
	I1204 20:09:24.766921   27912 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key
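	(The "generating signed profile cert" step above issues an API-server serving certificate whose subject alternative names include the service IP, loopback, both control-plane node IPs, and the HA virtual IP 192.168.39.254. A self-contained sketch of that kind of issuance with Go's crypto/x509 follows; the in-memory CA is a simplification, since minikube actually reuses the CA key pair stored under .minikube/.)

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA so the sketch is self-contained.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		caCert, err := x509.ParseCertificate(caDER)
		if err != nil {
			log.Fatal(err)
		}

		// Serving cert with the IP SANs printed in the log above.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
				net.ParseIP("192.168.39.183"), net.ParseIP("192.168.39.216"), net.ParseIP("192.168.39.254"),
			},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}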
	I1204 20:09:24.766936   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1204 20:09:24.766949   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1204 20:09:24.766968   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1204 20:09:24.766979   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1204 20:09:24.766989   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1204 20:09:24.767002   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1204 20:09:24.767010   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1204 20:09:24.767022   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1204 20:09:24.767067   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 20:09:24.767093   27912 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 20:09:24.767102   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 20:09:24.767122   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 20:09:24.767144   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 20:09:24.767164   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 20:09:24.767200   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 20:09:24.767225   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:09:24.767238   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem -> /usr/share/ca-certificates/17743.pem
	I1204 20:09:24.767250   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> /usr/share/ca-certificates/177432.pem
	I1204 20:09:24.767278   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:09:24.770180   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:09:24.770542   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:09:24.770570   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:09:24.770712   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:09:24.770891   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:09:24.771044   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:09:24.771172   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:09:24.847687   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1204 20:09:24.853685   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1204 20:09:24.865057   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1204 20:09:24.869198   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1204 20:09:24.885878   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1204 20:09:24.889805   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1204 20:09:24.902654   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1204 20:09:24.906786   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1204 20:09:24.918187   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1204 20:09:24.922192   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1204 20:09:24.934730   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1204 20:09:24.938712   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1204 20:09:24.950279   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 20:09:24.974079   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 20:09:24.996598   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 20:09:25.018605   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 20:09:25.040436   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1204 20:09:25.062496   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1204 20:09:25.083915   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 20:09:25.105243   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1204 20:09:25.126515   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 20:09:25.148104   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 20:09:25.169580   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 20:09:25.190929   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1204 20:09:25.206338   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1204 20:09:25.221317   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1204 20:09:25.236210   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1204 20:09:25.251125   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1204 20:09:25.266383   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1204 20:09:25.281338   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1204 20:09:25.296542   27912 ssh_runner.go:195] Run: openssl version
	I1204 20:09:25.302513   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 20:09:25.313596   27912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:09:25.317903   27912 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:09:25.317952   27912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:09:25.323324   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 20:09:25.334576   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 20:09:25.344350   27912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 20:09:25.348476   27912 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 20:09:25.348531   27912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 20:09:25.353851   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 20:09:25.364310   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 20:09:25.375701   27912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 20:09:25.379775   27912 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 20:09:25.379825   27912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 20:09:25.385241   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 20:09:25.395365   27912 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 20:09:25.399560   27912 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1204 20:09:25.399615   27912 kubeadm.go:934] updating node {m02 192.168.39.216 8443 v1.31.2 crio true true} ...
	I1204 20:09:25.399711   27912 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-739930-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.216
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 20:09:25.399742   27912 kube-vip.go:115] generating kube-vip config ...
	I1204 20:09:25.399777   27912 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1204 20:09:25.415868   27912 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1204 20:09:25.415924   27912 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1204 20:09:25.415967   27912 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 20:09:25.424465   27912 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1204 20:09:25.424517   27912 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1204 20:09:25.433122   27912 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1204 20:09:25.433145   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1204 20:09:25.433195   27912 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1204 20:09:25.433218   27912 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1204 20:09:25.433242   27912 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1204 20:09:25.437081   27912 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1204 20:09:25.437107   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1204 20:09:26.186226   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1204 20:09:26.186313   27912 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1204 20:09:26.190746   27912 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1204 20:09:26.190822   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1204 20:09:26.419618   27912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 20:09:26.443488   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1204 20:09:26.443611   27912 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1204 20:09:26.450947   27912 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1204 20:09:26.450982   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1204 20:09:26.739349   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1204 20:09:26.748265   27912 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1204 20:09:26.764007   27912 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 20:09:26.780904   27912 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1204 20:09:26.797527   27912 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1204 20:09:26.801091   27912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 20:09:26.811509   27912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 20:09:26.923723   27912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 20:09:26.939490   27912 host.go:66] Checking if "ha-739930" exists ...
	I1204 20:09:26.939813   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:09:26.939861   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:09:26.954842   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37991
	I1204 20:09:26.955355   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:09:26.955871   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:09:26.955897   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:09:26.956236   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:09:26.956453   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:09:26.956610   27912 start.go:317] joinCluster: &{Name:ha-739930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 20:09:26.956705   27912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1204 20:09:26.956726   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:09:26.959547   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:09:26.959914   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:09:26.959939   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:09:26.960071   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:09:26.960221   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:09:26.960358   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:09:26.960492   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:09:27.110244   27912 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 20:09:27.110295   27912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pq1xgw.4e78amhhenl1jnyw --discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-739930-m02 --control-plane --apiserver-advertise-address=192.168.39.216 --apiserver-bind-port=8443"
	I1204 20:09:48.018604   27912 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pq1xgw.4e78amhhenl1jnyw --discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-739930-m02 --control-plane --apiserver-advertise-address=192.168.39.216 --apiserver-bind-port=8443": (20.908287309s)
	I1204 20:09:48.018634   27912 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1204 20:09:48.626365   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-739930-m02 minikube.k8s.io/updated_at=2024_12_04T20_09_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59 minikube.k8s.io/name=ha-739930 minikube.k8s.io/primary=false
	I1204 20:09:48.747614   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-739930-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1204 20:09:48.847766   27912 start.go:319] duration metric: took 21.891152638s to joinCluster
	I1204 20:09:48.847828   27912 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 20:09:48.848176   27912 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:09:48.849095   27912 out.go:177] * Verifying Kubernetes components...
	I1204 20:09:48.850328   27912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 20:09:49.112006   27912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 20:09:49.157177   27912 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 20:09:49.157538   27912 kapi.go:59] client config for ha-739930: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.crt", KeyFile:"/home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.key", CAFile:"/home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1204 20:09:49.157630   27912 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.183:8443
	I1204 20:09:49.157883   27912 node_ready.go:35] waiting up to 6m0s for node "ha-739930-m02" to be "Ready" ...
	I1204 20:09:49.158009   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:49.158021   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:49.158035   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:49.158045   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:49.168058   27912 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1204 20:09:49.658898   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:49.658922   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:49.658932   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:49.658943   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:49.667464   27912 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1204 20:09:50.158380   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:50.158399   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:50.158413   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:50.158419   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:50.171364   27912 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1204 20:09:50.658199   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:50.658226   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:50.658233   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:50.658237   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:50.663401   27912 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 20:09:51.159112   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:51.159137   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:51.159148   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:51.159156   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:51.162480   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:51.163075   27912 node_ready.go:53] node "ha-739930-m02" has status "Ready":"False"
	I1204 20:09:51.658265   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:51.658294   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:51.658304   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:51.658310   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:51.661298   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:09:52.158591   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:52.158614   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:52.158623   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:52.158627   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:52.161933   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:52.658479   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:52.658500   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:52.658508   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:52.658513   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:52.661537   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:53.158361   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:53.158384   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:53.158394   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:53.158402   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:53.161578   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:53.658404   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:53.658425   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:53.658433   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:53.658437   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:53.661364   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:09:53.662003   27912 node_ready.go:53] node "ha-739930-m02" has status "Ready":"False"
	I1204 20:09:54.158610   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:54.158635   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:54.158645   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:54.158651   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:54.162217   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:54.658074   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:54.658094   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:54.658102   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:54.658106   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:54.661918   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:55.158589   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:55.158611   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:55.158619   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:55.158624   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:55.161786   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:55.658906   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:55.658929   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:55.658937   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:55.658941   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:55.662357   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:55.663184   27912 node_ready.go:53] node "ha-739930-m02" has status "Ready":"False"
	I1204 20:09:56.158490   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:56.158517   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:56.158528   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:56.158533   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:56.258326   27912 round_trippers.go:574] Response Status: 200 OK in 99 milliseconds
	I1204 20:09:56.658232   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:56.658254   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:56.658264   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:56.658270   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:56.661245   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:09:57.158358   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:57.158380   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:57.158388   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:57.158392   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:57.162043   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:57.658188   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:57.658212   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:57.658223   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:57.658232   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:57.661717   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:58.158679   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:58.158701   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:58.158708   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:58.158713   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:58.162634   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:58.163161   27912 node_ready.go:53] node "ha-739930-m02" has status "Ready":"False"
	I1204 20:09:58.658856   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:58.658882   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:58.658900   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:58.658907   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:58.662596   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:59.158835   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:59.158862   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:59.158873   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:59.158880   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:59.162669   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:59.658183   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:59.658215   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:59.658226   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:59.658231   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:59.661879   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:00.158851   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:00.158875   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:00.158883   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:00.158888   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:00.162790   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:00.163321   27912 node_ready.go:53] node "ha-739930-m02" has status "Ready":"False"
	I1204 20:10:00.658562   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:00.658590   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:00.658601   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:00.658607   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:00.676721   27912 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I1204 20:10:01.159007   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:01.159027   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:01.159035   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:01.159038   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:01.162909   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:01.658124   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:01.658161   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:01.658184   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:01.658188   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:01.662301   27912 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 20:10:02.158692   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:02.158716   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:02.158727   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:02.158732   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:02.162067   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:02.659042   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:02.659064   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:02.659071   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:02.659075   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:02.661911   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:10:02.662581   27912 node_ready.go:53] node "ha-739930-m02" has status "Ready":"False"
	I1204 20:10:03.159115   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:03.159145   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:03.159158   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:03.159165   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:03.162607   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:03.658246   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:03.658270   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:03.658278   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:03.658282   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:03.661511   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:04.158942   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:04.158970   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:04.158979   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:04.158983   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:04.161958   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:10:04.658955   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:04.658979   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:04.658987   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:04.658991   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:04.662295   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:04.662958   27912 node_ready.go:53] node "ha-739930-m02" has status "Ready":"False"
	I1204 20:10:05.158173   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:05.158194   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:05.158203   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:05.158207   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:05.161194   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:10:05.658134   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:05.658157   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:05.658165   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:05.658168   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:05.661616   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:06.158855   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:06.158879   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:06.158887   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:06.158891   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:06.164708   27912 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 20:10:06.658461   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:06.658483   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:06.658491   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:06.658496   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:06.661810   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:07.158647   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:07.158674   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:07.158686   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:07.158690   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:07.161793   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:07.162345   27912 node_ready.go:53] node "ha-739930-m02" has status "Ready":"False"
	I1204 20:10:07.658727   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:07.658752   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:07.658760   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:07.658764   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:07.661982   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:08.158999   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:08.159025   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.159037   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.159043   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.162388   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:08.162849   27912 node_ready.go:49] node "ha-739930-m02" has status "Ready":"True"
	I1204 20:10:08.162868   27912 node_ready.go:38] duration metric: took 19.004941155s for node "ha-739930-m02" to be "Ready" ...
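	(The long run of GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02 requests above is a readiness poll: roughly every 500ms the node object is fetched until its Ready condition reports True, which here took about 19s. A minimal equivalent with client-go is sketched below; the kubeconfig path and the exact interval/timeout are assumptions for illustration, not minikube's internals.)

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder kubeconfig path; the test uses the profile's kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		// Poll every 500ms for up to 6 minutes until the node's Ready condition is True,
		// mirroring the repeated node GETs in the log.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, "ha-739930-m02", metav1.GetOptions{})
				if err != nil {
					return false, nil // keep retrying on transient errors
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("node is Ready")
	}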
	I1204 20:10:08.162878   27912 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 20:10:08.162968   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1204 20:10:08.162977   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.162984   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.162987   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.167331   27912 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 20:10:08.173856   27912 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7kbgr" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.173935   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-7kbgr
	I1204 20:10:08.173944   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.173953   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.173958   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.176715   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:10:08.177374   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:10:08.177387   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.177395   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.177400   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.179818   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:10:08.180446   27912 pod_ready.go:93] pod "coredns-7c65d6cfc9-7kbgr" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:08.180466   27912 pod_ready.go:82] duration metric: took 6.589083ms for pod "coredns-7c65d6cfc9-7kbgr" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.180478   27912 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8kztf" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.180546   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-8kztf
	I1204 20:10:08.180556   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.180569   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.180577   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.183177   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:10:08.183821   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:10:08.183836   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.183842   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.183847   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.186093   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:10:08.186600   27912 pod_ready.go:93] pod "coredns-7c65d6cfc9-8kztf" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:08.186617   27912 pod_ready.go:82] duration metric: took 6.131706ms for pod "coredns-7c65d6cfc9-8kztf" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.186628   27912 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.186691   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-739930
	I1204 20:10:08.186703   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.186713   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.186721   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.188940   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:10:08.189382   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:10:08.189398   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.189414   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.189420   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.191367   27912 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 20:10:08.191803   27912 pod_ready.go:93] pod "etcd-ha-739930" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:08.191818   27912 pod_ready.go:82] duration metric: took 5.18298ms for pod "etcd-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.191825   27912 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.191870   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-739930-m02
	I1204 20:10:08.191877   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.191884   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.191887   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.193844   27912 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 20:10:08.194287   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:08.194299   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.194306   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.194310   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.196400   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:10:08.196781   27912 pod_ready.go:93] pod "etcd-ha-739930-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:08.196797   27912 pod_ready.go:82] duration metric: took 4.966669ms for pod "etcd-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.196810   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.359125   27912 request.go:632] Waited for 162.263796ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-739930
	I1204 20:10:08.359211   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-739930
	I1204 20:10:08.359219   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.359230   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.359237   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.362569   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:08.559438   27912 request.go:632] Waited for 196.306856ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:10:08.559514   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:10:08.559519   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.559526   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.559534   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.562128   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:10:08.562664   27912 pod_ready.go:93] pod "kube-apiserver-ha-739930" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:08.562679   27912 pod_ready.go:82] duration metric: took 365.86397ms for pod "kube-apiserver-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.562689   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.759755   27912 request.go:632] Waited for 197.00165ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-739930-m02
	I1204 20:10:08.759821   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-739930-m02
	I1204 20:10:08.759826   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.759834   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.759837   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.763106   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:08.959132   27912 request.go:632] Waited for 195.283542ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:08.959199   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:08.959204   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.959212   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.959216   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.962369   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:08.962948   27912 pod_ready.go:93] pod "kube-apiserver-ha-739930-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:08.962965   27912 pod_ready.go:82] duration metric: took 400.270135ms for pod "kube-apiserver-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.962974   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:09.159437   27912 request.go:632] Waited for 196.391636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-739930
	I1204 20:10:09.159487   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-739930
	I1204 20:10:09.159492   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:09.159502   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:09.159507   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:09.162708   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:09.359960   27912 request.go:632] Waited for 196.36752ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:10:09.360010   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:10:09.360014   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:09.360022   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:09.360026   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:09.362729   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:10:09.363473   27912 pod_ready.go:93] pod "kube-controller-manager-ha-739930" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:09.363492   27912 pod_ready.go:82] duration metric: took 400.512945ms for pod "kube-controller-manager-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:09.363502   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:09.559607   27912 request.go:632] Waited for 196.045629ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-739930-m02
	I1204 20:10:09.559663   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-739930-m02
	I1204 20:10:09.559668   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:09.559676   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:09.559683   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:09.563302   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:09.759860   27912 request.go:632] Waited for 195.862174ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:09.759930   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:09.759935   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:09.759943   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:09.759949   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:09.762988   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:09.763689   27912 pod_ready.go:93] pod "kube-controller-manager-ha-739930-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:09.763715   27912 pod_ready.go:82] duration metric: took 400.20496ms for pod "kube-controller-manager-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:09.763729   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gtw7d" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:09.959738   27912 request.go:632] Waited for 195.93307ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gtw7d
	I1204 20:10:09.959807   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gtw7d
	I1204 20:10:09.959812   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:09.959819   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:09.959824   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:09.963156   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:10.159198   27912 request.go:632] Waited for 195.305905ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:10.159270   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:10.159275   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:10.159283   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:10.159286   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:10.162529   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:10.163056   27912 pod_ready.go:93] pod "kube-proxy-gtw7d" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:10.163074   27912 pod_ready.go:82] duration metric: took 399.337655ms for pod "kube-proxy-gtw7d" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:10.163084   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tlhfv" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:10.359093   27912 request.go:632] Waited for 195.949947ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tlhfv
	I1204 20:10:10.359150   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tlhfv
	I1204 20:10:10.359172   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:10.359182   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:10.359192   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:10.362392   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:10.559558   27912 request.go:632] Waited for 196.399776ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:10:10.559639   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:10:10.559653   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:10.559664   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:10.559670   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:10.564370   27912 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 20:10:10.564877   27912 pod_ready.go:93] pod "kube-proxy-tlhfv" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:10.564896   27912 pod_ready.go:82] duration metric: took 401.805669ms for pod "kube-proxy-tlhfv" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:10.564906   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:10.759943   27912 request.go:632] Waited for 194.973279ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-739930
	I1204 20:10:10.760006   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-739930
	I1204 20:10:10.760013   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:10.760021   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:10.760027   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:10.763726   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:10.959656   27912 request.go:632] Waited for 195.375986ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:10:10.959714   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:10:10.959719   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:10.959726   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:10.959731   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:10.963524   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:10.964360   27912 pod_ready.go:93] pod "kube-scheduler-ha-739930" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:10.964375   27912 pod_ready.go:82] duration metric: took 399.464088ms for pod "kube-scheduler-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:10.964389   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:11.159456   27912 request.go:632] Waited for 194.987845ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-739930-m02
	I1204 20:10:11.159527   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-739930-m02
	I1204 20:10:11.159532   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:11.159539   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:11.159543   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:11.163395   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:11.359362   27912 request.go:632] Waited for 195.347282ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:11.359439   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:11.359446   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:11.359458   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:11.359467   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:11.362635   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:11.363122   27912 pod_ready.go:93] pod "kube-scheduler-ha-739930-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:11.363138   27912 pod_ready.go:82] duration metric: took 398.74121ms for pod "kube-scheduler-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:11.363148   27912 pod_ready.go:39] duration metric: took 3.200239096s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 20:10:11.363164   27912 api_server.go:52] waiting for apiserver process to appear ...
	I1204 20:10:11.363207   27912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 20:10:11.377015   27912 api_server.go:72] duration metric: took 22.529160197s to wait for apiserver process to appear ...
	I1204 20:10:11.377034   27912 api_server.go:88] waiting for apiserver healthz status ...
	I1204 20:10:11.377052   27912 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I1204 20:10:11.380929   27912 api_server.go:279] https://192.168.39.183:8443/healthz returned 200:
	ok
	I1204 20:10:11.380976   27912 round_trippers.go:463] GET https://192.168.39.183:8443/version
	I1204 20:10:11.380983   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:11.380999   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:11.381003   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:11.381838   27912 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1204 20:10:11.381917   27912 api_server.go:141] control plane version: v1.31.2
	I1204 20:10:11.381931   27912 api_server.go:131] duration metric: took 4.890825ms to wait for apiserver health ...
	I1204 20:10:11.381937   27912 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 20:10:11.559327   27912 request.go:632] Waited for 177.330525ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1204 20:10:11.559453   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1204 20:10:11.559495   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:11.559519   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:11.559528   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:11.566679   27912 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1204 20:10:11.572558   27912 system_pods.go:59] 17 kube-system pods found
	I1204 20:10:11.572586   27912 system_pods.go:61] "coredns-7c65d6cfc9-7kbgr" [662019c2-29e8-4437-8b14-f9fbf1268d03] Running
	I1204 20:10:11.572592   27912 system_pods.go:61] "coredns-7c65d6cfc9-8kztf" [40363110-9dbd-47ae-8aec-70630543d005] Running
	I1204 20:10:11.572597   27912 system_pods.go:61] "etcd-ha-739930" [35305e9d-e464-498a-b2a7-6008dcaaf04c] Running
	I1204 20:10:11.572600   27912 system_pods.go:61] "etcd-ha-739930-m02" [b870f77d-f65a-4d00-b8da-27bf2f696d35] Running
	I1204 20:10:11.572604   27912 system_pods.go:61] "kindnet-8wsgw" [d8bc54cd-d100-43fa-bda8-28ee9b58b947] Running
	I1204 20:10:11.572607   27912 system_pods.go:61] "kindnet-z6v65" [233b2af5-60f4-4f70-a63f-f7238cfbc55c] Running
	I1204 20:10:11.572612   27912 system_pods.go:61] "kube-apiserver-ha-739930" [d1943e08-b292-4551-bcc7-a14adc4ec336] Running
	I1204 20:10:11.572617   27912 system_pods.go:61] "kube-apiserver-ha-739930-m02" [b05a68fa-e419-43b6-ae14-08dd1635b446] Running
	I1204 20:10:11.572623   27912 system_pods.go:61] "kube-controller-manager-ha-739930" [3db9ec12-4c55-4a78-bef1-4f4cf8f38ae0] Running
	I1204 20:10:11.572628   27912 system_pods.go:61] "kube-controller-manager-ha-739930-m02" [01426d54-9156-4288-b9ae-c639167795b4] Running
	I1204 20:10:11.572635   27912 system_pods.go:61] "kube-proxy-gtw7d" [4481a753-5064-41a6-8f2c-d4710b8ad7bb] Running
	I1204 20:10:11.572641   27912 system_pods.go:61] "kube-proxy-tlhfv" [2f01e7f6-5af2-490b-8a2c-266e1701c102] Running
	I1204 20:10:11.572646   27912 system_pods.go:61] "kube-scheduler-ha-739930" [cc1e6978-7082-494a-afce-e754a35e9b76] Running
	I1204 20:10:11.572651   27912 system_pods.go:61] "kube-scheduler-ha-739930-m02" [cd7d0a65-99e9-4377-9088-f2d7d7165982] Running
	I1204 20:10:11.572655   27912 system_pods.go:61] "kube-vip-ha-739930" [524e54ee-5407-44c3-a2e4-d029f7e6a003] Running
	I1204 20:10:11.572658   27912 system_pods.go:61] "kube-vip-ha-739930-m02" [77595bf0-7e49-4ead-98b0-e1cc5b8533d7] Running
	I1204 20:10:11.572661   27912 system_pods.go:61] "storage-provisioner" [84dfb457-b91f-4070-aa2a-9fbe4c6dd7c8] Running
	I1204 20:10:11.572670   27912 system_pods.go:74] duration metric: took 190.727819ms to wait for pod list to return data ...
	I1204 20:10:11.572678   27912 default_sa.go:34] waiting for default service account to be created ...
	I1204 20:10:11.759027   27912 request.go:632] Waited for 186.27116ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/default/serviceaccounts
	I1204 20:10:11.759095   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/default/serviceaccounts
	I1204 20:10:11.759100   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:11.759108   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:11.759113   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:11.763664   27912 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 20:10:11.763867   27912 default_sa.go:45] found service account: "default"
	I1204 20:10:11.763882   27912 default_sa.go:55] duration metric: took 191.195892ms for default service account to be created ...
	I1204 20:10:11.763890   27912 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 20:10:11.959431   27912 request.go:632] Waited for 195.47766ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1204 20:10:11.959540   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1204 20:10:11.959553   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:11.959560   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:11.959566   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:11.965051   27912 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 20:10:11.970022   27912 system_pods.go:86] 17 kube-system pods found
	I1204 20:10:11.970046   27912 system_pods.go:89] "coredns-7c65d6cfc9-7kbgr" [662019c2-29e8-4437-8b14-f9fbf1268d03] Running
	I1204 20:10:11.970051   27912 system_pods.go:89] "coredns-7c65d6cfc9-8kztf" [40363110-9dbd-47ae-8aec-70630543d005] Running
	I1204 20:10:11.970055   27912 system_pods.go:89] "etcd-ha-739930" [35305e9d-e464-498a-b2a7-6008dcaaf04c] Running
	I1204 20:10:11.970059   27912 system_pods.go:89] "etcd-ha-739930-m02" [b870f77d-f65a-4d00-b8da-27bf2f696d35] Running
	I1204 20:10:11.970067   27912 system_pods.go:89] "kindnet-8wsgw" [d8bc54cd-d100-43fa-bda8-28ee9b58b947] Running
	I1204 20:10:11.970071   27912 system_pods.go:89] "kindnet-z6v65" [233b2af5-60f4-4f70-a63f-f7238cfbc55c] Running
	I1204 20:10:11.970074   27912 system_pods.go:89] "kube-apiserver-ha-739930" [d1943e08-b292-4551-bcc7-a14adc4ec336] Running
	I1204 20:10:11.970078   27912 system_pods.go:89] "kube-apiserver-ha-739930-m02" [b05a68fa-e419-43b6-ae14-08dd1635b446] Running
	I1204 20:10:11.970082   27912 system_pods.go:89] "kube-controller-manager-ha-739930" [3db9ec12-4c55-4a78-bef1-4f4cf8f38ae0] Running
	I1204 20:10:11.970088   27912 system_pods.go:89] "kube-controller-manager-ha-739930-m02" [01426d54-9156-4288-b9ae-c639167795b4] Running
	I1204 20:10:11.970091   27912 system_pods.go:89] "kube-proxy-gtw7d" [4481a753-5064-41a6-8f2c-d4710b8ad7bb] Running
	I1204 20:10:11.970095   27912 system_pods.go:89] "kube-proxy-tlhfv" [2f01e7f6-5af2-490b-8a2c-266e1701c102] Running
	I1204 20:10:11.970098   27912 system_pods.go:89] "kube-scheduler-ha-739930" [cc1e6978-7082-494a-afce-e754a35e9b76] Running
	I1204 20:10:11.970100   27912 system_pods.go:89] "kube-scheduler-ha-739930-m02" [cd7d0a65-99e9-4377-9088-f2d7d7165982] Running
	I1204 20:10:11.970103   27912 system_pods.go:89] "kube-vip-ha-739930" [524e54ee-5407-44c3-a2e4-d029f7e6a003] Running
	I1204 20:10:11.970106   27912 system_pods.go:89] "kube-vip-ha-739930-m02" [77595bf0-7e49-4ead-98b0-e1cc5b8533d7] Running
	I1204 20:10:11.970114   27912 system_pods.go:89] "storage-provisioner" [84dfb457-b91f-4070-aa2a-9fbe4c6dd7c8] Running
	I1204 20:10:11.970124   27912 system_pods.go:126] duration metric: took 206.228874ms to wait for k8s-apps to be running ...
	I1204 20:10:11.970130   27912 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 20:10:11.970170   27912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 20:10:11.984252   27912 system_svc.go:56] duration metric: took 14.113655ms WaitForService to wait for kubelet
	I1204 20:10:11.984285   27912 kubeadm.go:582] duration metric: took 23.13642897s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 20:10:11.984305   27912 node_conditions.go:102] verifying NodePressure condition ...
	I1204 20:10:12.159992   27912 request.go:632] Waited for 175.622844ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes
	I1204 20:10:12.160074   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes
	I1204 20:10:12.160081   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:12.160088   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:12.160092   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:12.163352   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:12.164036   27912 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 20:10:12.164057   27912 node_conditions.go:123] node cpu capacity is 2
	I1204 20:10:12.164070   27912 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 20:10:12.164075   27912 node_conditions.go:123] node cpu capacity is 2
	I1204 20:10:12.164081   27912 node_conditions.go:105] duration metric: took 179.770433ms to run NodePressure ...
	I1204 20:10:12.164096   27912 start.go:241] waiting for startup goroutines ...
	I1204 20:10:12.164129   27912 start.go:255] writing updated cluster config ...
	I1204 20:10:12.166221   27912 out.go:201] 
	I1204 20:10:12.167682   27912 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:10:12.167793   27912 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/config.json ...
	I1204 20:10:12.169433   27912 out.go:177] * Starting "ha-739930-m03" control-plane node in "ha-739930" cluster
	I1204 20:10:12.170619   27912 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 20:10:12.170641   27912 cache.go:56] Caching tarball of preloaded images
	I1204 20:10:12.170743   27912 preload.go:172] Found /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 20:10:12.170758   27912 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 20:10:12.170867   27912 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/config.json ...
	I1204 20:10:12.171047   27912 start.go:360] acquireMachinesLock for ha-739930-m03: {Name:mkf124e8b45170ae95981b24944344de6899c5b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 20:10:12.171095   27912 start.go:364] duration metric: took 28.989µs to acquireMachinesLock for "ha-739930-m03"
	I1204 20:10:12.171119   27912 start.go:93] Provisioning new machine with config: &{Name:ha-739930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 20:10:12.171232   27912 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1204 20:10:12.172689   27912 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 20:10:12.172776   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:10:12.172819   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:10:12.188562   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34093
	I1204 20:10:12.189008   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:10:12.189520   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:10:12.189541   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:10:12.189894   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:10:12.190074   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetMachineName
	I1204 20:10:12.190188   27912 main.go:141] libmachine: (ha-739930-m03) Calling .DriverName
	I1204 20:10:12.190394   27912 start.go:159] libmachine.API.Create for "ha-739930" (driver="kvm2")
	I1204 20:10:12.190426   27912 client.go:168] LocalClient.Create starting
	I1204 20:10:12.190471   27912 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem
	I1204 20:10:12.190508   27912 main.go:141] libmachine: Decoding PEM data...
	I1204 20:10:12.190530   27912 main.go:141] libmachine: Parsing certificate...
	I1204 20:10:12.190598   27912 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem
	I1204 20:10:12.190629   27912 main.go:141] libmachine: Decoding PEM data...
	I1204 20:10:12.190652   27912 main.go:141] libmachine: Parsing certificate...
	I1204 20:10:12.190679   27912 main.go:141] libmachine: Running pre-create checks...
	I1204 20:10:12.190691   27912 main.go:141] libmachine: (ha-739930-m03) Calling .PreCreateCheck
	I1204 20:10:12.190909   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetConfigRaw
	I1204 20:10:12.191309   27912 main.go:141] libmachine: Creating machine...
	I1204 20:10:12.191322   27912 main.go:141] libmachine: (ha-739930-m03) Calling .Create
	I1204 20:10:12.191476   27912 main.go:141] libmachine: (ha-739930-m03) Creating KVM machine...
	I1204 20:10:12.192652   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found existing default KVM network
	I1204 20:10:12.192779   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found existing private KVM network mk-ha-739930
	I1204 20:10:12.192908   27912 main.go:141] libmachine: (ha-739930-m03) Setting up store path in /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03 ...
	I1204 20:10:12.192934   27912 main.go:141] libmachine: (ha-739930-m03) Building disk image from file:///home/jenkins/minikube-integration/19985-10581/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1204 20:10:12.192988   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:12.192887   28697 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 20:10:12.193089   27912 main.go:141] libmachine: (ha-739930-m03) Downloading /home/jenkins/minikube-integration/19985-10581/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19985-10581/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1204 20:10:12.422847   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:12.422708   28697 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03/id_rsa...
	I1204 20:10:12.571024   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:12.570898   28697 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03/ha-739930-m03.rawdisk...
	I1204 20:10:12.571065   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Writing magic tar header
	I1204 20:10:12.571083   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Writing SSH key tar header
	I1204 20:10:12.571096   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:12.571045   28697 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03 ...
	I1204 20:10:12.571246   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03
	I1204 20:10:12.571291   27912 main.go:141] libmachine: (ha-739930-m03) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03 (perms=drwx------)
	I1204 20:10:12.571302   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube/machines
	I1204 20:10:12.571314   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 20:10:12.571323   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581
	I1204 20:10:12.571331   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1204 20:10:12.571339   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Checking permissions on dir: /home/jenkins
	I1204 20:10:12.571346   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Checking permissions on dir: /home
	I1204 20:10:12.571354   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Skipping /home - not owner
	I1204 20:10:12.571391   27912 main.go:141] libmachine: (ha-739930-m03) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube/machines (perms=drwxr-xr-x)
	I1204 20:10:12.571415   27912 main.go:141] libmachine: (ha-739930-m03) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube (perms=drwxr-xr-x)
	I1204 20:10:12.571432   27912 main.go:141] libmachine: (ha-739930-m03) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581 (perms=drwxrwxr-x)
	I1204 20:10:12.571447   27912 main.go:141] libmachine: (ha-739930-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1204 20:10:12.571458   27912 main.go:141] libmachine: (ha-739930-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1204 20:10:12.571477   27912 main.go:141] libmachine: (ha-739930-m03) Creating domain...
	I1204 20:10:12.572409   27912 main.go:141] libmachine: (ha-739930-m03) define libvirt domain using xml: 
	I1204 20:10:12.572438   27912 main.go:141] libmachine: (ha-739930-m03) <domain type='kvm'>
	I1204 20:10:12.572449   27912 main.go:141] libmachine: (ha-739930-m03)   <name>ha-739930-m03</name>
	I1204 20:10:12.572461   27912 main.go:141] libmachine: (ha-739930-m03)   <memory unit='MiB'>2200</memory>
	I1204 20:10:12.572474   27912 main.go:141] libmachine: (ha-739930-m03)   <vcpu>2</vcpu>
	I1204 20:10:12.572480   27912 main.go:141] libmachine: (ha-739930-m03)   <features>
	I1204 20:10:12.572490   27912 main.go:141] libmachine: (ha-739930-m03)     <acpi/>
	I1204 20:10:12.572496   27912 main.go:141] libmachine: (ha-739930-m03)     <apic/>
	I1204 20:10:12.572505   27912 main.go:141] libmachine: (ha-739930-m03)     <pae/>
	I1204 20:10:12.572511   27912 main.go:141] libmachine: (ha-739930-m03)     
	I1204 20:10:12.572522   27912 main.go:141] libmachine: (ha-739930-m03)   </features>
	I1204 20:10:12.572529   27912 main.go:141] libmachine: (ha-739930-m03)   <cpu mode='host-passthrough'>
	I1204 20:10:12.572539   27912 main.go:141] libmachine: (ha-739930-m03)   
	I1204 20:10:12.572549   27912 main.go:141] libmachine: (ha-739930-m03)   </cpu>
	I1204 20:10:12.572577   27912 main.go:141] libmachine: (ha-739930-m03)   <os>
	I1204 20:10:12.572599   27912 main.go:141] libmachine: (ha-739930-m03)     <type>hvm</type>
	I1204 20:10:12.572612   27912 main.go:141] libmachine: (ha-739930-m03)     <boot dev='cdrom'/>
	I1204 20:10:12.572622   27912 main.go:141] libmachine: (ha-739930-m03)     <boot dev='hd'/>
	I1204 20:10:12.572630   27912 main.go:141] libmachine: (ha-739930-m03)     <bootmenu enable='no'/>
	I1204 20:10:12.572640   27912 main.go:141] libmachine: (ha-739930-m03)   </os>
	I1204 20:10:12.572648   27912 main.go:141] libmachine: (ha-739930-m03)   <devices>
	I1204 20:10:12.572659   27912 main.go:141] libmachine: (ha-739930-m03)     <disk type='file' device='cdrom'>
	I1204 20:10:12.572673   27912 main.go:141] libmachine: (ha-739930-m03)       <source file='/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03/boot2docker.iso'/>
	I1204 20:10:12.572688   27912 main.go:141] libmachine: (ha-739930-m03)       <target dev='hdc' bus='scsi'/>
	I1204 20:10:12.572708   27912 main.go:141] libmachine: (ha-739930-m03)       <readonly/>
	I1204 20:10:12.572721   27912 main.go:141] libmachine: (ha-739930-m03)     </disk>
	I1204 20:10:12.572747   27912 main.go:141] libmachine: (ha-739930-m03)     <disk type='file' device='disk'>
	I1204 20:10:12.572758   27912 main.go:141] libmachine: (ha-739930-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1204 20:10:12.572766   27912 main.go:141] libmachine: (ha-739930-m03)       <source file='/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03/ha-739930-m03.rawdisk'/>
	I1204 20:10:12.572780   27912 main.go:141] libmachine: (ha-739930-m03)       <target dev='hda' bus='virtio'/>
	I1204 20:10:12.572788   27912 main.go:141] libmachine: (ha-739930-m03)     </disk>
	I1204 20:10:12.572792   27912 main.go:141] libmachine: (ha-739930-m03)     <interface type='network'>
	I1204 20:10:12.572798   27912 main.go:141] libmachine: (ha-739930-m03)       <source network='mk-ha-739930'/>
	I1204 20:10:12.572802   27912 main.go:141] libmachine: (ha-739930-m03)       <model type='virtio'/>
	I1204 20:10:12.572807   27912 main.go:141] libmachine: (ha-739930-m03)     </interface>
	I1204 20:10:12.572814   27912 main.go:141] libmachine: (ha-739930-m03)     <interface type='network'>
	I1204 20:10:12.572819   27912 main.go:141] libmachine: (ha-739930-m03)       <source network='default'/>
	I1204 20:10:12.572825   27912 main.go:141] libmachine: (ha-739930-m03)       <model type='virtio'/>
	I1204 20:10:12.572842   27912 main.go:141] libmachine: (ha-739930-m03)     </interface>
	I1204 20:10:12.572860   27912 main.go:141] libmachine: (ha-739930-m03)     <serial type='pty'>
	I1204 20:10:12.572872   27912 main.go:141] libmachine: (ha-739930-m03)       <target port='0'/>
	I1204 20:10:12.572883   27912 main.go:141] libmachine: (ha-739930-m03)     </serial>
	I1204 20:10:12.572904   27912 main.go:141] libmachine: (ha-739930-m03)     <console type='pty'>
	I1204 20:10:12.572914   27912 main.go:141] libmachine: (ha-739930-m03)       <target type='serial' port='0'/>
	I1204 20:10:12.572922   27912 main.go:141] libmachine: (ha-739930-m03)     </console>
	I1204 20:10:12.572932   27912 main.go:141] libmachine: (ha-739930-m03)     <rng model='virtio'>
	I1204 20:10:12.572945   27912 main.go:141] libmachine: (ha-739930-m03)       <backend model='random'>/dev/random</backend>
	I1204 20:10:12.572957   27912 main.go:141] libmachine: (ha-739930-m03)     </rng>
	I1204 20:10:12.572965   27912 main.go:141] libmachine: (ha-739930-m03)     
	I1204 20:10:12.572973   27912 main.go:141] libmachine: (ha-739930-m03)     
	I1204 20:10:12.572983   27912 main.go:141] libmachine: (ha-739930-m03)   </devices>
	I1204 20:10:12.572991   27912 main.go:141] libmachine: (ha-739930-m03) </domain>
	I1204 20:10:12.572996   27912 main.go:141] libmachine: (ha-739930-m03) 
	I1204 20:10:12.580033   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:71:b7:c8 in network default
	I1204 20:10:12.580713   27912 main.go:141] libmachine: (ha-739930-m03) Ensuring networks are active...
	I1204 20:10:12.580737   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:12.581680   27912 main.go:141] libmachine: (ha-739930-m03) Ensuring network default is active
	I1204 20:10:12.582031   27912 main.go:141] libmachine: (ha-739930-m03) Ensuring network mk-ha-739930 is active
	I1204 20:10:12.582464   27912 main.go:141] libmachine: (ha-739930-m03) Getting domain xml...
	I1204 20:10:12.583287   27912 main.go:141] libmachine: (ha-739930-m03) Creating domain...
	I1204 20:10:13.809969   27912 main.go:141] libmachine: (ha-739930-m03) Waiting to get IP...
	I1204 20:10:13.810804   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:13.811158   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:13.811215   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:13.811149   28697 retry.go:31] will retry after 211.474142ms: waiting for machine to come up
	I1204 20:10:14.024550   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:14.024996   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:14.025024   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:14.024958   28697 retry.go:31] will retry after 355.071975ms: waiting for machine to come up
	I1204 20:10:14.381391   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:14.381825   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:14.381857   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:14.381781   28697 retry.go:31] will retry after 319.974042ms: waiting for machine to come up
	I1204 20:10:14.703466   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:14.703910   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:14.703951   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:14.703877   28697 retry.go:31] will retry after 609.562735ms: waiting for machine to come up
	I1204 20:10:15.314561   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:15.315069   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:15.315101   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:15.315013   28697 retry.go:31] will retry after 486.973077ms: waiting for machine to come up
	I1204 20:10:15.803653   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:15.804185   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:15.804213   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:15.804126   28697 retry.go:31] will retry after 675.766149ms: waiting for machine to come up
	I1204 20:10:16.481967   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:16.482459   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:16.482489   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:16.482406   28697 retry.go:31] will retry after 1.174103834s: waiting for machine to come up
	I1204 20:10:17.658189   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:17.658580   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:17.658608   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:17.658533   28697 retry.go:31] will retry after 1.454065165s: waiting for machine to come up
	I1204 20:10:19.114276   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:19.114810   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:19.114839   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:19.114726   28697 retry.go:31] will retry after 1.181631433s: waiting for machine to come up
	I1204 20:10:20.297423   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:20.297826   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:20.297856   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:20.297775   28697 retry.go:31] will retry after 1.797113318s: waiting for machine to come up
	I1204 20:10:22.096493   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:22.096936   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:22.096963   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:22.096891   28697 retry.go:31] will retry after 2.640330643s: waiting for machine to come up
	I1204 20:10:24.740014   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:24.740549   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:24.740589   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:24.740509   28697 retry.go:31] will retry after 3.427854139s: waiting for machine to come up
	I1204 20:10:28.170039   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:28.170450   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:28.170480   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:28.170413   28697 retry.go:31] will retry after 3.100818386s: waiting for machine to come up
	I1204 20:10:31.273778   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:31.274339   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:31.274370   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:31.274261   28697 retry.go:31] will retry after 5.17411421s: waiting for machine to come up
	I1204 20:10:36.453055   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:36.453514   27912 main.go:141] libmachine: (ha-739930-m03) Found IP for machine: 192.168.39.176
	I1204 20:10:36.453546   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has current primary IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:36.453554   27912 main.go:141] libmachine: (ha-739930-m03) Reserving static IP address...
	I1204 20:10:36.453982   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find host DHCP lease matching {name: "ha-739930-m03", mac: "52:54:00:8f:55:42", ip: "192.168.39.176"} in network mk-ha-739930
	I1204 20:10:36.527779   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Getting to WaitForSSH function...
	I1204 20:10:36.527812   27912 main.go:141] libmachine: (ha-739930-m03) Reserved static IP address: 192.168.39.176
	I1204 20:10:36.527825   27912 main.go:141] libmachine: (ha-739930-m03) Waiting for SSH to be available...
	I1204 20:10:36.530460   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:36.530890   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:36.530918   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:36.531105   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Using SSH client type: external
	I1204 20:10:36.531134   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03/id_rsa (-rw-------)
	I1204 20:10:36.531171   27912 main.go:141] libmachine: (ha-739930-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.176 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 20:10:36.531193   27912 main.go:141] libmachine: (ha-739930-m03) DBG | About to run SSH command:
	I1204 20:10:36.531210   27912 main.go:141] libmachine: (ha-739930-m03) DBG | exit 0
	I1204 20:10:36.659229   27912 main.go:141] libmachine: (ha-739930-m03) DBG | SSH cmd err, output: <nil>: 
	I1204 20:10:36.659536   27912 main.go:141] libmachine: (ha-739930-m03) KVM machine creation complete!
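The retry lines and the `exit 0` SSH probe above are minikube polling libvirt for a DHCP lease and then knocking on SSH until the guest answers. Below is a minimal Go sketch of that wait-with-backoff shape; the function, delay values and jitter are illustrative assumptions, not minikube's actual retry.go or SSH probing code.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor retries fn with a growing, jittered delay until it succeeds or the timeout expires,
// mirroring the "will retry after ...: waiting for machine to come up" loop in the log.
func waitFor(fn func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 500 * time.Millisecond
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s: %w", timeout, err)
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay))) // jitter, like the uneven delays above
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
}

func main() {
	attempts := 0
	err := waitFor(func() error {
		attempts++
		if attempts < 4 { // stand-in for "unable to find current IP address" / a failed SSH probe
			return errors.New("machine not up yet")
		}
		return nil
	}, 2*time.Minute)
	fmt.Println("wait finished:", err)
}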
	I1204 20:10:36.659863   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetConfigRaw
	I1204 20:10:36.660403   27912 main.go:141] libmachine: (ha-739930-m03) Calling .DriverName
	I1204 20:10:36.660622   27912 main.go:141] libmachine: (ha-739930-m03) Calling .DriverName
	I1204 20:10:36.660802   27912 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1204 20:10:36.660816   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetState
	I1204 20:10:36.662148   27912 main.go:141] libmachine: Detecting operating system of created instance...
	I1204 20:10:36.662160   27912 main.go:141] libmachine: Waiting for SSH to be available...
	I1204 20:10:36.662181   27912 main.go:141] libmachine: Getting to WaitForSSH function...
	I1204 20:10:36.662187   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHHostname
	I1204 20:10:36.664336   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:36.664681   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:36.664694   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:36.664829   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHPort
	I1204 20:10:36.664988   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:36.665140   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:36.665284   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHUsername
	I1204 20:10:36.665446   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:10:36.665639   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1204 20:10:36.665651   27912 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1204 20:10:36.774558   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 20:10:36.774575   27912 main.go:141] libmachine: Detecting the provisioner...
	I1204 20:10:36.774582   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHHostname
	I1204 20:10:36.777253   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:36.777655   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:36.777682   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:36.777862   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHPort
	I1204 20:10:36.778048   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:36.778224   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:36.778333   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHUsername
	I1204 20:10:36.778478   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:10:36.778662   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1204 20:10:36.778673   27912 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1204 20:10:36.891601   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1204 20:10:36.891668   27912 main.go:141] libmachine: found compatible host: buildroot
	I1204 20:10:36.891681   27912 main.go:141] libmachine: Provisioning with buildroot...
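Provisioner detection above is simply `cat /etc/os-release` plus a match on the ID field ("found compatible host: buildroot"). A small Go sketch of that parsing follows, fed the exact output logged above; the matching rule is an illustration, not libmachine's real provisioner registry.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns KEY=VALUE lines from /etc/os-release into a map, stripping quotes.
func parseOSRelease(out string) map[string]string {
	kv := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || !strings.Contains(line, "=") {
			continue
		}
		parts := strings.SplitN(line, "=", 2)
		kv[parts[0]] = strings.Trim(parts[1], `"`)
	}
	return kv
}

func main() {
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(out)
	if info["ID"] == "buildroot" {
		fmt.Println("found compatible host:", info["ID"])
	}
}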
	I1204 20:10:36.891691   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetMachineName
	I1204 20:10:36.891891   27912 buildroot.go:166] provisioning hostname "ha-739930-m03"
	I1204 20:10:36.891918   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetMachineName
	I1204 20:10:36.892100   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHHostname
	I1204 20:10:36.894477   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:36.894866   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:36.894903   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:36.895026   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHPort
	I1204 20:10:36.895181   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:36.895327   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:36.895457   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHUsername
	I1204 20:10:36.895582   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:10:36.895780   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1204 20:10:36.895798   27912 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-739930-m03 && echo "ha-739930-m03" | sudo tee /etc/hostname
	I1204 20:10:37.022149   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-739930-m03
	
	I1204 20:10:37.022188   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHHostname
	I1204 20:10:37.024859   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.025302   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:37.025324   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.025555   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHPort
	I1204 20:10:37.025739   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:37.025923   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:37.026044   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHUsername
	I1204 20:10:37.026196   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:10:37.026355   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1204 20:10:37.026371   27912 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-739930-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-739930-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-739930-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 20:10:37.143730   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 20:10:37.143754   27912 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 20:10:37.143777   27912 buildroot.go:174] setting up certificates
	I1204 20:10:37.143788   27912 provision.go:84] configureAuth start
	I1204 20:10:37.143795   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetMachineName
	I1204 20:10:37.144053   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetIP
	I1204 20:10:37.146742   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.147064   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:37.147095   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.147234   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHHostname
	I1204 20:10:37.149352   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.149692   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:37.149719   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.149832   27912 provision.go:143] copyHostCerts
	I1204 20:10:37.149875   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 20:10:37.149914   27912 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 20:10:37.149926   27912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 20:10:37.150010   27912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 20:10:37.150120   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 20:10:37.150164   27912 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 20:10:37.150175   27912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 20:10:37.150216   27912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 20:10:37.150301   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 20:10:37.150325   27912 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 20:10:37.150331   27912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 20:10:37.150367   27912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 20:10:37.150468   27912 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.ha-739930-m03 san=[127.0.0.1 192.168.39.176 ha-739930-m03 localhost minikube]
	I1204 20:10:37.504595   27912 provision.go:177] copyRemoteCerts
	I1204 20:10:37.504652   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 20:10:37.504676   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHHostname
	I1204 20:10:37.507572   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.507995   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:37.508023   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.508251   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHPort
	I1204 20:10:37.508469   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:37.508628   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHUsername
	I1204 20:10:37.508752   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03/id_rsa Username:docker}
	I1204 20:10:37.592737   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1204 20:10:37.592815   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 20:10:37.614702   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1204 20:10:37.614759   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1204 20:10:37.636793   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1204 20:10:37.636856   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1204 20:10:37.657514   27912 provision.go:87] duration metric: took 513.715697ms to configureAuth
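configureAuth above refreshes the host-side CA and client certs and then mints a server certificate whose SANs are the list logged at provision.go:117 (127.0.0.1, 192.168.39.176, ha-739930-m03, localhost, minikube). The sketch below is a self-contained Go example of producing a CA-signed certificate with that SAN set; key sizes, lifetimes and subject fields are assumptions for illustration, not minikube's certs code, and error handling is elided for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Mint a throwaway CA in memory (stand-in for the cached minikubeCA key pair).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert for the new node: SANs match the list in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-739930-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.176")},
		DNSNames:     []string{"ha-739930-m03", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
}

The per-machine server cert is regenerated for each node because its SAN list has to include that node's own IP and hostname before it is copied to /etc/docker/server.pem, as the copyRemoteCerts lines above show.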
	I1204 20:10:37.657537   27912 buildroot.go:189] setting minikube options for container-runtime
	I1204 20:10:37.657776   27912 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:10:37.657846   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHHostname
	I1204 20:10:37.660375   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.660716   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:37.660743   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.660915   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHPort
	I1204 20:10:37.661101   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:37.661283   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:37.661394   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHUsername
	I1204 20:10:37.661530   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:10:37.661715   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1204 20:10:37.661731   27912 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 20:10:37.909620   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 20:10:37.909653   27912 main.go:141] libmachine: Checking connection to Docker...
	I1204 20:10:37.909661   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetURL
	I1204 20:10:37.911012   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Using libvirt version 6000000
	I1204 20:10:37.913430   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.913836   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:37.913865   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.913996   27912 main.go:141] libmachine: Docker is up and running!
	I1204 20:10:37.914009   27912 main.go:141] libmachine: Reticulating splines...
	I1204 20:10:37.914014   27912 client.go:171] duration metric: took 25.723578899s to LocalClient.Create
	I1204 20:10:37.914034   27912 start.go:167] duration metric: took 25.723643031s to libmachine.API.Create "ha-739930"
	I1204 20:10:37.914045   27912 start.go:293] postStartSetup for "ha-739930-m03" (driver="kvm2")
	I1204 20:10:37.914058   27912 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 20:10:37.914082   27912 main.go:141] libmachine: (ha-739930-m03) Calling .DriverName
	I1204 20:10:37.914308   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 20:10:37.914329   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHHostname
	I1204 20:10:37.916698   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.917013   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:37.917037   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.917163   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHPort
	I1204 20:10:37.917355   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:37.917507   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHUsername
	I1204 20:10:37.917647   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03/id_rsa Username:docker}
	I1204 20:10:38.000720   27912 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 20:10:38.004659   27912 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 20:10:38.004677   27912 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 20:10:38.004732   27912 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 20:10:38.004797   27912 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 20:10:38.004805   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> /etc/ssl/certs/177432.pem
	I1204 20:10:38.004881   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 20:10:38.014138   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 20:10:38.035007   27912 start.go:296] duration metric: took 120.952939ms for postStartSetup
	I1204 20:10:38.035043   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetConfigRaw
	I1204 20:10:38.035625   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetIP
	I1204 20:10:38.038045   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:38.038404   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:38.038431   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:38.038707   27912 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/config.json ...
	I1204 20:10:38.038928   27912 start.go:128] duration metric: took 25.86768393s to createHost
	I1204 20:10:38.038955   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHHostname
	I1204 20:10:38.040921   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:38.041241   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:38.041260   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:38.041384   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHPort
	I1204 20:10:38.041567   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:38.041725   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:38.041870   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHUsername
	I1204 20:10:38.042033   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:10:38.042234   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1204 20:10:38.042247   27912 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 20:10:38.147467   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733343038.125898138
	
	I1204 20:10:38.147487   27912 fix.go:216] guest clock: 1733343038.125898138
	I1204 20:10:38.147494   27912 fix.go:229] Guest: 2024-12-04 20:10:38.125898138 +0000 UTC Remote: 2024-12-04 20:10:38.038942767 +0000 UTC m=+146.136314147 (delta=86.955371ms)
	I1204 20:10:38.147507   27912 fix.go:200] guest clock delta is within tolerance: 86.955371ms
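The clock check above runs `date +%s.%N` on the guest and compares the result with the host clock, accepting the ~87ms delta as within tolerance. A Go sketch of that comparison, parsing the same output format, follows; the one-second tolerance used here is an assumption for illustration.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock parses "seconds.nanoseconds" as printed by `date +%s.%N`.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, _ = strconv.ParseInt(parts[1], 10, 64) // %N is always nine digits of nanoseconds
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1733343038.125898138\n")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %s (within tolerance: %v)\n", delta, delta < time.Second)
}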
	I1204 20:10:38.147511   27912 start.go:83] releasing machines lock for "ha-739930-m03", held for 25.976405222s
	I1204 20:10:38.147527   27912 main.go:141] libmachine: (ha-739930-m03) Calling .DriverName
	I1204 20:10:38.147758   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetIP
	I1204 20:10:38.150388   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:38.150780   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:38.150809   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:38.153038   27912 out.go:177] * Found network options:
	I1204 20:10:38.154623   27912 out.go:177]   - NO_PROXY=192.168.39.183,192.168.39.216
	W1204 20:10:38.155949   27912 proxy.go:119] fail to check proxy env: Error ip not in block
	W1204 20:10:38.155970   27912 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 20:10:38.155981   27912 main.go:141] libmachine: (ha-739930-m03) Calling .DriverName
	I1204 20:10:38.156494   27912 main.go:141] libmachine: (ha-739930-m03) Calling .DriverName
	I1204 20:10:38.156668   27912 main.go:141] libmachine: (ha-739930-m03) Calling .DriverName
	I1204 20:10:38.156762   27912 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 20:10:38.156817   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHHostname
	W1204 20:10:38.156874   27912 proxy.go:119] fail to check proxy env: Error ip not in block
	W1204 20:10:38.156896   27912 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 20:10:38.156981   27912 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 20:10:38.157003   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHHostname
	I1204 20:10:38.159414   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:38.159669   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:38.159823   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:38.159847   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:38.159966   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHPort
	I1204 20:10:38.160094   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:38.160122   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:38.160127   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:38.160279   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHPort
	I1204 20:10:38.160293   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHUsername
	I1204 20:10:38.160410   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03/id_rsa Username:docker}
	I1204 20:10:38.160424   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:38.160525   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHUsername
	I1204 20:10:38.160650   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03/id_rsa Username:docker}
	I1204 20:10:38.394150   27912 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 20:10:38.401145   27912 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 20:10:38.401209   27912 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 20:10:38.417195   27912 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 20:10:38.417223   27912 start.go:495] detecting cgroup driver to use...
	I1204 20:10:38.417296   27912 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 20:10:38.435131   27912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 20:10:38.448563   27912 docker.go:217] disabling cri-docker service (if available) ...
	I1204 20:10:38.448618   27912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 20:10:38.461725   27912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 20:10:38.474727   27912 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 20:10:38.588798   27912 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 20:10:38.745587   27912 docker.go:233] disabling docker service ...
	I1204 20:10:38.745653   27912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 20:10:38.759235   27912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 20:10:38.771608   27912 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 20:10:38.877832   27912 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 20:10:38.982502   27912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 20:10:38.995491   27912 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 20:10:39.012043   27912 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 20:10:39.012100   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:10:39.021299   27912 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 20:10:39.021358   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:10:39.030541   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:10:39.039631   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:10:39.048551   27912 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 20:10:39.058773   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:10:39.068061   27912 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:10:39.083733   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
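The sequence of sed commands above rewrites individual keys in /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, default sysctls) before crio is restarted. A Go sketch of the same kind of in-place "key = value" rewrite follows; the helper and its error handling are illustrative, not minikube's provisioning code, and it needs root to touch the file.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setTOMLKey replaces any existing `key = ...` line with `key = "value"`,
// the same effect as the logged `sed -i 's|^.*cgroup_manager = .*$|...|'` calls.
func setTOMLKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	for k, v := range map[string]string{
		"pause_image":    "registry.k8s.io/pause:3.10",
		"cgroup_manager": "cgroupfs",
	} {
		if err := setTOMLKey(conf, k, v); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}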
	I1204 20:10:39.092600   27912 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 20:10:39.101297   27912 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 20:10:39.101340   27912 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 20:10:39.113156   27912 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 20:10:39.122303   27912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 20:10:39.227598   27912 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 20:10:39.312250   27912 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 20:10:39.312323   27912 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 20:10:39.316600   27912 start.go:563] Will wait 60s for crictl version
	I1204 20:10:39.316650   27912 ssh_runner.go:195] Run: which crictl
	I1204 20:10:39.320258   27912 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 20:10:39.357732   27912 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 20:10:39.357795   27912 ssh_runner.go:195] Run: crio --version
	I1204 20:10:39.390225   27912 ssh_runner.go:195] Run: crio --version
	I1204 20:10:39.419008   27912 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 20:10:39.420400   27912 out.go:177]   - env NO_PROXY=192.168.39.183
	I1204 20:10:39.421790   27912 out.go:177]   - env NO_PROXY=192.168.39.183,192.168.39.216
	I1204 20:10:39.423169   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetIP
	I1204 20:10:39.425979   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:39.426437   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:39.426466   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:39.426672   27912 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1204 20:10:39.431086   27912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 20:10:39.443488   27912 mustload.go:65] Loading cluster: ha-739930
	I1204 20:10:39.443719   27912 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:10:39.443987   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:10:39.444059   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:10:39.459062   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36859
	I1204 20:10:39.459454   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:10:39.459962   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:10:39.459982   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:10:39.460287   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:10:39.460468   27912 main.go:141] libmachine: (ha-739930) Calling .GetState
	I1204 20:10:39.462100   27912 host.go:66] Checking if "ha-739930" exists ...
	I1204 20:10:39.462434   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:10:39.462472   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:10:39.476580   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34581
	I1204 20:10:39.476947   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:10:39.477280   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:10:39.477302   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:10:39.477596   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:10:39.477759   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:10:39.477901   27912 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930 for IP: 192.168.39.176
	I1204 20:10:39.477913   27912 certs.go:194] generating shared ca certs ...
	I1204 20:10:39.477926   27912 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:10:39.478032   27912 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 20:10:39.478067   27912 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 20:10:39.478076   27912 certs.go:256] generating profile certs ...
	I1204 20:10:39.478140   27912 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.key
	I1204 20:10:39.478162   27912 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.58072db8
	I1204 20:10:39.478183   27912 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.58072db8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.183 192.168.39.216 192.168.39.176 192.168.39.254]
	I1204 20:10:39.647686   27912 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.58072db8 ...
	I1204 20:10:39.647712   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.58072db8: {Name:mka45902bb26beb0e72f217dc87741ab3309d928 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:10:39.647887   27912 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.58072db8 ...
	I1204 20:10:39.647910   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.58072db8: {Name:mk0280d80935ba52cb98acc5d6236d25a3a3095d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:10:39.648008   27912 certs.go:381] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.58072db8 -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt
	I1204 20:10:39.648187   27912 certs.go:385] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.58072db8 -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key
	I1204 20:10:39.648361   27912 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key
	I1204 20:10:39.648383   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1204 20:10:39.648403   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1204 20:10:39.648422   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1204 20:10:39.648440   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1204 20:10:39.648458   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1204 20:10:39.648475   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1204 20:10:39.648493   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1204 20:10:39.663476   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1204 20:10:39.663545   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 20:10:39.663584   27912 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 20:10:39.663595   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 20:10:39.663616   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 20:10:39.663649   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 20:10:39.663681   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 20:10:39.663737   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 20:10:39.663769   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:10:39.663786   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem -> /usr/share/ca-certificates/17743.pem
	I1204 20:10:39.663805   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> /usr/share/ca-certificates/177432.pem
	I1204 20:10:39.663843   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:10:39.666431   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:10:39.666764   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:10:39.666781   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:10:39.666946   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:10:39.667122   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:10:39.667283   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:10:39.667442   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:10:39.739814   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1204 20:10:39.744522   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1204 20:10:39.755922   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1204 20:10:39.759927   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1204 20:10:39.770702   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1204 20:10:39.775183   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1204 20:10:39.787784   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1204 20:10:39.792674   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1204 20:10:39.805368   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1204 20:10:39.809503   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1204 20:10:39.828088   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1204 20:10:39.832824   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1204 20:10:39.844859   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 20:10:39.869334   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 20:10:39.893785   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 20:10:39.916818   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 20:10:39.939176   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1204 20:10:39.961163   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 20:10:39.983006   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 20:10:40.005681   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1204 20:10:40.028546   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 20:10:40.051809   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 20:10:40.074413   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 20:10:40.097808   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1204 20:10:40.113924   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1204 20:10:40.131147   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1204 20:10:40.149216   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1204 20:10:40.166655   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1204 20:10:40.182489   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1204 20:10:40.200001   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1204 20:10:40.221223   27912 ssh_runner.go:195] Run: openssl version
	I1204 20:10:40.226405   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 20:10:40.235863   27912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:10:40.239603   27912 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:10:40.239672   27912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:10:40.245186   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 20:10:40.256188   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 20:10:40.266724   27912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 20:10:40.271086   27912 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 20:10:40.271119   27912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 20:10:40.276304   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 20:10:40.286222   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 20:10:40.297060   27912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 20:10:40.301192   27912 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 20:10:40.301236   27912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 20:10:40.307282   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
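Each CA file above is copied under /usr/share/ca-certificates and then symlinked as /etc/ssl/certs/<subject-hash>.0 so OpenSSL-based clients can discover it. The Go sketch below mirrors those two logged commands (openssl x509 -hash, then ln -fs); the paths are illustrative and the program needs root to write under /etc/ssl/certs.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA asks openssl for the subject hash of a CA PEM and links it as /etc/ssl/certs/<hash>.0.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // ignore error; the symlink is recreated below
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}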
	I1204 20:10:40.317487   27912 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 20:10:40.320982   27912 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1204 20:10:40.321045   27912 kubeadm.go:934] updating node {m03 192.168.39.176 8443 v1.31.2 crio true true} ...
	I1204 20:10:40.321144   27912 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-739930-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.176
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 20:10:40.321175   27912 kube-vip.go:115] generating kube-vip config ...
	I1204 20:10:40.321208   27912 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1204 20:10:40.335360   27912 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1204 20:10:40.335431   27912 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
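
For context: the manifest above is what minikube writes to /etc/kubernetes/manifests/kube-vip.yaml on each control-plane node, so the kubelet runs kube-vip as a static pod that advertises the HA virtual IP (192.168.39.254 here) over ARP on eth0, uses leader election so only one node holds the address at a time, and load-balances the API servers on port 8443 via lb_enable/lb_port. A minimal Go sketch of rendering a manifest like this with text/template follows; the struct fields and the trimmed template are illustrative, not minikube's actual kube-vip.go generator.

package main

import (
	"os"
	"text/template"
)

// vipConfig holds the handful of values that differ between clusters.
// Field names are illustrative; minikube's real generator lives in kube-vip.go.
type vipConfig struct {
	VIP       string // virtual IP advertised by kube-vip, e.g. 192.168.39.254
	Interface string // host NIC the VIP is bound to, e.g. eth0
	Port      string // API server port fronted by the VIP, e.g. 8443
	Image     string // kube-vip image tag
}

// manifestTmpl is a trimmed-down stand-in for the static pod manifest above.
const manifestTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{ .Image }}
    args: ["manager"]
    env:
    - name: vip_arp
      value: "true"
    - name: address
      value: {{ .VIP }}
    - name: vip_interface
      value: {{ .Interface }}
    - name: port
      value: "{{ .Port }}"
    - name: cp_enable
      value: "true"
  hostNetwork: true
`

func main() {
	cfg := vipConfig{
		VIP:       "192.168.39.254",
		Interface: "eth0",
		Port:      "8443",
		Image:     "ghcr.io/kube-vip/kube-vip:v0.8.6",
	}
	// Render to stdout; minikube instead copies the result to
	// /etc/kubernetes/manifests/kube-vip.yaml on the control-plane node.
	tmpl := template.Must(template.New("kube-vip").Parse(manifestTmpl))
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
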
	I1204 20:10:40.335468   27912 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 20:10:40.344356   27912 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1204 20:10:40.344387   27912 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1204 20:10:40.352481   27912 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1204 20:10:40.352490   27912 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1204 20:10:40.352500   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1204 20:10:40.352520   27912 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1204 20:10:40.352529   27912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 20:10:40.352538   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1204 20:10:40.352555   27912 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1204 20:10:40.352614   27912 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1204 20:10:40.357211   27912 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1204 20:10:40.357232   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1204 20:10:40.373861   27912 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1204 20:10:40.373888   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1204 20:10:40.393917   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1204 20:10:40.394019   27912 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1204 20:10:40.435438   27912 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1204 20:10:40.435480   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
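
The three transfers above all follow the same pattern: stat the target under /var/lib/minikube/binaries/v1.31.2/, and only copy the cached binary across when the existence check exits non-zero. A hedged Go sketch of that check-then-copy decision is below, run against the local filesystem for simplicity (it does not reproduce minikube's ssh_runner plumbing); ensureBinary and the cache path are illustrative names.

package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// ensureBinary copies src to dst only when dst is missing, mirroring the
// "existence check ... Process exited with status 1" -> scp sequence above.
// The function name and paths are illustrative.
func ensureBinary(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		return nil // already present: skip the copy
	} else if !os.IsNotExist(err) {
		return err // a real stat failure, not just "missing"
	}
	if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
		return err
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	cache := "/home/jenkins/.minikube/cache/linux/amd64/v1.31.2" // illustrative cache dir
	target := "/var/lib/minikube/binaries/v1.31.2"
	for _, bin := range []string{"kubeadm", "kubectl", "kubelet"} {
		if err := ensureBinary(filepath.Join(cache, bin), filepath.Join(target, bin)); err != nil {
			fmt.Fprintln(os.Stderr, "copy", bin, "failed:", err)
		}
	}
}
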
	I1204 20:10:41.204864   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1204 20:10:41.214084   27912 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1204 20:10:41.230130   27912 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 20:10:41.245590   27912 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1204 20:10:41.261184   27912 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1204 20:10:41.264917   27912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 20:10:41.276834   27912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 20:10:41.407860   27912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 20:10:41.425834   27912 host.go:66] Checking if "ha-739930" exists ...
	I1204 20:10:41.426358   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:10:41.426432   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:10:41.444259   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39271
	I1204 20:10:41.444841   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:10:41.445793   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:10:41.445819   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:10:41.446152   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:10:41.446372   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:10:41.446554   27912 start.go:317] joinCluster: &{Name:ha-739930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 20:10:41.446705   27912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1204 20:10:41.446730   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:10:41.449938   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:10:41.450354   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:10:41.450382   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:10:41.450525   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:10:41.450704   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:10:41.450893   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:10:41.451051   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:10:41.603198   27912 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 20:10:41.603245   27912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rsc6s7.pvvve9xxbfoucm3c --discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-739930-m03 --control-plane --apiserver-advertise-address=192.168.39.176 --apiserver-bind-port=8443"
	I1204 20:11:02.285051   27912 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rsc6s7.pvvve9xxbfoucm3c --discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-739930-m03 --control-plane --apiserver-advertise-address=192.168.39.176 --apiserver-bind-port=8443": (20.681780468s)
	I1204 20:11:02.285099   27912 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1204 20:11:02.929343   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-739930-m03 minikube.k8s.io/updated_at=2024_12_04T20_11_02_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59 minikube.k8s.io/name=ha-739930 minikube.k8s.io/primary=false
	I1204 20:11:03.053541   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-739930-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1204 20:11:03.177213   27912 start.go:319] duration metric: took 21.7306554s to joinCluster
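
The join itself is the standard two-step control-plane expansion: kubeadm token create --print-join-command --ttl=0 on an existing control-plane node produces a fresh token and CA cert hash, and the new node then runs kubeadm join against the shared endpoint control-plane.minikube.internal:8443 with --control-plane and its own advertise address, exactly as logged above. A small Go sketch that assembles that join command is below; buildJoinCommand is an illustrative helper and the token/hash placeholders are deliberately left unfilled.

package main

import (
	"fmt"
	"strings"
)

// buildJoinCommand assembles the flags minikube passes when joining an extra
// control-plane node, matching the invocation logged above. The helper name
// and placeholder values are illustrative.
func buildJoinCommand(endpoint, token, caHash, nodeName, advertiseIP string, port int) string {
	args := []string{
		"kubeadm", "join", endpoint,
		"--token", token,
		"--discovery-token-ca-cert-hash", caHash,
		"--ignore-preflight-errors=all",
		"--cri-socket", "unix:///var/run/crio/crio.sock",
		"--node-name", nodeName,
		"--control-plane",
		"--apiserver-advertise-address", advertiseIP,
		fmt.Sprintf("--apiserver-bind-port=%d", port),
	}
	return strings.Join(args, " ")
}

func main() {
	// Step 1 (on an existing control-plane node):
	//   kubeadm token create --print-join-command --ttl=0
	// prints the token and CA cert hash used below.
	// Step 2 (on the joining node): run the assembled command with sudo.
	fmt.Println(buildJoinCommand(
		"control-plane.minikube.internal:8443",
		"<token-from-step-1>",
		"sha256:<ca-cert-hash-from-step-1>",
		"ha-739930-m03",
		"192.168.39.176",
		8443,
	))
}
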
	I1204 20:11:03.177299   27912 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 20:11:03.177647   27912 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:11:03.178583   27912 out.go:177] * Verifying Kubernetes components...
	I1204 20:11:03.179869   27912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 20:11:03.436285   27912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 20:11:03.491544   27912 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 20:11:03.491892   27912 kapi.go:59] client config for ha-739930: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.crt", KeyFile:"/home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.key", CAFile:"/home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1204 20:11:03.491978   27912 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.183:8443
	I1204 20:11:03.492270   27912 node_ready.go:35] waiting up to 6m0s for node "ha-739930-m03" to be "Ready" ...
	I1204 20:11:03.492369   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:03.492380   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:03.492391   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:03.492400   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:03.496740   27912 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 20:11:03.992695   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:03.992717   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:03.992725   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:03.992729   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:03.996010   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:04.493230   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:04.493255   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:04.493265   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:04.493272   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:04.496716   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:04.992539   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:04.992561   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:04.992571   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:04.992577   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:04.995936   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:05.493273   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:05.493300   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:05.493311   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:05.493317   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:05.497413   27912 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 20:11:05.497897   27912 node_ready.go:53] node "ha-739930-m03" has status "Ready":"False"
	I1204 20:11:05.993362   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:05.993385   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:05.993392   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:05.993397   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:05.996675   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:06.492587   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:06.492610   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:06.492620   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:06.492627   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:06.495773   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:06.993310   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:06.993331   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:06.993339   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:06.993343   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:06.996864   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:07.492704   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:07.492741   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:07.492750   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:07.492754   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:07.496418   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:07.993375   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:07.993397   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:07.993404   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:07.993414   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:07.996601   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:07.997248   27912 node_ready.go:53] node "ha-739930-m03" has status "Ready":"False"
	I1204 20:11:08.492707   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:08.492739   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:08.492752   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:08.492757   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:08.498736   27912 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 20:11:08.992522   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:08.992546   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:08.992554   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:08.992559   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:08.996681   27912 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 20:11:09.492442   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:09.492462   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:09.492470   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:09.492475   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:09.496143   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:09.992900   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:09.992932   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:09.992939   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:09.992944   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:09.996453   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:10.492481   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:10.492499   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:10.492507   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:10.492513   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:10.496234   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:10.497174   27912 node_ready.go:53] node "ha-739930-m03" has status "Ready":"False"
	I1204 20:11:10.992502   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:10.992525   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:10.992532   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:10.992553   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:10.995639   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:11.493014   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:11.493034   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:11.493042   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:11.493045   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:11.496066   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:11.992460   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:11.992481   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:11.992488   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:11.992492   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:11.995782   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:12.492536   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:12.492559   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:12.492567   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:12.492575   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:12.496512   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:12.993486   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:12.993507   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:12.993515   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:12.993521   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:12.996929   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:12.997503   27912 node_ready.go:53] node "ha-739930-m03" has status "Ready":"False"
	I1204 20:11:13.492705   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:13.492728   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:13.492735   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:13.492739   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:13.495958   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:13.993195   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:13.993235   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:13.993243   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:13.993248   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:13.996458   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:14.492667   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:14.492687   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:14.492695   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:14.492700   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:14.496760   27912 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 20:11:14.992634   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:14.992657   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:14.992665   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:14.992668   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:14.996174   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:15.492623   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:15.492645   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:15.492651   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:15.492656   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:15.496189   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:15.496993   27912 node_ready.go:53] node "ha-739930-m03" has status "Ready":"False"
	I1204 20:11:15.993412   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:15.993432   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:15.993438   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:15.993442   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:15.996343   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:16.492477   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:16.492500   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:16.492508   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:16.492512   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:16.495796   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:16.993504   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:16.993533   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:16.993545   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:16.993552   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:16.996589   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:17.492614   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:17.492637   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:17.492649   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:17.492654   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:17.496032   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:17.992928   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:17.992951   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:17.992958   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:17.992961   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:17.996749   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:17.997385   27912 node_ready.go:53] node "ha-739930-m03" has status "Ready":"False"
	I1204 20:11:18.492596   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:18.492617   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:18.492625   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:18.492629   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:18.495562   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:18.992579   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:18.992604   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:18.992612   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:18.992616   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:18.996070   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:19.493093   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:19.493113   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:19.493121   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:19.493126   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:19.496694   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:19.992762   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:19.992788   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:19.992796   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:19.992802   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:19.996757   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:19.997645   27912 node_ready.go:53] node "ha-739930-m03" has status "Ready":"False"
	I1204 20:11:20.493018   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:20.493038   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:20.493045   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:20.493049   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:20.496165   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:20.993181   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:20.993203   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:20.993211   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:20.993214   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:20.996266   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:21.493006   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:21.493035   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.493044   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.493050   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.496694   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:21.497703   27912 node_ready.go:49] node "ha-739930-m03" has status "Ready":"True"
	I1204 20:11:21.497723   27912 node_ready.go:38] duration metric: took 18.005431822s for node "ha-739930-m03" to be "Ready" ...
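
The roughly 18-second wait above is a plain poll of GET /api/v1/nodes/ha-739930-m03 every half second until the node's Ready condition reports True. A hedged sketch of the same wait using client-go (rather than the raw round-trippers minikube logs here) might look like the following; the kubeconfig path, node name and timeout are illustrative.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node object until its Ready condition is True,
// roughly what node_ready.go does above with raw GETs against the API server.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	// Kubeconfig path, node name and timeout are illustrative.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(context.Background(), cs, "ha-739930-m03", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node ha-739930-m03 is Ready")
}
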
	I1204 20:11:21.497731   27912 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 20:11:21.497795   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1204 20:11:21.497804   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.497811   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.497815   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.504465   27912 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1204 20:11:21.510955   27912 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7kbgr" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:21.511029   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-7kbgr
	I1204 20:11:21.511038   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.511050   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.511058   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.514034   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:21.514600   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:11:21.514614   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.514622   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.514627   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.517241   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:21.517672   27912 pod_ready.go:93] pod "coredns-7c65d6cfc9-7kbgr" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:21.517688   27912 pod_ready.go:82] duration metric: took 6.709809ms for pod "coredns-7c65d6cfc9-7kbgr" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:21.517707   27912 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8kztf" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:21.517765   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-8kztf
	I1204 20:11:21.517772   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.517781   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.517791   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.520563   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:21.521278   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:11:21.521296   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.521307   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.521313   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.523869   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:21.524405   27912 pod_ready.go:93] pod "coredns-7c65d6cfc9-8kztf" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:21.524426   27912 pod_ready.go:82] duration metric: took 6.708809ms for pod "coredns-7c65d6cfc9-8kztf" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:21.524435   27912 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:21.524489   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-739930
	I1204 20:11:21.524498   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.524504   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.524510   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.526682   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:21.527365   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:11:21.527393   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.527401   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.527410   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.530023   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:21.530721   27912 pod_ready.go:93] pod "etcd-ha-739930" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:21.530744   27912 pod_ready.go:82] duration metric: took 6.30261ms for pod "etcd-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:21.530758   27912 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:21.530832   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-739930-m02
	I1204 20:11:21.530844   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.530856   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.530866   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.533485   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:21.534074   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:11:21.534089   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.534098   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.534104   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.536315   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:21.536771   27912 pod_ready.go:93] pod "etcd-ha-739930-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:21.536789   27912 pod_ready.go:82] duration metric: took 6.023339ms for pod "etcd-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:21.536798   27912 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-739930-m03" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:21.693086   27912 request.go:632] Waited for 156.229013ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-739930-m03
	I1204 20:11:21.693178   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-739930-m03
	I1204 20:11:21.693187   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.693199   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.693211   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.696805   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
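
The "Waited for ... due to client-side throttling" lines are client-go's default rate limiter at work: the rest.Config dumped earlier in this run leaves QPS and Burst at 0, so the client falls back to its built-in defaults (roughly 5 requests/sec with a small burst), and the back-to-back pod and node GETs queue up for a couple of hundred milliseconds each. A hedged sketch of raising those limits on a rest.Config is below; the kubeconfig path and the chosen values are illustrative, not what minikube uses.

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path and the chosen limits are illustrative.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	// With QPS and Burst left at zero (as in the rest.Config dumped earlier),
	// client-go applies its own conservative defaults, which is what produces
	// the "client-side throttling" waits in this log. Raising them avoids that.
	cfg.QPS = 50
	cfg.Burst = 100
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client ready (QPS=%v, Burst=%v): %T\n", cfg.QPS, cfg.Burst, cs)
}
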
	I1204 20:11:21.893066   27912 request.go:632] Waited for 195.292666ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:21.893122   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:21.893140   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.893148   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.893151   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.896289   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:21.896776   27912 pod_ready.go:93] pod "etcd-ha-739930-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:21.896798   27912 pod_ready.go:82] duration metric: took 359.993172ms for pod "etcd-ha-739930-m03" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:21.896822   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:22.094080   27912 request.go:632] Waited for 197.155628ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-739930
	I1204 20:11:22.094159   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-739930
	I1204 20:11:22.094178   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:22.094195   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:22.094201   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:22.097388   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:22.293809   27912 request.go:632] Waited for 194.988533ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:11:22.293864   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:11:22.293871   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:22.293881   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:22.293886   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:22.297036   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:22.297688   27912 pod_ready.go:93] pod "kube-apiserver-ha-739930" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:22.297708   27912 pod_ready.go:82] duration metric: took 400.873563ms for pod "kube-apiserver-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:22.297721   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:22.493772   27912 request.go:632] Waited for 195.970884ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-739930-m02
	I1204 20:11:22.493834   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-739930-m02
	I1204 20:11:22.493840   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:22.493847   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:22.493850   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:22.497525   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:22.693745   27912 request.go:632] Waited for 195.318737ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:11:22.693830   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:11:22.693837   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:22.693844   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:22.693849   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:22.697438   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:22.697941   27912 pod_ready.go:93] pod "kube-apiserver-ha-739930-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:22.697959   27912 pod_ready.go:82] duration metric: took 400.231011ms for pod "kube-apiserver-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:22.697969   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-739930-m03" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:22.894031   27912 request.go:632] Waited for 195.997225ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-739930-m03
	I1204 20:11:22.894100   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-739930-m03
	I1204 20:11:22.894105   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:22.894113   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:22.894119   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:22.896928   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:23.093056   27912 request.go:632] Waited for 195.290507ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:23.093109   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:23.093116   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:23.093125   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:23.093131   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:23.096071   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:23.096675   27912 pod_ready.go:93] pod "kube-apiserver-ha-739930-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:23.096695   27912 pod_ready.go:82] duration metric: took 398.72057ms for pod "kube-apiserver-ha-739930-m03" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:23.096706   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:23.293761   27912 request.go:632] Waited for 196.979038ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-739930
	I1204 20:11:23.293857   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-739930
	I1204 20:11:23.293863   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:23.293870   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:23.293877   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:23.297313   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:23.493595   27912 request.go:632] Waited for 195.358893ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:11:23.493645   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:11:23.493652   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:23.493662   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:23.493668   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:23.496860   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:23.497431   27912 pod_ready.go:93] pod "kube-controller-manager-ha-739930" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:23.497447   27912 pod_ready.go:82] duration metric: took 400.733171ms for pod "kube-controller-manager-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:23.497457   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:23.693609   27912 request.go:632] Waited for 196.087422ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-739930-m02
	I1204 20:11:23.693665   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-739930-m02
	I1204 20:11:23.693670   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:23.693677   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:23.693681   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:23.697816   27912 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 20:11:23.893073   27912 request.go:632] Waited for 194.284611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:11:23.893134   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:11:23.893157   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:23.893173   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:23.893179   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:23.896273   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:23.896905   27912 pod_ready.go:93] pod "kube-controller-manager-ha-739930-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:23.896921   27912 pod_ready.go:82] duration metric: took 399.455915ms for pod "kube-controller-manager-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:23.896931   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-739930-m03" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:24.094047   27912 request.go:632] Waited for 197.05537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-739930-m03
	I1204 20:11:24.094114   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-739930-m03
	I1204 20:11:24.094120   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:24.094128   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:24.094138   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:24.097347   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:24.293333   27912 request.go:632] Waited for 195.221509ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:24.293408   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:24.293418   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:24.293429   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:24.293439   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:24.296348   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:24.296803   27912 pod_ready.go:93] pod "kube-controller-manager-ha-739930-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:24.296819   27912 pod_ready.go:82] duration metric: took 399.882093ms for pod "kube-controller-manager-ha-739930-m03" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:24.296828   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gtw7d" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:24.493904   27912 request.go:632] Waited for 197.016726ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gtw7d
	I1204 20:11:24.493955   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gtw7d
	I1204 20:11:24.493960   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:24.493967   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:24.493971   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:24.497694   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:24.693075   27912 request.go:632] Waited for 194.571912ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:11:24.693130   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:11:24.693135   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:24.693142   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:24.693146   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:24.696302   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:24.696899   27912 pod_ready.go:93] pod "kube-proxy-gtw7d" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:24.696919   27912 pod_ready.go:82] duration metric: took 400.084608ms for pod "kube-proxy-gtw7d" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:24.696928   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-r4895" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:24.893931   27912 request.go:632] Waited for 196.931451ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r4895
	I1204 20:11:24.894022   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r4895
	I1204 20:11:24.894035   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:24.894043   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:24.894046   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:24.897046   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:25.093243   27912 request.go:632] Waited for 195.305694ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:25.093305   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:25.093310   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:25.093318   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:25.093321   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:25.096337   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:25.096835   27912 pod_ready.go:93] pod "kube-proxy-r4895" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:25.096854   27912 pod_ready.go:82] duration metric: took 399.920087ms for pod "kube-proxy-r4895" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:25.096864   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tlhfv" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:25.294085   27912 request.go:632] Waited for 197.134763ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tlhfv
	I1204 20:11:25.294155   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tlhfv
	I1204 20:11:25.294164   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:25.294174   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:25.294181   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:25.297688   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:25.493811   27912 request.go:632] Waited for 195.37479ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:11:25.493896   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:11:25.493902   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:25.493910   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:25.493914   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:25.497035   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:25.497776   27912 pod_ready.go:93] pod "kube-proxy-tlhfv" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:25.497796   27912 pod_ready.go:82] duration metric: took 400.925065ms for pod "kube-proxy-tlhfv" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:25.497810   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:25.693786   27912 request.go:632] Waited for 195.910848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-739930
	I1204 20:11:25.693855   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-739930
	I1204 20:11:25.693860   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:25.693866   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:25.693870   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:25.697283   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:25.893336   27912 request.go:632] Waited for 195.363737ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:11:25.893392   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:11:25.893398   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:25.893407   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:25.893417   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:25.896883   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:25.897527   27912 pod_ready.go:93] pod "kube-scheduler-ha-739930" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:25.897547   27912 pod_ready.go:82] duration metric: took 399.728095ms for pod "kube-scheduler-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:25.897560   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:26.093716   27912 request.go:632] Waited for 196.07568ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-739930-m02
	I1204 20:11:26.093770   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-739930-m02
	I1204 20:11:26.093775   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:26.093783   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:26.093787   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:26.097490   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:26.293677   27912 request.go:632] Waited for 195.380903ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:11:26.293724   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:11:26.293729   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:26.293736   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:26.293740   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:26.296374   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:26.297059   27912 pod_ready.go:93] pod "kube-scheduler-ha-739930-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:26.297083   27912 pod_ready.go:82] duration metric: took 399.512498ms for pod "kube-scheduler-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:26.297096   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-739930-m03" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:26.493619   27912 request.go:632] Waited for 196.449368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-739930-m03
	I1204 20:11:26.493679   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-739930-m03
	I1204 20:11:26.493687   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:26.493698   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:26.493708   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:26.496613   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:26.693570   27912 request.go:632] Waited for 196.314375ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:26.693652   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:26.693664   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:26.693674   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:26.693683   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:26.696474   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:26.697001   27912 pod_ready.go:93] pod "kube-scheduler-ha-739930-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:26.697020   27912 pod_ready.go:82] duration metric: took 399.916866ms for pod "kube-scheduler-ha-739930-m03" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:26.697032   27912 pod_ready.go:39] duration metric: took 5.199290508s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 20:11:26.697048   27912 api_server.go:52] waiting for apiserver process to appear ...
	I1204 20:11:26.697102   27912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 20:11:26.712884   27912 api_server.go:72] duration metric: took 23.535549754s to wait for apiserver process to appear ...
	I1204 20:11:26.712900   27912 api_server.go:88] waiting for apiserver healthz status ...
	I1204 20:11:26.712916   27912 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I1204 20:11:26.717076   27912 api_server.go:279] https://192.168.39.183:8443/healthz returned 200:
	ok
	I1204 20:11:26.717125   27912 round_trippers.go:463] GET https://192.168.39.183:8443/version
	I1204 20:11:26.717134   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:26.717141   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:26.717145   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:26.718054   27912 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1204 20:11:26.718141   27912 api_server.go:141] control plane version: v1.31.2
	I1204 20:11:26.718158   27912 api_server.go:131] duration metric: took 5.25178ms to wait for apiserver health ...
	I1204 20:11:26.718165   27912 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 20:11:26.893379   27912 request.go:632] Waited for 175.13636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1204 20:11:26.893453   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1204 20:11:26.893459   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:26.893466   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:26.893472   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:26.899023   27912 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 20:11:26.905500   27912 system_pods.go:59] 24 kube-system pods found
	I1204 20:11:26.905525   27912 system_pods.go:61] "coredns-7c65d6cfc9-7kbgr" [662019c2-29e8-4437-8b14-f9fbf1268d03] Running
	I1204 20:11:26.905530   27912 system_pods.go:61] "coredns-7c65d6cfc9-8kztf" [40363110-9dbd-47ae-8aec-70630543d005] Running
	I1204 20:11:26.905534   27912 system_pods.go:61] "etcd-ha-739930" [35305e9d-e464-498a-b2a7-6008dcaaf04c] Running
	I1204 20:11:26.905538   27912 system_pods.go:61] "etcd-ha-739930-m02" [b870f77d-f65a-4d00-b8da-27bf2f696d35] Running
	I1204 20:11:26.905541   27912 system_pods.go:61] "etcd-ha-739930-m03" [343495fb-dbd2-4eab-a236-40e2be521a17] Running
	I1204 20:11:26.905545   27912 system_pods.go:61] "kindnet-8wsgw" [d8bc54cd-d100-43fa-bda8-28ee9b58b947] Running
	I1204 20:11:26.905548   27912 system_pods.go:61] "kindnet-d2rvr" [7ab1c96e-13c6-40c3-affc-4a306e695a9b] Running
	I1204 20:11:26.905550   27912 system_pods.go:61] "kindnet-z6v65" [233b2af5-60f4-4f70-a63f-f7238cfbc55c] Running
	I1204 20:11:26.905554   27912 system_pods.go:61] "kube-apiserver-ha-739930" [d1943e08-b292-4551-bcc7-a14adc4ec336] Running
	I1204 20:11:26.905558   27912 system_pods.go:61] "kube-apiserver-ha-739930-m02" [b05a68fa-e419-43b6-ae14-08dd1635b446] Running
	I1204 20:11:26.905564   27912 system_pods.go:61] "kube-apiserver-ha-739930-m03" [eb40f9aa-f4a4-4222-b470-615e8f746fd2] Running
	I1204 20:11:26.905569   27912 system_pods.go:61] "kube-controller-manager-ha-739930" [3db9ec12-4c55-4a78-bef1-4f4cf8f38ae0] Running
	I1204 20:11:26.905574   27912 system_pods.go:61] "kube-controller-manager-ha-739930-m02" [01426d54-9156-4288-b9ae-c639167795b4] Running
	I1204 20:11:26.905579   27912 system_pods.go:61] "kube-controller-manager-ha-739930-m03" [57d1436a-59aa-4883-b1a0-e3f823309e4e] Running
	I1204 20:11:26.905588   27912 system_pods.go:61] "kube-proxy-gtw7d" [4481a753-5064-41a6-8f2c-d4710b8ad7bb] Running
	I1204 20:11:26.905593   27912 system_pods.go:61] "kube-proxy-r4895" [565b2768-8e4b-4659-a178-a99d86163b7c] Running
	I1204 20:11:26.905602   27912 system_pods.go:61] "kube-proxy-tlhfv" [2f01e7f6-5af2-490b-8a2c-266e1701c102] Running
	I1204 20:11:26.905607   27912 system_pods.go:61] "kube-scheduler-ha-739930" [cc1e6978-7082-494a-afce-e754a35e9b76] Running
	I1204 20:11:26.905612   27912 system_pods.go:61] "kube-scheduler-ha-739930-m02" [cd7d0a65-99e9-4377-9088-f2d7d7165982] Running
	I1204 20:11:26.905619   27912 system_pods.go:61] "kube-scheduler-ha-739930-m03" [fbc3feca-5ce1-441e-b3e9-1c47930334da] Running
	I1204 20:11:26.905622   27912 system_pods.go:61] "kube-vip-ha-739930" [524e54ee-5407-44c3-a2e4-d029f7e6a003] Running
	I1204 20:11:26.905626   27912 system_pods.go:61] "kube-vip-ha-739930-m02" [77595bf0-7e49-4ead-98b0-e1cc5b8533d7] Running
	I1204 20:11:26.905630   27912 system_pods.go:61] "kube-vip-ha-739930-m03" [596bee4d-c0d5-499e-9e8f-f4b1322d83b3] Running
	I1204 20:11:26.905634   27912 system_pods.go:61] "storage-provisioner" [84dfb457-b91f-4070-aa2a-9fbe4c6dd7c8] Running
	I1204 20:11:26.905640   27912 system_pods.go:74] duration metric: took 187.469575ms to wait for pod list to return data ...
	I1204 20:11:26.905660   27912 default_sa.go:34] waiting for default service account to be created ...
	I1204 20:11:27.093927   27912 request.go:632] Waited for 188.174644ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/default/serviceaccounts
	I1204 20:11:27.093986   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/default/serviceaccounts
	I1204 20:11:27.093991   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:27.093998   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:27.094011   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:27.097761   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:27.097902   27912 default_sa.go:45] found service account: "default"
	I1204 20:11:27.097922   27912 default_sa.go:55] duration metric: took 192.253848ms for default service account to be created ...
	I1204 20:11:27.097933   27912 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 20:11:27.293645   27912 request.go:632] Waited for 195.638628ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1204 20:11:27.293720   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1204 20:11:27.293727   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:27.293736   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:27.293742   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:27.299871   27912 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1204 20:11:27.306654   27912 system_pods.go:86] 24 kube-system pods found
	I1204 20:11:27.306676   27912 system_pods.go:89] "coredns-7c65d6cfc9-7kbgr" [662019c2-29e8-4437-8b14-f9fbf1268d03] Running
	I1204 20:11:27.306682   27912 system_pods.go:89] "coredns-7c65d6cfc9-8kztf" [40363110-9dbd-47ae-8aec-70630543d005] Running
	I1204 20:11:27.306686   27912 system_pods.go:89] "etcd-ha-739930" [35305e9d-e464-498a-b2a7-6008dcaaf04c] Running
	I1204 20:11:27.306689   27912 system_pods.go:89] "etcd-ha-739930-m02" [b870f77d-f65a-4d00-b8da-27bf2f696d35] Running
	I1204 20:11:27.306692   27912 system_pods.go:89] "etcd-ha-739930-m03" [343495fb-dbd2-4eab-a236-40e2be521a17] Running
	I1204 20:11:27.306696   27912 system_pods.go:89] "kindnet-8wsgw" [d8bc54cd-d100-43fa-bda8-28ee9b58b947] Running
	I1204 20:11:27.306699   27912 system_pods.go:89] "kindnet-d2rvr" [7ab1c96e-13c6-40c3-affc-4a306e695a9b] Running
	I1204 20:11:27.306702   27912 system_pods.go:89] "kindnet-z6v65" [233b2af5-60f4-4f70-a63f-f7238cfbc55c] Running
	I1204 20:11:27.306705   27912 system_pods.go:89] "kube-apiserver-ha-739930" [d1943e08-b292-4551-bcc7-a14adc4ec336] Running
	I1204 20:11:27.306709   27912 system_pods.go:89] "kube-apiserver-ha-739930-m02" [b05a68fa-e419-43b6-ae14-08dd1635b446] Running
	I1204 20:11:27.306714   27912 system_pods.go:89] "kube-apiserver-ha-739930-m03" [eb40f9aa-f4a4-4222-b470-615e8f746fd2] Running
	I1204 20:11:27.306719   27912 system_pods.go:89] "kube-controller-manager-ha-739930" [3db9ec12-4c55-4a78-bef1-4f4cf8f38ae0] Running
	I1204 20:11:27.306724   27912 system_pods.go:89] "kube-controller-manager-ha-739930-m02" [01426d54-9156-4288-b9ae-c639167795b4] Running
	I1204 20:11:27.306733   27912 system_pods.go:89] "kube-controller-manager-ha-739930-m03" [57d1436a-59aa-4883-b1a0-e3f823309e4e] Running
	I1204 20:11:27.306742   27912 system_pods.go:89] "kube-proxy-gtw7d" [4481a753-5064-41a6-8f2c-d4710b8ad7bb] Running
	I1204 20:11:27.306748   27912 system_pods.go:89] "kube-proxy-r4895" [565b2768-8e4b-4659-a178-a99d86163b7c] Running
	I1204 20:11:27.306756   27912 system_pods.go:89] "kube-proxy-tlhfv" [2f01e7f6-5af2-490b-8a2c-266e1701c102] Running
	I1204 20:11:27.306762   27912 system_pods.go:89] "kube-scheduler-ha-739930" [cc1e6978-7082-494a-afce-e754a35e9b76] Running
	I1204 20:11:27.306770   27912 system_pods.go:89] "kube-scheduler-ha-739930-m02" [cd7d0a65-99e9-4377-9088-f2d7d7165982] Running
	I1204 20:11:27.306774   27912 system_pods.go:89] "kube-scheduler-ha-739930-m03" [fbc3feca-5ce1-441e-b3e9-1c47930334da] Running
	I1204 20:11:27.306780   27912 system_pods.go:89] "kube-vip-ha-739930" [524e54ee-5407-44c3-a2e4-d029f7e6a003] Running
	I1204 20:11:27.306784   27912 system_pods.go:89] "kube-vip-ha-739930-m02" [77595bf0-7e49-4ead-98b0-e1cc5b8533d7] Running
	I1204 20:11:27.306787   27912 system_pods.go:89] "kube-vip-ha-739930-m03" [596bee4d-c0d5-499e-9e8f-f4b1322d83b3] Running
	I1204 20:11:27.306790   27912 system_pods.go:89] "storage-provisioner" [84dfb457-b91f-4070-aa2a-9fbe4c6dd7c8] Running
	I1204 20:11:27.306796   27912 system_pods.go:126] duration metric: took 208.857473ms to wait for k8s-apps to be running ...
	I1204 20:11:27.306805   27912 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 20:11:27.306853   27912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 20:11:27.321782   27912 system_svc.go:56] duration metric: took 14.969542ms WaitForService to wait for kubelet
	I1204 20:11:27.321804   27912 kubeadm.go:582] duration metric: took 24.144472529s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 20:11:27.321820   27912 node_conditions.go:102] verifying NodePressure condition ...
	I1204 20:11:27.493192   27912 request.go:632] Waited for 171.286703ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes
	I1204 20:11:27.493250   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes
	I1204 20:11:27.493255   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:27.493262   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:27.493266   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:27.497192   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:27.498227   27912 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 20:11:27.498244   27912 node_conditions.go:123] node cpu capacity is 2
	I1204 20:11:27.498254   27912 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 20:11:27.498259   27912 node_conditions.go:123] node cpu capacity is 2
	I1204 20:11:27.498262   27912 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 20:11:27.498265   27912 node_conditions.go:123] node cpu capacity is 2
	I1204 20:11:27.498269   27912 node_conditions.go:105] duration metric: took 176.444491ms to run NodePressure ...
	I1204 20:11:27.498283   27912 start.go:241] waiting for startup goroutines ...
	I1204 20:11:27.498303   27912 start.go:255] writing updated cluster config ...
	I1204 20:11:27.498580   27912 ssh_runner.go:195] Run: rm -f paused
	I1204 20:11:27.549391   27912 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1204 20:11:27.551427   27912 out.go:177] * Done! kubectl is now configured to use "ha-739930" cluster and "default" namespace by default
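	The log above ends with minikube reporting the apiserver healthz probe ("Checking apiserver healthz at https://192.168.39.183:8443/healthz ... returned 200: ok") and the cluster being handed over to kubectl. As a point of reference only, the following is a minimal, self-contained Go sketch of that kind of health probe; it is not minikube's actual implementation, the URL is copied from the log, and the probe skips TLS verification because the test cluster uses a self-signed CA and is addressed by IP.

	// healthz_probe.go: illustrative sketch of polling a Kubernetes apiserver
	// /healthz endpoint until it reports "ok", as recorded in the log above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// Assumption: the apiserver is reached by IP with a self-signed
				// CA, so this quick probe skips certificate verification.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		url := "https://192.168.39.183:8443/healthz" // endpoint taken from the log

		// Poll until the endpoint returns HTTP 200 with body "ok", or give up.
		deadline := time.Now().Add(1 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
					return
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("apiserver healthz did not become ready in time")
	}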
	
	
	==> CRI-O <==
	Dec 04 20:15:10 ha-739930 crio[665]: time="2024-12-04 20:15:10.856632760Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343310856608697,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1cc587a9-086f-4c2e-9b05-f600bcc389c0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 20:15:10 ha-739930 crio[665]: time="2024-12-04 20:15:10.857473195Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f2ef7d22-7f18-4a45-a295-2cffbe9c46ec name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:15:10 ha-739930 crio[665]: time="2024-12-04 20:15:10.857544925Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f2ef7d22-7f18-4a45-a295-2cffbe9c46ec name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:15:10 ha-739930 crio[665]: time="2024-12-04 20:15:10.857861989Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c09d55fbc3f943c790def9073b88f01609e4300451bae039e4cd073f0da97f61,PodSandboxId:8470389e19e5b28b50b8fccf3fc3911e02d6a5d228b7739b5d74827a2cda13ad,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733343092537450258,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gg7dr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a1f1ba1f-1720-4b97-a4a1-ab2d0c4cfaa5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92f0436c068d37f00d41a848d30e7457ee048433b86098444bdaf1dac7c4ae50,PodSandboxId:fdd28652924af40713f1cc9921837027bcf2d919bc8a45a3330e7b8e261100e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733342953924941655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7kbgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 662019c2-29e8-4437-8b14-f9fbf1268d03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab16b32e60a7287ff4948151ca59846f512d2a31828295582ecaf061d7dd0cac,PodSandboxId:a639b811aff3be3e7ee462400bb28276bcdce1f970dba591ef29cb5f8ecf55a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733342953880846280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8kztf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
40363110-9dbd-47ae-8aec-70630543d005,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1496ef67bc6f05f97f8da017d26b5ef402354fd4f5cad7354f86ed14b360b13,PodSandboxId:235aa20e54db74e6eee62b6273bd65f067e9293b34b00f86bebbdf24e92c8c12,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733342953787213731,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84dfb457-b91f-4070-aa2a-9fbe4c6dd7c8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f38276fe657c7e64c36f5e7048dd53d1f38f2a70a523fca08ac6aba6639b37e7,PodSandboxId:22f273a6fc170916ed294c18ea089fc5b6007ec66b51d45c95042ab6c43d6a4b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733342941935728144,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8wsgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8bc54cd-d100-43fa-bda8-28ee9b58b947,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8643b775b5352f9000b818ffdccfc9b8d9ce8d3bebf02d3707ef0c598107b627,PodSandboxId:30611e2a6fdccf72efc978dd3ff57b8cb4927095bb0a8cf4b67cc4353243a252,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733342938
754739932,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tlhfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f01e7f6-5af2-490b-8a2c-266e1701c102,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4a22468ef5bdbd7670b4b9d102217e2f59637e4fb99fa6b968fc2f29ad8208b,PodSandboxId:5f8113a27db247d70444c34f598adb4d8920a3f17f8c7f529ee1503205295514,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173334292981
9119605,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e85517d76879ff3f468d156333aefa2d,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:325ac1400e34aa08998a037b7bad43b257bdf9daf9a87fbce57d6eef87a7bef7,PodSandboxId:a0e82c5e83a213c20a332613e67701ddb375a586927ef5e557431138c4f0f2aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733342927393948447,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b071552f9356e83d17c476e03918fe9,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fdab5e7f0c119181d690a0296a5d0d8ba1871661cadaa54b8d022c0a1b668e3,PodSandboxId:83caff9199eb85d80e88c4f8531ac1ec39b66e92e5f3b7f7cb7e960e35c4ea4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733342927337542360,Labels:map[string]string{io.kubernetes.contain
er.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25b5d213282d4e3d0b17f56770f58750,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52571ff875ebe7e2bae93811588ab15bcc178c9e1c0334570224e1b2bd359246,PodSandboxId:91df0913316d5fe6318abd1b00af1f31ce79fcbd082873c64a4aede83b9b139c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733342927317490542,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod
.name: etcd-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af968bcb5bb689c598a55bb96c345514,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2343748d9b3c27471f4dc81bc815b3b7cfa628a41f8708ffaeec870bf0c05f4,PodSandboxId:bccd9e2c068724fdade2d27ef529f8e648d95a17f366b1c7fc771540b909a24c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733342927271139282,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b85df04725e54b66c583c1e4307b02b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f2ef7d22-7f18-4a45-a295-2cffbe9c46ec name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:15:10 ha-739930 crio[665]: time="2024-12-04 20:15:10.896861238Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=00c0e553-53a1-4278-a904-7424fc84d2ae name=/runtime.v1.RuntimeService/Version
	Dec 04 20:15:10 ha-739930 crio[665]: time="2024-12-04 20:15:10.896947241Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=00c0e553-53a1-4278-a904-7424fc84d2ae name=/runtime.v1.RuntimeService/Version
	Dec 04 20:15:10 ha-739930 crio[665]: time="2024-12-04 20:15:10.898335094Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=42a0334b-6486-4c37-a693-999cff62d22a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 20:15:10 ha-739930 crio[665]: time="2024-12-04 20:15:10.898969006Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343310898931024,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=42a0334b-6486-4c37-a693-999cff62d22a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 20:15:10 ha-739930 crio[665]: time="2024-12-04 20:15:10.899515528Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c556d363-e04c-4535-a111-313a868ee897 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:15:10 ha-739930 crio[665]: time="2024-12-04 20:15:10.899592761Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c556d363-e04c-4535-a111-313a868ee897 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:15:10 ha-739930 crio[665]: time="2024-12-04 20:15:10.899989648Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c09d55fbc3f943c790def9073b88f01609e4300451bae039e4cd073f0da97f61,PodSandboxId:8470389e19e5b28b50b8fccf3fc3911e02d6a5d228b7739b5d74827a2cda13ad,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733343092537450258,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gg7dr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a1f1ba1f-1720-4b97-a4a1-ab2d0c4cfaa5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92f0436c068d37f00d41a848d30e7457ee048433b86098444bdaf1dac7c4ae50,PodSandboxId:fdd28652924af40713f1cc9921837027bcf2d919bc8a45a3330e7b8e261100e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733342953924941655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7kbgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 662019c2-29e8-4437-8b14-f9fbf1268d03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab16b32e60a7287ff4948151ca59846f512d2a31828295582ecaf061d7dd0cac,PodSandboxId:a639b811aff3be3e7ee462400bb28276bcdce1f970dba591ef29cb5f8ecf55a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733342953880846280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8kztf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
40363110-9dbd-47ae-8aec-70630543d005,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1496ef67bc6f05f97f8da017d26b5ef402354fd4f5cad7354f86ed14b360b13,PodSandboxId:235aa20e54db74e6eee62b6273bd65f067e9293b34b00f86bebbdf24e92c8c12,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733342953787213731,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84dfb457-b91f-4070-aa2a-9fbe4c6dd7c8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f38276fe657c7e64c36f5e7048dd53d1f38f2a70a523fca08ac6aba6639b37e7,PodSandboxId:22f273a6fc170916ed294c18ea089fc5b6007ec66b51d45c95042ab6c43d6a4b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733342941935728144,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8wsgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8bc54cd-d100-43fa-bda8-28ee9b58b947,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8643b775b5352f9000b818ffdccfc9b8d9ce8d3bebf02d3707ef0c598107b627,PodSandboxId:30611e2a6fdccf72efc978dd3ff57b8cb4927095bb0a8cf4b67cc4353243a252,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733342938
754739932,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tlhfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f01e7f6-5af2-490b-8a2c-266e1701c102,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4a22468ef5bdbd7670b4b9d102217e2f59637e4fb99fa6b968fc2f29ad8208b,PodSandboxId:5f8113a27db247d70444c34f598adb4d8920a3f17f8c7f529ee1503205295514,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173334292981
9119605,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e85517d76879ff3f468d156333aefa2d,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:325ac1400e34aa08998a037b7bad43b257bdf9daf9a87fbce57d6eef87a7bef7,PodSandboxId:a0e82c5e83a213c20a332613e67701ddb375a586927ef5e557431138c4f0f2aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733342927393948447,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b071552f9356e83d17c476e03918fe9,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fdab5e7f0c119181d690a0296a5d0d8ba1871661cadaa54b8d022c0a1b668e3,PodSandboxId:83caff9199eb85d80e88c4f8531ac1ec39b66e92e5f3b7f7cb7e960e35c4ea4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733342927337542360,Labels:map[string]string{io.kubernetes.contain
er.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25b5d213282d4e3d0b17f56770f58750,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52571ff875ebe7e2bae93811588ab15bcc178c9e1c0334570224e1b2bd359246,PodSandboxId:91df0913316d5fe6318abd1b00af1f31ce79fcbd082873c64a4aede83b9b139c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733342927317490542,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod
.name: etcd-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af968bcb5bb689c598a55bb96c345514,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2343748d9b3c27471f4dc81bc815b3b7cfa628a41f8708ffaeec870bf0c05f4,PodSandboxId:bccd9e2c068724fdade2d27ef529f8e648d95a17f366b1c7fc771540b909a24c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733342927271139282,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b85df04725e54b66c583c1e4307b02b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c556d363-e04c-4535-a111-313a868ee897 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:15:10 ha-739930 crio[665]: time="2024-12-04 20:15:10.938029159Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0d10bc42-bad3-4816-a928-e726afed9a01 name=/runtime.v1.RuntimeService/Version
	Dec 04 20:15:10 ha-739930 crio[665]: time="2024-12-04 20:15:10.938118657Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0d10bc42-bad3-4816-a928-e726afed9a01 name=/runtime.v1.RuntimeService/Version
	Dec 04 20:15:10 ha-739930 crio[665]: time="2024-12-04 20:15:10.939172677Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d5889b89-16d1-4476-90c6-86ecb31cd4f7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 20:15:10 ha-739930 crio[665]: time="2024-12-04 20:15:10.939613002Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343310939589519,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d5889b89-16d1-4476-90c6-86ecb31cd4f7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 20:15:10 ha-739930 crio[665]: time="2024-12-04 20:15:10.940331063Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2cacc6a4-5ddb-432c-b85d-dd7de2603ec7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:15:10 ha-739930 crio[665]: time="2024-12-04 20:15:10.940403092Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2cacc6a4-5ddb-432c-b85d-dd7de2603ec7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:15:10 ha-739930 crio[665]: time="2024-12-04 20:15:10.940653466Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c09d55fbc3f943c790def9073b88f01609e4300451bae039e4cd073f0da97f61,PodSandboxId:8470389e19e5b28b50b8fccf3fc3911e02d6a5d228b7739b5d74827a2cda13ad,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733343092537450258,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gg7dr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a1f1ba1f-1720-4b97-a4a1-ab2d0c4cfaa5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92f0436c068d37f00d41a848d30e7457ee048433b86098444bdaf1dac7c4ae50,PodSandboxId:fdd28652924af40713f1cc9921837027bcf2d919bc8a45a3330e7b8e261100e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733342953924941655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7kbgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 662019c2-29e8-4437-8b14-f9fbf1268d03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab16b32e60a7287ff4948151ca59846f512d2a31828295582ecaf061d7dd0cac,PodSandboxId:a639b811aff3be3e7ee462400bb28276bcdce1f970dba591ef29cb5f8ecf55a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733342953880846280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8kztf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
40363110-9dbd-47ae-8aec-70630543d005,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1496ef67bc6f05f97f8da017d26b5ef402354fd4f5cad7354f86ed14b360b13,PodSandboxId:235aa20e54db74e6eee62b6273bd65f067e9293b34b00f86bebbdf24e92c8c12,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733342953787213731,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84dfb457-b91f-4070-aa2a-9fbe4c6dd7c8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f38276fe657c7e64c36f5e7048dd53d1f38f2a70a523fca08ac6aba6639b37e7,PodSandboxId:22f273a6fc170916ed294c18ea089fc5b6007ec66b51d45c95042ab6c43d6a4b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733342941935728144,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8wsgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8bc54cd-d100-43fa-bda8-28ee9b58b947,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8643b775b5352f9000b818ffdccfc9b8d9ce8d3bebf02d3707ef0c598107b627,PodSandboxId:30611e2a6fdccf72efc978dd3ff57b8cb4927095bb0a8cf4b67cc4353243a252,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733342938
754739932,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tlhfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f01e7f6-5af2-490b-8a2c-266e1701c102,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4a22468ef5bdbd7670b4b9d102217e2f59637e4fb99fa6b968fc2f29ad8208b,PodSandboxId:5f8113a27db247d70444c34f598adb4d8920a3f17f8c7f529ee1503205295514,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173334292981
9119605,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e85517d76879ff3f468d156333aefa2d,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:325ac1400e34aa08998a037b7bad43b257bdf9daf9a87fbce57d6eef87a7bef7,PodSandboxId:a0e82c5e83a213c20a332613e67701ddb375a586927ef5e557431138c4f0f2aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733342927393948447,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b071552f9356e83d17c476e03918fe9,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fdab5e7f0c119181d690a0296a5d0d8ba1871661cadaa54b8d022c0a1b668e3,PodSandboxId:83caff9199eb85d80e88c4f8531ac1ec39b66e92e5f3b7f7cb7e960e35c4ea4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733342927337542360,Labels:map[string]string{io.kubernetes.contain
er.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25b5d213282d4e3d0b17f56770f58750,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52571ff875ebe7e2bae93811588ab15bcc178c9e1c0334570224e1b2bd359246,PodSandboxId:91df0913316d5fe6318abd1b00af1f31ce79fcbd082873c64a4aede83b9b139c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733342927317490542,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod
.name: etcd-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af968bcb5bb689c598a55bb96c345514,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2343748d9b3c27471f4dc81bc815b3b7cfa628a41f8708ffaeec870bf0c05f4,PodSandboxId:bccd9e2c068724fdade2d27ef529f8e648d95a17f366b1c7fc771540b909a24c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733342927271139282,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b85df04725e54b66c583c1e4307b02b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2cacc6a4-5ddb-432c-b85d-dd7de2603ec7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:15:10 ha-739930 crio[665]: time="2024-12-04 20:15:10.976578706Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1dd60b2c-b8ce-4af9-a717-e2184c10c8a9 name=/runtime.v1.RuntimeService/Version
	Dec 04 20:15:10 ha-739930 crio[665]: time="2024-12-04 20:15:10.976654923Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1dd60b2c-b8ce-4af9-a717-e2184c10c8a9 name=/runtime.v1.RuntimeService/Version
	Dec 04 20:15:10 ha-739930 crio[665]: time="2024-12-04 20:15:10.977631460Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=45fe2d75-414f-4464-ad56-bdd4a6c1b809 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 20:15:10 ha-739930 crio[665]: time="2024-12-04 20:15:10.978173211Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343310978148720,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=45fe2d75-414f-4464-ad56-bdd4a6c1b809 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 20:15:10 ha-739930 crio[665]: time="2024-12-04 20:15:10.978778293Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e1ea2d14-b27e-4143-a05a-a93c0c3b8330 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:15:10 ha-739930 crio[665]: time="2024-12-04 20:15:10.978843984Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e1ea2d14-b27e-4143-a05a-a93c0c3b8330 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:15:10 ha-739930 crio[665]: time="2024-12-04 20:15:10.979095918Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c09d55fbc3f943c790def9073b88f01609e4300451bae039e4cd073f0da97f61,PodSandboxId:8470389e19e5b28b50b8fccf3fc3911e02d6a5d228b7739b5d74827a2cda13ad,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733343092537450258,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gg7dr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a1f1ba1f-1720-4b97-a4a1-ab2d0c4cfaa5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92f0436c068d37f00d41a848d30e7457ee048433b86098444bdaf1dac7c4ae50,PodSandboxId:fdd28652924af40713f1cc9921837027bcf2d919bc8a45a3330e7b8e261100e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733342953924941655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7kbgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 662019c2-29e8-4437-8b14-f9fbf1268d03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab16b32e60a7287ff4948151ca59846f512d2a31828295582ecaf061d7dd0cac,PodSandboxId:a639b811aff3be3e7ee462400bb28276bcdce1f970dba591ef29cb5f8ecf55a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733342953880846280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8kztf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
40363110-9dbd-47ae-8aec-70630543d005,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1496ef67bc6f05f97f8da017d26b5ef402354fd4f5cad7354f86ed14b360b13,PodSandboxId:235aa20e54db74e6eee62b6273bd65f067e9293b34b00f86bebbdf24e92c8c12,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733342953787213731,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84dfb457-b91f-4070-aa2a-9fbe4c6dd7c8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f38276fe657c7e64c36f5e7048dd53d1f38f2a70a523fca08ac6aba6639b37e7,PodSandboxId:22f273a6fc170916ed294c18ea089fc5b6007ec66b51d45c95042ab6c43d6a4b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733342941935728144,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8wsgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8bc54cd-d100-43fa-bda8-28ee9b58b947,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8643b775b5352f9000b818ffdccfc9b8d9ce8d3bebf02d3707ef0c598107b627,PodSandboxId:30611e2a6fdccf72efc978dd3ff57b8cb4927095bb0a8cf4b67cc4353243a252,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733342938
754739932,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tlhfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f01e7f6-5af2-490b-8a2c-266e1701c102,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4a22468ef5bdbd7670b4b9d102217e2f59637e4fb99fa6b968fc2f29ad8208b,PodSandboxId:5f8113a27db247d70444c34f598adb4d8920a3f17f8c7f529ee1503205295514,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173334292981
9119605,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e85517d76879ff3f468d156333aefa2d,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:325ac1400e34aa08998a037b7bad43b257bdf9daf9a87fbce57d6eef87a7bef7,PodSandboxId:a0e82c5e83a213c20a332613e67701ddb375a586927ef5e557431138c4f0f2aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733342927393948447,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b071552f9356e83d17c476e03918fe9,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fdab5e7f0c119181d690a0296a5d0d8ba1871661cadaa54b8d022c0a1b668e3,PodSandboxId:83caff9199eb85d80e88c4f8531ac1ec39b66e92e5f3b7f7cb7e960e35c4ea4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733342927337542360,Labels:map[string]string{io.kubernetes.contain
er.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25b5d213282d4e3d0b17f56770f58750,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52571ff875ebe7e2bae93811588ab15bcc178c9e1c0334570224e1b2bd359246,PodSandboxId:91df0913316d5fe6318abd1b00af1f31ce79fcbd082873c64a4aede83b9b139c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733342927317490542,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod
.name: etcd-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af968bcb5bb689c598a55bb96c345514,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2343748d9b3c27471f4dc81bc815b3b7cfa628a41f8708ffaeec870bf0c05f4,PodSandboxId:bccd9e2c068724fdade2d27ef529f8e648d95a17f366b1c7fc771540b909a24c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733342927271139282,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b85df04725e54b66c583c1e4307b02b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e1ea2d14-b27e-4143-a05a-a93c0c3b8330 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c09d55fbc3f94       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   8470389e19e5b       busybox-7dff88458-gg7dr
	92f0436c068d3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   fdd28652924af       coredns-7c65d6cfc9-7kbgr
	ab16b32e60a72       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   a639b811aff3b       coredns-7c65d6cfc9-8kztf
	a1496ef67bc6f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   235aa20e54db7       storage-provisioner
	f38276fe657c7       docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16    6 minutes ago       Running             kindnet-cni               0                   22f273a6fc170       kindnet-8wsgw
	8643b775b5352       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   30611e2a6fdcc       kube-proxy-tlhfv
	b4a22468ef5bd       ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e     6 minutes ago       Running             kube-vip                  0                   5f8113a27db24       kube-vip-ha-739930
	325ac1400e34a       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   a0e82c5e83a21       kube-scheduler-ha-739930
	1fdab5e7f0c11       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   83caff9199eb8       kube-apiserver-ha-739930
	52571ff875ebe       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   91df0913316d5       etcd-ha-739930
	c2343748d9b3c       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   bccd9e2c06872       kube-controller-manager-ha-739930
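
Note: the listing above is the CRI-O view from the primary node at capture time; every control-plane component and both CoreDNS replicas show Running with zero restarts. As a rough sketch only (assuming the minikube profile is named ha-739930, matching the node names in this report), an equivalent listing can usually be pulled straight from the node with crictl:

    minikube -p ha-739930 ssh -- sudo crictl ps -a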
	
	
	==> coredns [92f0436c068d37f00d41a848d30e7457ee048433b86098444bdaf1dac7c4ae50] <==
	[INFO] 10.244.1.2:60420 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.0000998s
	[INFO] 10.244.2.2:43602 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198643s
	[INFO] 10.244.2.2:55688 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004203463s
	[INFO] 10.244.2.2:58147 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00017975s
	[INFO] 10.244.0.4:34390 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142716s
	[INFO] 10.244.0.4:33345 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000126491s
	[INFO] 10.244.1.2:52771 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001534902s
	[INFO] 10.244.1.2:50377 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155393s
	[INFO] 10.244.1.2:57617 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000204758s
	[INFO] 10.244.1.2:33315 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000087548s
	[INFO] 10.244.1.2:43721 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000138913s
	[INFO] 10.244.2.2:36167 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128945s
	[INFO] 10.244.2.2:39846 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000141449s
	[INFO] 10.244.0.4:49972 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079931s
	[INFO] 10.244.0.4:54249 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000163883s
	[INFO] 10.244.1.2:50096 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116516s
	[INFO] 10.244.1.2:45073 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000132387s
	[INFO] 10.244.2.2:49399 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000153554s
	[INFO] 10.244.2.2:59645 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000182375s
	[INFO] 10.244.0.4:58720 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128913s
	[INFO] 10.244.0.4:43247 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00014397s
	[INFO] 10.244.0.4:41555 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000088414s
	[INFO] 10.244.0.4:43722 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000065939s
	[INFO] 10.244.1.2:45770 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000102411s
	[INFO] 10.244.1.2:50474 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000112012s
	
	
	==> coredns [ab16b32e60a7287ff4948151ca59846f512d2a31828295582ecaf061d7dd0cac] <==
	[INFO] 10.244.1.2:40314 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002016375s
	[INFO] 10.244.2.2:49280 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000323723s
	[INFO] 10.244.2.2:39711 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000206446s
	[INFO] 10.244.2.2:58438 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003929293s
	[INFO] 10.244.2.2:51399 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000159908s
	[INFO] 10.244.2.2:39775 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000142713s
	[INFO] 10.244.0.4:59240 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001795102s
	[INFO] 10.244.0.4:58038 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000108734s
	[INFO] 10.244.0.4:54479 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000222678s
	[INFO] 10.244.0.4:48445 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001109511s
	[INFO] 10.244.0.4:56707 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000120069s
	[INFO] 10.244.0.4:44194 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082627s
	[INFO] 10.244.1.2:36003 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139108s
	[INFO] 10.244.1.2:48175 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001090843s
	[INFO] 10.244.1.2:54736 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072028s
	[INFO] 10.244.2.2:41244 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110768s
	[INFO] 10.244.2.2:58717 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088169s
	[INFO] 10.244.0.4:52576 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000161976s
	[INFO] 10.244.0.4:50935 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010896s
	[INFO] 10.244.1.2:40433 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160052s
	[INFO] 10.244.1.2:48574 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094093s
	[INFO] 10.244.2.2:40890 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131379s
	[INFO] 10.244.2.2:49685 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000289898s
	[INFO] 10.244.1.2:59160 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000148396s
	[INFO] 10.244.1.2:49691 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000140675s
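
Note: both CoreDNS replicas are answering queries from pods on all three pod CIDRs (10.244.0.4, 10.244.1.2, 10.244.2.2), so in-cluster DNS was serving when these logs were collected. A lookup like the ones logged above can typically be reproduced by hand, assuming the kubectl context is named after the profile and the busybox test image ships nslookup:

    kubectl --context ha-739930 exec busybox-7dff88458-gg7dr -- nslookup kubernetes.default.svc.cluster.local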
	
	
	==> describe nodes <==
	Name:               ha-739930
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-739930
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59
	                    minikube.k8s.io/name=ha-739930
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_04T20_08_54_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 20:08:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-739930
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 20:15:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 20:11:56 +0000   Wed, 04 Dec 2024 20:08:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 20:11:56 +0000   Wed, 04 Dec 2024 20:08:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 20:11:56 +0000   Wed, 04 Dec 2024 20:08:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 20:11:56 +0000   Wed, 04 Dec 2024 20:09:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.183
	  Hostname:    ha-739930
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4a862467bfb34c3ba59a1a6944c8e8ad
	  System UUID:                4a862467-bfb3-4c3b-a59a-1a6944c8e8ad
	  Boot ID:                    88a12a5a-b072-479a-8944-b6767cbdf4f7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gg7dr              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 coredns-7c65d6cfc9-7kbgr             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m13s
	  kube-system                 coredns-7c65d6cfc9-8kztf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m13s
	  kube-system                 etcd-ha-739930                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m18s
	  kube-system                 kindnet-8wsgw                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m13s
	  kube-system                 kube-apiserver-ha-739930             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-controller-manager-ha-739930    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-proxy-tlhfv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-scheduler-ha-739930             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-vip-ha-739930                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m12s  kube-proxy       
	  Normal  Starting                 6m18s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m18s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m18s  kubelet          Node ha-739930 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m18s  kubelet          Node ha-739930 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m18s  kubelet          Node ha-739930 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m14s  node-controller  Node ha-739930 event: Registered Node ha-739930 in Controller
	  Normal  NodeReady                5m58s  kubelet          Node ha-739930 status is now: NodeReady
	  Normal  RegisteredNode           5m18s  node-controller  Node ha-739930 event: Registered Node ha-739930 in Controller
	  Normal  RegisteredNode           4m3s   node-controller  Node ha-739930 event: Registered Node ha-739930 in Controller
	
	
	Name:               ha-739930-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-739930-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59
	                    minikube.k8s.io/name=ha-739930
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_04T20_09_48_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 20:09:46 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-739930-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 20:12:39 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 04 Dec 2024 20:11:48 +0000   Wed, 04 Dec 2024 20:13:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 04 Dec 2024 20:11:48 +0000   Wed, 04 Dec 2024 20:13:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 04 Dec 2024 20:11:48 +0000   Wed, 04 Dec 2024 20:13:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 04 Dec 2024 20:11:48 +0000   Wed, 04 Dec 2024 20:13:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.216
	  Hostname:    ha-739930-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 309500ff1508404f8337a542897e4a63
	  System UUID:                309500ff-1508-404f-8337-a542897e4a63
	  Boot ID:                    abc62bfe-1148-4265-a781-5ad8762ade09
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-kx56q                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 etcd-ha-739930-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m23s
	  kube-system                 kindnet-z6v65                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m25s
	  kube-system                 kube-apiserver-ha-739930-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-controller-manager-ha-739930-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-proxy-gtw7d                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-scheduler-ha-739930-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kube-vip-ha-739930-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m20s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m25s (x8 over 5m25s)  kubelet          Node ha-739930-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m25s (x8 over 5m25s)  kubelet          Node ha-739930-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m25s (x7 over 5m25s)  kubelet          Node ha-739930-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m24s                  node-controller  Node ha-739930-m02 event: Registered Node ha-739930-m02 in Controller
	  Normal  RegisteredNode           5m18s                  node-controller  Node ha-739930-m02 event: Registered Node ha-739930-m02 in Controller
	  Normal  RegisteredNode           4m3s                   node-controller  Node ha-739930-m02 event: Registered Node ha-739930-m02 in Controller
	  Normal  NodeNotReady             109s                   node-controller  Node ha-739930-m02 status is now: NodeNotReady
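
Note: the Unknown conditions, the node.kubernetes.io/unreachable taints, and the final NodeNotReady event show that the kubelet on ha-739930-m02 stopped posting status at 20:13:22, i.e. the secondary control-plane node was down while the other nodes stayed Ready. Assuming the kubectl context matches the profile name, the same picture can be confirmed outside the captured logs with:

    kubectl --context ha-739930 get nodes -o wide
    kubectl --context ha-739930 describe node ha-739930-m02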
	
	
	Name:               ha-739930-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-739930-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59
	                    minikube.k8s.io/name=ha-739930
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_04T20_11_02_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 20:11:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-739930-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 20:15:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 20:12:01 +0000   Wed, 04 Dec 2024 20:11:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 20:12:01 +0000   Wed, 04 Dec 2024 20:11:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 20:12:01 +0000   Wed, 04 Dec 2024 20:11:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 20:12:01 +0000   Wed, 04 Dec 2024 20:11:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.176
	  Hostname:    ha-739930-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7eddf849e101457c8f603f9f7bb068e3
	  System UUID:                7eddf849-e101-457c-8f60-3f9f7bb068e3
	  Boot ID:                    94b82cc0-8208-45bb-85df-9fba3000dbef
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-9pz7p                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 etcd-ha-739930-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m9s
	  kube-system                 kindnet-d2rvr                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m11s
	  kube-system                 kube-apiserver-ha-739930-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-controller-manager-ha-739930-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-proxy-r4895                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-scheduler-ha-739930-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-vip-ha-739930-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m6s                   kube-proxy       
	  Normal  NodeAllocatableEnforced  4m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m11s (x8 over 4m12s)  kubelet          Node ha-739930-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m11s (x8 over 4m12s)  kubelet          Node ha-739930-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m11s (x7 over 4m12s)  kubelet          Node ha-739930-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-739930-m03 event: Registered Node ha-739930-m03 in Controller
	  Normal  RegisteredNode           4m7s                   node-controller  Node ha-739930-m03 event: Registered Node ha-739930-m03 in Controller
	  Normal  RegisteredNode           4m3s                   node-controller  Node ha-739930-m03 event: Registered Node ha-739930-m03 in Controller
	
	
	Name:               ha-739930-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-739930-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59
	                    minikube.k8s.io/name=ha-739930
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_04T20_12_05_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 20:12:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-739930-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 20:15:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 20:12:35 +0000   Wed, 04 Dec 2024 20:12:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 20:12:35 +0000   Wed, 04 Dec 2024 20:12:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 20:12:35 +0000   Wed, 04 Dec 2024 20:12:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 20:12:35 +0000   Wed, 04 Dec 2024 20:12:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.230
	  Hostname:    ha-739930-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 caea6c34853a432f8606c2c81d5d7e80
	  System UUID:                caea6c34-853a-432f-8606-c2c81d5d7e80
	  Boot ID:                    64cbf16d-0924-4d4e-bb2e-e3fb57ad6cf8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2l856       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m6s
	  kube-system                 kube-proxy-2dnzj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m1s                   kube-proxy       
	  Normal  NodeAllocatableEnforced  3m7s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m6s (x2 over 3m7s)    kubelet          Node ha-739930-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m6s (x2 over 3m7s)    kubelet          Node ha-739930-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m6s (x2 over 3m7s)    kubelet          Node ha-739930-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m4s                   node-controller  Node ha-739930-m04 event: Registered Node ha-739930-m04 in Controller
	  Normal  RegisteredNode           3m3s                   node-controller  Node ha-739930-m04 event: Registered Node ha-739930-m04 in Controller
	  Normal  RegisteredNode           3m2s                   node-controller  Node ha-739930-m04 event: Registered Node ha-739930-m04 in Controller
	  Normal  NodeReady                2m46s (x2 over 2m46s)  kubelet          Node ha-739930-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec 4 20:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053379] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038376] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.818831] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.961468] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +4.569504] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000011] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.583210] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.060308] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060487] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.188680] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.114168] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.247975] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +3.760825] systemd-fstab-generator[750]: Ignoring "noauto" option for root device
	[  +4.102978] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.066053] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.507773] systemd-fstab-generator[1298]: Ignoring "noauto" option for root device
	[  +0.085425] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.435723] kauditd_printk_skb: 21 callbacks suppressed
	[Dec 4 20:09] kauditd_printk_skb: 38 callbacks suppressed
	[ +38.420810] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [52571ff875ebe7e2bae93811588ab15bcc178c9e1c0334570224e1b2bd359246] <==
	{"level":"warn","ts":"2024-12-04T20:15:11.254049Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:11.260816Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:11.264550Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:11.264845Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:11.267100Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:11.269094Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:11.276537Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:11.282634Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:11.289129Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:11.292717Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:11.295558Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:11.305144Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:11.308798Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:11.326028Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:11.342034Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:11.348016Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:11.351568Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6dfff839a0574192","rtt":"8.294793ms","error":"dial tcp 192.168.39.216:2380: i/o timeout"}
	{"level":"warn","ts":"2024-12-04T20:15:11.351627Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6dfff839a0574192","rtt":"794.132µs","error":"dial tcp 192.168.39.216:2380: i/o timeout"}
	{"level":"warn","ts":"2024-12-04T20:15:11.359039Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:11.365990Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:11.386305Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:11.400997Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:11.408812Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:11.424225Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:11.457376Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 20:15:11 up 6 min,  0 users,  load average: 0.39, 0.27, 0.12
	Linux ha-739930 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [f38276fe657c7e64c36f5e7048dd53d1f38f2a70a523fca08ac6aba6639b37e7] <==
	I1204 20:14:32.871845       1 main.go:324] Node ha-739930-m04 has CIDR [10.244.3.0/24] 
	I1204 20:14:42.875921       1 main.go:297] Handling node with IPs: map[192.168.39.183:{}]
	I1204 20:14:42.876108       1 main.go:301] handling current node
	I1204 20:14:42.876141       1 main.go:297] Handling node with IPs: map[192.168.39.216:{}]
	I1204 20:14:42.876164       1 main.go:324] Node ha-739930-m02 has CIDR [10.244.1.0/24] 
	I1204 20:14:42.876442       1 main.go:297] Handling node with IPs: map[192.168.39.176:{}]
	I1204 20:14:42.876476       1 main.go:324] Node ha-739930-m03 has CIDR [10.244.2.0/24] 
	I1204 20:14:42.876608       1 main.go:297] Handling node with IPs: map[192.168.39.230:{}]
	I1204 20:14:42.876632       1 main.go:324] Node ha-739930-m04 has CIDR [10.244.3.0/24] 
	I1204 20:14:52.876836       1 main.go:297] Handling node with IPs: map[192.168.39.183:{}]
	I1204 20:14:52.876889       1 main.go:301] handling current node
	I1204 20:14:52.876924       1 main.go:297] Handling node with IPs: map[192.168.39.216:{}]
	I1204 20:14:52.876933       1 main.go:324] Node ha-739930-m02 has CIDR [10.244.1.0/24] 
	I1204 20:14:52.877263       1 main.go:297] Handling node with IPs: map[192.168.39.176:{}]
	I1204 20:14:52.877287       1 main.go:324] Node ha-739930-m03 has CIDR [10.244.2.0/24] 
	I1204 20:14:52.877494       1 main.go:297] Handling node with IPs: map[192.168.39.230:{}]
	I1204 20:14:52.877511       1 main.go:324] Node ha-739930-m04 has CIDR [10.244.3.0/24] 
	I1204 20:15:02.869044       1 main.go:297] Handling node with IPs: map[192.168.39.183:{}]
	I1204 20:15:02.869284       1 main.go:301] handling current node
	I1204 20:15:02.869336       1 main.go:297] Handling node with IPs: map[192.168.39.216:{}]
	I1204 20:15:02.869343       1 main.go:324] Node ha-739930-m02 has CIDR [10.244.1.0/24] 
	I1204 20:15:02.869633       1 main.go:297] Handling node with IPs: map[192.168.39.176:{}]
	I1204 20:15:02.869654       1 main.go:324] Node ha-739930-m03 has CIDR [10.244.2.0/24] 
	I1204 20:15:02.869898       1 main.go:297] Handling node with IPs: map[192.168.39.230:{}]
	I1204 20:15:02.869919       1 main.go:324] Node ha-739930-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [1fdab5e7f0c119181d690a0296a5d0d8ba1871661cadaa54b8d022c0a1b668e3] <==
	I1204 20:08:52.109573       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1204 20:08:52.115869       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.183]
	I1204 20:08:52.116893       1 controller.go:615] quota admission added evaluator for: endpoints
	I1204 20:08:52.120949       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1204 20:08:52.319935       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1204 20:08:53.401361       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1204 20:08:53.418287       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1204 20:08:53.427159       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1204 20:08:57.975080       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1204 20:08:58.071170       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1204 20:11:33.595040       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51898: use of closed network connection
	E1204 20:11:33.787246       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51926: use of closed network connection
	E1204 20:11:33.961220       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51944: use of closed network connection
	E1204 20:11:34.139353       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51958: use of closed network connection
	E1204 20:11:34.492487       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51978: use of closed network connection
	E1204 20:11:34.660669       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51994: use of closed network connection
	E1204 20:11:34.825641       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52014: use of closed network connection
	E1204 20:11:35.000850       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52034: use of closed network connection
	E1204 20:11:35.295050       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52074: use of closed network connection
	E1204 20:11:35.467188       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52090: use of closed network connection
	E1204 20:11:35.632176       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52096: use of closed network connection
	E1204 20:11:35.802340       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52124: use of closed network connection
	E1204 20:11:35.976054       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52130: use of closed network connection
	E1204 20:11:36.156331       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52148: use of closed network connection
	W1204 20:13:02.138009       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.176 192.168.39.183]
	
	
	==> kube-controller-manager [c2343748d9b3c27471f4dc81bc815b3b7cfa628a41f8708ffaeec870bf0c05f4] <==
	I1204 20:12:05.098063       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-739930-m04" podCIDRs=["10.244.3.0/24"]
	I1204 20:12:05.098353       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:05.099501       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:05.129202       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:05.212844       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:05.605704       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:07.219432       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-739930-m04"
	I1204 20:12:07.250173       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:08.816441       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:09.034862       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:09.114294       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:09.193601       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:15.131792       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:25.187809       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-739930-m04"
	I1204 20:12:25.187897       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:25.200602       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:27.234376       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:35.291257       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:13:22.261174       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-739930-m04"
	I1204 20:13:22.262013       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m02"
	I1204 20:13:22.294239       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m02"
	I1204 20:13:22.349815       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="26.422518ms"
	I1204 20:13:22.353121       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="53.184µs"
	I1204 20:13:23.918547       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m02"
	I1204 20:13:27.468391       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m02"
	
	
	==> kube-proxy [8643b775b5352f9000b818ffdccfc9b8d9ce8d3bebf02d3707ef0c598107b627] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1204 20:08:59.055359       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1204 20:08:59.074919       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.183"]
	E1204 20:08:59.075054       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1204 20:08:59.106971       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1204 20:08:59.107053       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1204 20:08:59.107091       1 server_linux.go:169] "Using iptables Proxier"
	I1204 20:08:59.110117       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1204 20:08:59.110853       1 server.go:483] "Version info" version="v1.31.2"
	I1204 20:08:59.110911       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 20:08:59.113929       1 config.go:328] "Starting node config controller"
	I1204 20:08:59.113988       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1204 20:08:59.114597       1 config.go:199] "Starting service config controller"
	I1204 20:08:59.114621       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1204 20:08:59.114931       1 config.go:105] "Starting endpoint slice config controller"
	I1204 20:08:59.114959       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1204 20:08:59.214563       1 shared_informer.go:320] Caches are synced for node config
	I1204 20:08:59.215004       1 shared_informer.go:320] Caches are synced for service config
	I1204 20:08:59.216196       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [325ac1400e34aa08998a037b7bad43b257bdf9daf9a87fbce57d6eef87a7bef7] <==
	E1204 20:08:51.687075       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 20:08:51.698835       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1204 20:08:51.698950       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1204 20:08:51.756911       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1204 20:08:51.757061       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 20:08:51.761020       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1204 20:08:51.761159       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1204 20:08:54.377656       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1204 20:11:28.468555       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="e79c51d4-80e5-490b-906e-e376195d820e" pod="default/busybox-7dff88458-4zmkp" assumedNode="ha-739930-m02" currentNode="ha-739930-m03"
	E1204 20:11:28.510519       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-4zmkp\": pod busybox-7dff88458-4zmkp is already assigned to node \"ha-739930-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-4zmkp" node="ha-739930-m03"
	E1204 20:11:28.510990       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e79c51d4-80e5-490b-906e-e376195d820e(default/busybox-7dff88458-4zmkp) was assumed on ha-739930-m03 but assigned to ha-739930-m02" pod="default/busybox-7dff88458-4zmkp"
	E1204 20:11:28.511176       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-4zmkp\": pod busybox-7dff88458-4zmkp is already assigned to node \"ha-739930-m02\"" pod="default/busybox-7dff88458-4zmkp"
	I1204 20:11:28.511316       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-4zmkp" node="ha-739930-m02"
	I1204 20:11:28.544933       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="5411c4b8-6cb8-493d-8ce1-adcf557c68bc" pod="default/busybox-7dff88458-b94b5" assumedNode="ha-739930" currentNode="ha-739930-m03"
	E1204 20:11:28.557489       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-b94b5\": pod busybox-7dff88458-b94b5 is already assigned to node \"ha-739930\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-b94b5" node="ha-739930-m03"
	E1204 20:11:28.557560       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 5411c4b8-6cb8-493d-8ce1-adcf557c68bc(default/busybox-7dff88458-b94b5) was assumed on ha-739930-m03 but assigned to ha-739930" pod="default/busybox-7dff88458-b94b5"
	E1204 20:11:28.557587       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-b94b5\": pod busybox-7dff88458-b94b5 is already assigned to node \"ha-739930\"" pod="default/busybox-7dff88458-b94b5"
	I1204 20:11:28.557614       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-b94b5" node="ha-739930"
	E1204 20:11:30.014314       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-gg7dr\": pod busybox-7dff88458-gg7dr is already assigned to node \"ha-739930\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-gg7dr" node="ha-739930"
	E1204 20:11:30.014481       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a1f1ba1f-1720-4b97-a4a1-ab2d0c4cfaa5(default/busybox-7dff88458-gg7dr) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-gg7dr"
	E1204 20:11:30.015337       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-gg7dr\": pod busybox-7dff88458-gg7dr is already assigned to node \"ha-739930\"" pod="default/busybox-7dff88458-gg7dr"
	I1204 20:11:30.015401       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-gg7dr" node="ha-739930"
	E1204 20:12:05.139969       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-kswc6\": pod kindnet-kswc6 is already assigned to node \"ha-739930-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-kswc6" node="ha-739930-m04"
	E1204 20:12:05.140096       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-kswc6\": pod kindnet-kswc6 is already assigned to node \"ha-739930-m04\"" pod="kube-system/kindnet-kswc6"
	I1204 20:12:05.140125       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-kswc6" node="ha-739930-m04"
	
	
	==> kubelet <==
	Dec 04 20:13:53 ha-739930 kubelet[1305]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 04 20:13:53 ha-739930 kubelet[1305]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 04 20:13:53 ha-739930 kubelet[1305]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 04 20:13:53 ha-739930 kubelet[1305]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 04 20:13:53 ha-739930 kubelet[1305]: E1204 20:13:53.462332    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343233462001754,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:13:53 ha-739930 kubelet[1305]: E1204 20:13:53.462375    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343233462001754,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:03 ha-739930 kubelet[1305]: E1204 20:14:03.465094    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343243464625528,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:03 ha-739930 kubelet[1305]: E1204 20:14:03.465133    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343243464625528,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:13 ha-739930 kubelet[1305]: E1204 20:14:13.466702    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343253466412207,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:13 ha-739930 kubelet[1305]: E1204 20:14:13.467091    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343253466412207,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:23 ha-739930 kubelet[1305]: E1204 20:14:23.469001    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343263468683209,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:23 ha-739930 kubelet[1305]: E1204 20:14:23.469280    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343263468683209,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:33 ha-739930 kubelet[1305]: E1204 20:14:33.471311    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343273470919351,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:33 ha-739930 kubelet[1305]: E1204 20:14:33.471582    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343273470919351,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:43 ha-739930 kubelet[1305]: E1204 20:14:43.473913    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343283473338293,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:43 ha-739930 kubelet[1305]: E1204 20:14:43.474005    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343283473338293,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:53 ha-739930 kubelet[1305]: E1204 20:14:53.358128    1305 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 04 20:14:53 ha-739930 kubelet[1305]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 04 20:14:53 ha-739930 kubelet[1305]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 04 20:14:53 ha-739930 kubelet[1305]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 04 20:14:53 ha-739930 kubelet[1305]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 04 20:14:53 ha-739930 kubelet[1305]: E1204 20:14:53.476132    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343293475734296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:53 ha-739930 kubelet[1305]: E1204 20:14:53.476169    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343293475734296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:15:03 ha-739930 kubelet[1305]: E1204 20:15:03.477995    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343303477421901,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:15:03 ha-739930 kubelet[1305]: E1204 20:15:03.478354    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343303477421901,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-739930 -n ha-739930
helpers_test.go:261: (dbg) Run:  kubectl --context ha-739930 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.62s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (6.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 node start m02 -v=7 --alsologtostderr
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-739930 status -v=7 --alsologtostderr: (3.999540941s)
ha_test.go:437: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-739930 status -v=7 --alsologtostderr": 
ha_test.go:440: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-739930 status -v=7 --alsologtostderr": 
ha_test.go:443: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-739930 status -v=7 --alsologtostderr": 
ha_test.go:446: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-739930 status -v=7 --alsologtostderr": 
ha_test.go:450: (dbg) Run:  kubectl get nodes
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-739930 -n ha-739930
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-739930 logs -n 25: (1.251647117s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-739930 cp ha-739930-m03:/home/docker/cp-test.txt                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930:/home/docker/cp-test_ha-739930-m03_ha-739930.txt                       |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n ha-739930 sudo cat                                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | /home/docker/cp-test_ha-739930-m03_ha-739930.txt                                 |           |         |         |                     |                     |
	| cp      | ha-739930 cp ha-739930-m03:/home/docker/cp-test.txt                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m02:/home/docker/cp-test_ha-739930-m03_ha-739930-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n ha-739930-m02 sudo cat                                          | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | /home/docker/cp-test_ha-739930-m03_ha-739930-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-739930 cp ha-739930-m03:/home/docker/cp-test.txt                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m04:/home/docker/cp-test_ha-739930-m03_ha-739930-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n ha-739930-m04 sudo cat                                          | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | /home/docker/cp-test_ha-739930-m03_ha-739930-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-739930 cp testdata/cp-test.txt                                                | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-739930 cp ha-739930-m04:/home/docker/cp-test.txt                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1344431772/001/cp-test_ha-739930-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-739930 cp ha-739930-m04:/home/docker/cp-test.txt                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930:/home/docker/cp-test_ha-739930-m04_ha-739930.txt                       |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n ha-739930 sudo cat                                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | /home/docker/cp-test_ha-739930-m04_ha-739930.txt                                 |           |         |         |                     |                     |
	| cp      | ha-739930 cp ha-739930-m04:/home/docker/cp-test.txt                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m02:/home/docker/cp-test_ha-739930-m04_ha-739930-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n ha-739930-m02 sudo cat                                          | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | /home/docker/cp-test_ha-739930-m04_ha-739930-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-739930 cp ha-739930-m04:/home/docker/cp-test.txt                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m03:/home/docker/cp-test_ha-739930-m04_ha-739930-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n ha-739930-m03 sudo cat                                          | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | /home/docker/cp-test_ha-739930-m04_ha-739930-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-739930 node stop m02 -v=7                                                     | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-739930 node start m02 -v=7                                                    | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:15 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 20:08:11
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 20:08:11.939431   27912 out.go:345] Setting OutFile to fd 1 ...
	I1204 20:08:11.939545   27912 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 20:08:11.939555   27912 out.go:358] Setting ErrFile to fd 2...
	I1204 20:08:11.939562   27912 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 20:08:11.939744   27912 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19985-10581/.minikube/bin
	I1204 20:08:11.940314   27912 out.go:352] Setting JSON to false
	I1204 20:08:11.941189   27912 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3042,"bootTime":1733339850,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 20:08:11.941293   27912 start.go:139] virtualization: kvm guest
	I1204 20:08:11.944336   27912 out.go:177] * [ha-739930] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1204 20:08:11.945852   27912 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 20:08:11.945847   27912 notify.go:220] Checking for updates...
	I1204 20:08:11.948662   27912 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 20:08:11.950105   27912 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 20:08:11.951395   27912 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 20:08:11.952616   27912 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1204 20:08:11.953838   27912 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 20:08:11.955060   27912 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 20:08:11.990494   27912 out.go:177] * Using the kvm2 driver based on user configuration
	I1204 20:08:11.991825   27912 start.go:297] selected driver: kvm2
	I1204 20:08:11.991844   27912 start.go:901] validating driver "kvm2" against <nil>
	I1204 20:08:11.991856   27912 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 20:08:11.992661   27912 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 20:08:11.992744   27912 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19985-10581/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1204 20:08:12.008005   27912 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1204 20:08:12.008178   27912 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 20:08:12.008532   27912 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 20:08:12.008571   27912 cni.go:84] Creating CNI manager for ""
	I1204 20:08:12.008627   27912 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1204 20:08:12.008639   27912 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1204 20:08:12.008710   27912 start.go:340] cluster config:
	{Name:ha-739930 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 20:08:12.008840   27912 iso.go:125] acquiring lock: {Name:mk5fb0f3f6da76e6cd812291a551e1592ef2c232 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 20:08:12.010621   27912 out.go:177] * Starting "ha-739930" primary control-plane node in "ha-739930" cluster
	I1204 20:08:12.011905   27912 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 20:08:12.011946   27912 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1204 20:08:12.011958   27912 cache.go:56] Caching tarball of preloaded images
	I1204 20:08:12.012045   27912 preload.go:172] Found /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 20:08:12.012061   27912 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 20:08:12.012439   27912 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/config.json ...
	I1204 20:08:12.012463   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/config.json: {Name:mk7402f769abcec1c18cda99e23fa60ffac7b3dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:08:12.012602   27912 start.go:360] acquireMachinesLock for ha-739930: {Name:mkf124e8b45170ae95981b24944344de6899c5b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 20:08:12.012630   27912 start.go:364] duration metric: took 16.073µs to acquireMachinesLock for "ha-739930"
	I1204 20:08:12.012648   27912 start.go:93] Provisioning new machine with config: &{Name:ha-739930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 20:08:12.012705   27912 start.go:125] createHost starting for "" (driver="kvm2")
	I1204 20:08:12.014265   27912 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 20:08:12.014396   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:08:12.014435   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:08:12.028697   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39229
	I1204 20:08:12.029103   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:08:12.029651   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:08:12.029671   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:08:12.029950   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:08:12.030110   27912 main.go:141] libmachine: (ha-739930) Calling .GetMachineName
	I1204 20:08:12.030242   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:08:12.030391   27912 start.go:159] libmachine.API.Create for "ha-739930" (driver="kvm2")
	I1204 20:08:12.030413   27912 client.go:168] LocalClient.Create starting
	I1204 20:08:12.030437   27912 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem
	I1204 20:08:12.030469   27912 main.go:141] libmachine: Decoding PEM data...
	I1204 20:08:12.030485   27912 main.go:141] libmachine: Parsing certificate...
	I1204 20:08:12.030532   27912 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem
	I1204 20:08:12.030550   27912 main.go:141] libmachine: Decoding PEM data...
	I1204 20:08:12.030563   27912 main.go:141] libmachine: Parsing certificate...
	I1204 20:08:12.030580   27912 main.go:141] libmachine: Running pre-create checks...
	I1204 20:08:12.030594   27912 main.go:141] libmachine: (ha-739930) Calling .PreCreateCheck
	I1204 20:08:12.030896   27912 main.go:141] libmachine: (ha-739930) Calling .GetConfigRaw
	I1204 20:08:12.031303   27912 main.go:141] libmachine: Creating machine...
	I1204 20:08:12.031315   27912 main.go:141] libmachine: (ha-739930) Calling .Create
	I1204 20:08:12.031447   27912 main.go:141] libmachine: (ha-739930) Creating KVM machine...
	I1204 20:08:12.032790   27912 main.go:141] libmachine: (ha-739930) DBG | found existing default KVM network
	I1204 20:08:12.033408   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:12.033271   27935 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b70}
	I1204 20:08:12.033431   27912 main.go:141] libmachine: (ha-739930) DBG | created network xml: 
	I1204 20:08:12.033442   27912 main.go:141] libmachine: (ha-739930) DBG | <network>
	I1204 20:08:12.033450   27912 main.go:141] libmachine: (ha-739930) DBG |   <name>mk-ha-739930</name>
	I1204 20:08:12.033465   27912 main.go:141] libmachine: (ha-739930) DBG |   <dns enable='no'/>
	I1204 20:08:12.033475   27912 main.go:141] libmachine: (ha-739930) DBG |   
	I1204 20:08:12.033484   27912 main.go:141] libmachine: (ha-739930) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1204 20:08:12.033497   27912 main.go:141] libmachine: (ha-739930) DBG |     <dhcp>
	I1204 20:08:12.033526   27912 main.go:141] libmachine: (ha-739930) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1204 20:08:12.033560   27912 main.go:141] libmachine: (ha-739930) DBG |     </dhcp>
	I1204 20:08:12.033571   27912 main.go:141] libmachine: (ha-739930) DBG |   </ip>
	I1204 20:08:12.033582   27912 main.go:141] libmachine: (ha-739930) DBG |   
	I1204 20:08:12.033602   27912 main.go:141] libmachine: (ha-739930) DBG | </network>
	I1204 20:08:12.033619   27912 main.go:141] libmachine: (ha-739930) DBG | 
	I1204 20:08:12.038715   27912 main.go:141] libmachine: (ha-739930) DBG | trying to create private KVM network mk-ha-739930 192.168.39.0/24...
	I1204 20:08:12.104228   27912 main.go:141] libmachine: (ha-739930) Setting up store path in /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930 ...
	I1204 20:08:12.104263   27912 main.go:141] libmachine: (ha-739930) Building disk image from file:///home/jenkins/minikube-integration/19985-10581/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1204 20:08:12.104273   27912 main.go:141] libmachine: (ha-739930) DBG | private KVM network mk-ha-739930 192.168.39.0/24 created
	I1204 20:08:12.104290   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:12.104148   27935 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 20:08:12.104318   27912 main.go:141] libmachine: (ha-739930) Downloading /home/jenkins/minikube-integration/19985-10581/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19985-10581/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1204 20:08:12.357869   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:12.357760   27935 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa...
	I1204 20:08:12.476934   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:12.476798   27935 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/ha-739930.rawdisk...
	I1204 20:08:12.476961   27912 main.go:141] libmachine: (ha-739930) DBG | Writing magic tar header
	I1204 20:08:12.476973   27912 main.go:141] libmachine: (ha-739930) DBG | Writing SSH key tar header
	I1204 20:08:12.476980   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:12.476911   27935 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930 ...
	I1204 20:08:12.476989   27912 main.go:141] libmachine: (ha-739930) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930
	I1204 20:08:12.477071   27912 main.go:141] libmachine: (ha-739930) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube/machines
	I1204 20:08:12.477126   27912 main.go:141] libmachine: (ha-739930) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930 (perms=drwx------)
	I1204 20:08:12.477140   27912 main.go:141] libmachine: (ha-739930) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 20:08:12.477159   27912 main.go:141] libmachine: (ha-739930) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581
	I1204 20:08:12.477173   27912 main.go:141] libmachine: (ha-739930) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1204 20:08:12.477183   27912 main.go:141] libmachine: (ha-739930) DBG | Checking permissions on dir: /home/jenkins
	I1204 20:08:12.477188   27912 main.go:141] libmachine: (ha-739930) DBG | Checking permissions on dir: /home
	I1204 20:08:12.477199   27912 main.go:141] libmachine: (ha-739930) DBG | Skipping /home - not owner
	I1204 20:08:12.477241   27912 main.go:141] libmachine: (ha-739930) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube/machines (perms=drwxr-xr-x)
	I1204 20:08:12.477265   27912 main.go:141] libmachine: (ha-739930) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube (perms=drwxr-xr-x)
	I1204 20:08:12.477280   27912 main.go:141] libmachine: (ha-739930) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581 (perms=drwxrwxr-x)
	I1204 20:08:12.477294   27912 main.go:141] libmachine: (ha-739930) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1204 20:08:12.477311   27912 main.go:141] libmachine: (ha-739930) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1204 20:08:12.477322   27912 main.go:141] libmachine: (ha-739930) Creating domain...
	I1204 20:08:12.478077   27912 main.go:141] libmachine: (ha-739930) define libvirt domain using xml: 
	I1204 20:08:12.478098   27912 main.go:141] libmachine: (ha-739930) <domain type='kvm'>
	I1204 20:08:12.478108   27912 main.go:141] libmachine: (ha-739930)   <name>ha-739930</name>
	I1204 20:08:12.478120   27912 main.go:141] libmachine: (ha-739930)   <memory unit='MiB'>2200</memory>
	I1204 20:08:12.478128   27912 main.go:141] libmachine: (ha-739930)   <vcpu>2</vcpu>
	I1204 20:08:12.478137   27912 main.go:141] libmachine: (ha-739930)   <features>
	I1204 20:08:12.478144   27912 main.go:141] libmachine: (ha-739930)     <acpi/>
	I1204 20:08:12.478153   27912 main.go:141] libmachine: (ha-739930)     <apic/>
	I1204 20:08:12.478159   27912 main.go:141] libmachine: (ha-739930)     <pae/>
	I1204 20:08:12.478166   27912 main.go:141] libmachine: (ha-739930)     
	I1204 20:08:12.478176   27912 main.go:141] libmachine: (ha-739930)   </features>
	I1204 20:08:12.478183   27912 main.go:141] libmachine: (ha-739930)   <cpu mode='host-passthrough'>
	I1204 20:08:12.478254   27912 main.go:141] libmachine: (ha-739930)   
	I1204 20:08:12.478278   27912 main.go:141] libmachine: (ha-739930)   </cpu>
	I1204 20:08:12.478290   27912 main.go:141] libmachine: (ha-739930)   <os>
	I1204 20:08:12.478313   27912 main.go:141] libmachine: (ha-739930)     <type>hvm</type>
	I1204 20:08:12.478326   27912 main.go:141] libmachine: (ha-739930)     <boot dev='cdrom'/>
	I1204 20:08:12.478335   27912 main.go:141] libmachine: (ha-739930)     <boot dev='hd'/>
	I1204 20:08:12.478344   27912 main.go:141] libmachine: (ha-739930)     <bootmenu enable='no'/>
	I1204 20:08:12.478354   27912 main.go:141] libmachine: (ha-739930)   </os>
	I1204 20:08:12.478361   27912 main.go:141] libmachine: (ha-739930)   <devices>
	I1204 20:08:12.478371   27912 main.go:141] libmachine: (ha-739930)     <disk type='file' device='cdrom'>
	I1204 20:08:12.478384   27912 main.go:141] libmachine: (ha-739930)       <source file='/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/boot2docker.iso'/>
	I1204 20:08:12.478394   27912 main.go:141] libmachine: (ha-739930)       <target dev='hdc' bus='scsi'/>
	I1204 20:08:12.478401   27912 main.go:141] libmachine: (ha-739930)       <readonly/>
	I1204 20:08:12.478416   27912 main.go:141] libmachine: (ha-739930)     </disk>
	I1204 20:08:12.478430   27912 main.go:141] libmachine: (ha-739930)     <disk type='file' device='disk'>
	I1204 20:08:12.478442   27912 main.go:141] libmachine: (ha-739930)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1204 20:08:12.478457   27912 main.go:141] libmachine: (ha-739930)       <source file='/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/ha-739930.rawdisk'/>
	I1204 20:08:12.478467   27912 main.go:141] libmachine: (ha-739930)       <target dev='hda' bus='virtio'/>
	I1204 20:08:12.478475   27912 main.go:141] libmachine: (ha-739930)     </disk>
	I1204 20:08:12.478490   27912 main.go:141] libmachine: (ha-739930)     <interface type='network'>
	I1204 20:08:12.478503   27912 main.go:141] libmachine: (ha-739930)       <source network='mk-ha-739930'/>
	I1204 20:08:12.478512   27912 main.go:141] libmachine: (ha-739930)       <model type='virtio'/>
	I1204 20:08:12.478520   27912 main.go:141] libmachine: (ha-739930)     </interface>
	I1204 20:08:12.478530   27912 main.go:141] libmachine: (ha-739930)     <interface type='network'>
	I1204 20:08:12.478542   27912 main.go:141] libmachine: (ha-739930)       <source network='default'/>
	I1204 20:08:12.478552   27912 main.go:141] libmachine: (ha-739930)       <model type='virtio'/>
	I1204 20:08:12.478599   27912 main.go:141] libmachine: (ha-739930)     </interface>
	I1204 20:08:12.478617   27912 main.go:141] libmachine: (ha-739930)     <serial type='pty'>
	I1204 20:08:12.478622   27912 main.go:141] libmachine: (ha-739930)       <target port='0'/>
	I1204 20:08:12.478628   27912 main.go:141] libmachine: (ha-739930)     </serial>
	I1204 20:08:12.478636   27912 main.go:141] libmachine: (ha-739930)     <console type='pty'>
	I1204 20:08:12.478641   27912 main.go:141] libmachine: (ha-739930)       <target type='serial' port='0'/>
	I1204 20:08:12.478650   27912 main.go:141] libmachine: (ha-739930)     </console>
	I1204 20:08:12.478654   27912 main.go:141] libmachine: (ha-739930)     <rng model='virtio'>
	I1204 20:08:12.478660   27912 main.go:141] libmachine: (ha-739930)       <backend model='random'>/dev/random</backend>
	I1204 20:08:12.478666   27912 main.go:141] libmachine: (ha-739930)     </rng>
	I1204 20:08:12.478671   27912 main.go:141] libmachine: (ha-739930)     
	I1204 20:08:12.478674   27912 main.go:141] libmachine: (ha-739930)     
	I1204 20:08:12.478679   27912 main.go:141] libmachine: (ha-739930)   </devices>
	I1204 20:08:12.478685   27912 main.go:141] libmachine: (ha-739930) </domain>
	I1204 20:08:12.478691   27912 main.go:141] libmachine: (ha-739930) 
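The <domain> XML above is the definition the kvm2 driver hands to libvirt before booting the VM. As a rough, hand-rolled equivalent of the "define libvirt domain using xml" / "Creating domain..." steps, the Go sketch below writes a definition to a temporary file and registers and boots it with virsh. It assumes virsh is on PATH; the function and the placeholder XML string are illustrative, not minikube's actual code path (the driver talks to libvirt directly).

package main

import (
    "fmt"
    "log"
    "os"
    "os/exec"
)

// defineAndStart writes a domain definition to a temp file, registers it with
// "virsh define" and boots it with "virsh start". Illustrative only.
func defineAndStart(name, domainXML string) error {
    f, err := os.CreateTemp("", name+"-*.xml")
    if err != nil {
        return err
    }
    defer os.Remove(f.Name())
    if _, err := f.WriteString(domainXML); err != nil {
        return err
    }
    if err := f.Close(); err != nil {
        return err
    }
    if out, err := exec.Command("virsh", "define", f.Name()).CombinedOutput(); err != nil {
        return fmt.Errorf("virsh define: %v: %s", err, out)
    }
    if out, err := exec.Command("virsh", "start", name).CombinedOutput(); err != nil {
        return fmt.Errorf("virsh start: %v: %s", err, out)
    }
    return nil
}

func main() {
    // The XML would be the <domain> document shown in the log above.
    if err := defineAndStart("ha-739930", "<domain type='kvm'>...</domain>"); err != nil {
        log.Fatal(err)
    }
}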
	I1204 20:08:12.482962   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:1f:34:29 in network default
	I1204 20:08:12.483451   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:12.483468   27912 main.go:141] libmachine: (ha-739930) Ensuring networks are active...
	I1204 20:08:12.484073   27912 main.go:141] libmachine: (ha-739930) Ensuring network default is active
	I1204 20:08:12.484443   27912 main.go:141] libmachine: (ha-739930) Ensuring network mk-ha-739930 is active
	I1204 20:08:12.485051   27912 main.go:141] libmachine: (ha-739930) Getting domain xml...
	I1204 20:08:12.485709   27912 main.go:141] libmachine: (ha-739930) Creating domain...
	I1204 20:08:13.663232   27912 main.go:141] libmachine: (ha-739930) Waiting to get IP...
	I1204 20:08:13.663928   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:13.664244   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:13.664289   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:13.664239   27935 retry.go:31] will retry after 311.107761ms: waiting for machine to come up
	I1204 20:08:13.976518   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:13.976875   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:13.976897   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:13.976832   27935 retry.go:31] will retry after 302.848525ms: waiting for machine to come up
	I1204 20:08:14.281431   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:14.281818   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:14.281846   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:14.281773   27935 retry.go:31] will retry after 460.768304ms: waiting for machine to come up
	I1204 20:08:14.744364   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:14.744813   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:14.744835   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:14.744754   27935 retry.go:31] will retry after 399.590847ms: waiting for machine to come up
	I1204 20:08:15.146387   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:15.146887   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:15.146911   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:15.146850   27935 retry.go:31] will retry after 733.547268ms: waiting for machine to come up
	I1204 20:08:15.882052   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:15.882481   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:15.882509   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:15.882450   27935 retry.go:31] will retry after 598.816129ms: waiting for machine to come up
	I1204 20:08:16.483323   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:16.483724   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:16.483766   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:16.483669   27935 retry.go:31] will retry after 816.886511ms: waiting for machine to come up
	I1204 20:08:17.302385   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:17.302850   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:17.303157   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:17.303086   27935 retry.go:31] will retry after 1.092347228s: waiting for machine to come up
	I1204 20:08:18.397513   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:18.397955   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:18.397979   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:18.397908   27935 retry.go:31] will retry after 1.349280463s: waiting for machine to come up
	I1204 20:08:19.748591   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:19.749086   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:19.749107   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:19.749051   27935 retry.go:31] will retry after 1.929176971s: waiting for machine to come up
	I1204 20:08:21.681322   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:21.681787   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:21.681821   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:21.681719   27935 retry.go:31] will retry after 2.034104658s: waiting for machine to come up
	I1204 20:08:23.717496   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:23.717880   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:23.717910   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:23.717836   27935 retry.go:31] will retry after 2.982891394s: waiting for machine to come up
	I1204 20:08:26.703937   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:26.704406   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:26.704442   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:26.704358   27935 retry.go:31] will retry after 2.968408416s: waiting for machine to come up
	I1204 20:08:29.675768   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:29.676304   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:29.676332   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:29.676260   27935 retry.go:31] will retry after 5.520024319s: waiting for machine to come up
	I1204 20:08:35.199569   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.200041   27912 main.go:141] libmachine: (ha-739930) Found IP for machine: 192.168.39.183
	I1204 20:08:35.200065   27912 main.go:141] libmachine: (ha-739930) Reserving static IP address...
	I1204 20:08:35.200092   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has current primary IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.200437   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find host DHCP lease matching {name: "ha-739930", mac: "52:54:00:b9:91:f7", ip: "192.168.39.183"} in network mk-ha-739930
	I1204 20:08:35.268817   27912 main.go:141] libmachine: (ha-739930) Reserved static IP address: 192.168.39.183
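The repeated "will retry after ...: waiting for machine to come up" lines are a polling loop that sleeps a little longer (with jitter) between attempts until the domain's DHCP lease appears. A minimal sketch of that pattern follows; the starting delay, growth factor, and stub lease check are assumptions for illustration, not the driver's exact retry schedule.

package main

import (
    "errors"
    "fmt"
    "math/rand"
    "time"
)

// waitForIP polls check() until it returns an IP or the deadline passes,
// growing the delay between attempts, similar to the retry.go lines above.
func waitForIP(check func() (string, error), timeout time.Duration) (string, error) {
    deadline := time.Now().Add(timeout)
    delay := 300 * time.Millisecond // illustrative starting delay
    for time.Now().Before(deadline) {
        if ip, err := check(); err == nil && ip != "" {
            return ip, nil
        }
        jittered := delay + time.Duration(rand.Int63n(int64(delay)))
        fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
        time.Sleep(jittered)
        delay = delay * 3 / 2 // grow the delay between attempts
    }
    return "", errors.New("timed out waiting for machine IP")
}

func main() {
    // Stub check that "finds" an IP on the third attempt, just to exercise the loop.
    attempts := 0
    ip, err := waitForIP(func() (string, error) {
        attempts++
        if attempts < 3 {
            return "", errors.New("no lease yet")
        }
        return "192.168.39.183", nil
    }, 30*time.Second)
    fmt.Println(ip, err)
}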
	I1204 20:08:35.268847   27912 main.go:141] libmachine: (ha-739930) Waiting for SSH to be available...
	I1204 20:08:35.268856   27912 main.go:141] libmachine: (ha-739930) DBG | Getting to WaitForSSH function...
	I1204 20:08:35.271480   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.271869   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:35.271895   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.271987   27912 main.go:141] libmachine: (ha-739930) DBG | Using SSH client type: external
	I1204 20:08:35.272004   27912 main.go:141] libmachine: (ha-739930) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa (-rw-------)
	I1204 20:08:35.272069   27912 main.go:141] libmachine: (ha-739930) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.183 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 20:08:35.272087   27912 main.go:141] libmachine: (ha-739930) DBG | About to run SSH command:
	I1204 20:08:35.272103   27912 main.go:141] libmachine: (ha-739930) DBG | exit 0
	I1204 20:08:35.395351   27912 main.go:141] libmachine: (ha-739930) DBG | SSH cmd err, output: <nil>: 
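The SSH probe above shells out to an external ssh client with host-key checking disabled, a dedicated identity file, and "exit 0" as the remote command, exactly as listed in the debug line. A rough Go equivalent is sketched below; the key path and address are taken from this log, while the program itself is illustrative rather than the libmachine implementation.

package main

import (
    "log"
    "os/exec"
)

func main() {
    // Same style of invocation as the "Using SSH client type: external" debug line:
    // no host-key checking, a specific identity file, and "exit 0" as the command.
    cmd := exec.Command("ssh",
        "-F", "/dev/null",
        "-o", "ConnectTimeout=10",
        "-o", "StrictHostKeyChecking=no",
        "-o", "UserKnownHostsFile=/dev/null",
        "-o", "IdentitiesOnly=yes",
        "-i", "/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa",
        "-p", "22",
        "docker@192.168.39.183",
        "exit 0")
    if out, err := cmd.CombinedOutput(); err != nil {
        log.Fatalf("ssh probe failed: %v: %s", err, out)
    }
    log.Println("SSH is available")
}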
	I1204 20:08:35.395650   27912 main.go:141] libmachine: (ha-739930) KVM machine creation complete!
	I1204 20:08:35.395986   27912 main.go:141] libmachine: (ha-739930) Calling .GetConfigRaw
	I1204 20:08:35.396534   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:08:35.396731   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:08:35.396857   27912 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1204 20:08:35.396871   27912 main.go:141] libmachine: (ha-739930) Calling .GetState
	I1204 20:08:35.398039   27912 main.go:141] libmachine: Detecting operating system of created instance...
	I1204 20:08:35.398051   27912 main.go:141] libmachine: Waiting for SSH to be available...
	I1204 20:08:35.398055   27912 main.go:141] libmachine: Getting to WaitForSSH function...
	I1204 20:08:35.398060   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:35.400170   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.400525   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:35.400571   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.400650   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:35.400812   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:35.400979   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:35.401117   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:35.401289   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:08:35.401492   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1204 20:08:35.401507   27912 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1204 20:08:35.502303   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 20:08:35.502340   27912 main.go:141] libmachine: Detecting the provisioner...
	I1204 20:08:35.502352   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:35.504752   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.505142   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:35.505165   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.505360   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:35.505545   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:35.505676   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:35.505789   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:35.505915   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:08:35.506073   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1204 20:08:35.506082   27912 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1204 20:08:35.608173   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1204 20:08:35.608233   27912 main.go:141] libmachine: found compatible host: buildroot
	I1204 20:08:35.608240   27912 main.go:141] libmachine: Provisioning with buildroot...
	I1204 20:08:35.608247   27912 main.go:141] libmachine: (ha-739930) Calling .GetMachineName
	I1204 20:08:35.608464   27912 buildroot.go:166] provisioning hostname "ha-739930"
	I1204 20:08:35.608480   27912 main.go:141] libmachine: (ha-739930) Calling .GetMachineName
	I1204 20:08:35.608679   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:35.611354   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.611746   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:35.611772   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.611904   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:35.612062   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:35.612200   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:35.612312   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:35.612460   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:08:35.612630   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1204 20:08:35.612642   27912 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-739930 && echo "ha-739930" | sudo tee /etc/hostname
	I1204 20:08:35.730422   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-739930
	
	I1204 20:08:35.730456   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:35.732817   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.733139   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:35.733168   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.733310   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:35.733480   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:35.733651   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:35.733802   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:35.733983   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:08:35.734154   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1204 20:08:35.734171   27912 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-739930' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-739930/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-739930' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 20:08:35.843780   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 20:08:35.843821   27912 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 20:08:35.843865   27912 buildroot.go:174] setting up certificates
	I1204 20:08:35.843880   27912 provision.go:84] configureAuth start
	I1204 20:08:35.843894   27912 main.go:141] libmachine: (ha-739930) Calling .GetMachineName
	I1204 20:08:35.844232   27912 main.go:141] libmachine: (ha-739930) Calling .GetIP
	I1204 20:08:35.847046   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.847366   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:35.847411   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.847570   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:35.849830   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.850112   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:35.850131   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.850320   27912 provision.go:143] copyHostCerts
	I1204 20:08:35.850348   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 20:08:35.850382   27912 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 20:08:35.850391   27912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 20:08:35.850460   27912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 20:08:35.850567   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 20:08:35.850595   27912 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 20:08:35.850604   27912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 20:08:35.850645   27912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 20:08:35.850723   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 20:08:35.850741   27912 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 20:08:35.850748   27912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 20:08:35.850772   27912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 20:08:35.850823   27912 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.ha-739930 san=[127.0.0.1 192.168.39.183 ha-739930 localhost minikube]
	I1204 20:08:35.983720   27912 provision.go:177] copyRemoteCerts
	I1204 20:08:35.983786   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 20:08:35.983810   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:35.986241   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.986583   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:35.986614   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.986772   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:35.986960   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:35.987093   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:35.987240   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:08:36.068879   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1204 20:08:36.068950   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1204 20:08:36.091202   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1204 20:08:36.091259   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 20:08:36.112918   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1204 20:08:36.112998   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 20:08:36.134856   27912 provision.go:87] duration metric: took 290.963844ms to configureAuth
	I1204 20:08:36.134887   27912 buildroot.go:189] setting minikube options for container-runtime
	I1204 20:08:36.135063   27912 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:08:36.135153   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:36.137760   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.138113   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:36.138138   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.138342   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:36.138505   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:36.138658   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:36.138779   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:36.138924   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:08:36.139114   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1204 20:08:36.139131   27912 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 20:08:36.346218   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 20:08:36.346255   27912 main.go:141] libmachine: Checking connection to Docker...
	I1204 20:08:36.346283   27912 main.go:141] libmachine: (ha-739930) Calling .GetURL
	I1204 20:08:36.347448   27912 main.go:141] libmachine: (ha-739930) DBG | Using libvirt version 6000000
	I1204 20:08:36.349418   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.349723   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:36.349742   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.349920   27912 main.go:141] libmachine: Docker is up and running!
	I1204 20:08:36.349936   27912 main.go:141] libmachine: Reticulating splines...
	I1204 20:08:36.349943   27912 client.go:171] duration metric: took 24.3195237s to LocalClient.Create
	I1204 20:08:36.349963   27912 start.go:167] duration metric: took 24.319574814s to libmachine.API.Create "ha-739930"
	I1204 20:08:36.349976   27912 start.go:293] postStartSetup for "ha-739930" (driver="kvm2")
	I1204 20:08:36.349991   27912 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 20:08:36.350013   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:08:36.350205   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 20:08:36.350228   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:36.351979   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.352286   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:36.352313   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.352437   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:36.352594   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:36.352706   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:36.352816   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:08:36.432460   27912 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 20:08:36.436012   27912 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 20:08:36.436028   27912 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 20:08:36.436089   27912 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 20:08:36.436188   27912 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 20:08:36.436201   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> /etc/ssl/certs/177432.pem
	I1204 20:08:36.436304   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 20:08:36.444678   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 20:08:36.467397   27912 start.go:296] duration metric: took 117.407014ms for postStartSetup
	I1204 20:08:36.467437   27912 main.go:141] libmachine: (ha-739930) Calling .GetConfigRaw
	I1204 20:08:36.467977   27912 main.go:141] libmachine: (ha-739930) Calling .GetIP
	I1204 20:08:36.470186   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.470558   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:36.470586   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.470798   27912 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/config.json ...
	I1204 20:08:36.470974   27912 start.go:128] duration metric: took 24.458260215s to createHost
	I1204 20:08:36.470996   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:36.472973   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.473263   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:36.473284   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.473418   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:36.473574   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:36.473716   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:36.473887   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:36.474035   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:08:36.474202   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1204 20:08:36.474217   27912 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 20:08:36.575008   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733342916.551867748
	
	I1204 20:08:36.575023   27912 fix.go:216] guest clock: 1733342916.551867748
	I1204 20:08:36.575030   27912 fix.go:229] Guest: 2024-12-04 20:08:36.551867748 +0000 UTC Remote: 2024-12-04 20:08:36.470986638 +0000 UTC m=+24.568358011 (delta=80.88111ms)
	I1204 20:08:36.575056   27912 fix.go:200] guest clock delta is within tolerance: 80.88111ms
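The guest-clock check parses the VM's date +%s.%N output and compares it with the host clock, accepting the skew when the delta is within tolerance. The small sketch below reproduces the ~80.88ms delta from the two timestamps in this log; the one-second tolerance is an assumption for illustration, not minikube's configured value.

package main

import (
    "fmt"
    "strconv"
    "strings"
    "time"
)

func main() {
    // Output of `date +%s.%N` on the guest, taken from the log above.
    guestStr := "1733342916.551867748"
    parts := strings.SplitN(guestStr, ".", 2)
    sec, err := strconv.ParseInt(parts[0], 10, 64)
    if err != nil {
        panic(err)
    }
    nsec, err := strconv.ParseInt(parts[1], 10, 64)
    if err != nil {
        panic(err)
    }
    guest := time.Unix(sec, nsec).UTC()

    // In the real check the "remote" side is the host clock at probe time;
    // here we reuse the host timestamp reported in the log to reproduce the delta.
    remote := time.Date(2024, 12, 4, 20, 8, 36, 470986638, time.UTC)

    delta := guest.Sub(remote)
    tolerance := 1 * time.Second // assumed tolerance, for illustration only
    fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta.Abs() <= tolerance)
}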
	I1204 20:08:36.575080   27912 start.go:83] releasing machines lock for "ha-739930", held for 24.56242194s
	I1204 20:08:36.575103   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:08:36.575310   27912 main.go:141] libmachine: (ha-739930) Calling .GetIP
	I1204 20:08:36.577787   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.578087   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:36.578125   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.578233   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:08:36.578645   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:08:36.578807   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:08:36.578883   27912 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 20:08:36.578924   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:36.579001   27912 ssh_runner.go:195] Run: cat /version.json
	I1204 20:08:36.579018   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:36.581456   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.581787   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:36.581809   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.581864   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.581930   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:36.582100   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:36.582239   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:36.582276   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:36.582299   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.582396   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:08:36.582566   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:36.582713   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:36.582863   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:36.582989   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:08:36.675618   27912 ssh_runner.go:195] Run: systemctl --version
	I1204 20:08:36.681185   27912 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 20:08:36.833908   27912 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 20:08:36.839964   27912 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 20:08:36.840024   27912 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 20:08:36.855758   27912 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 20:08:36.855780   27912 start.go:495] detecting cgroup driver to use...
	I1204 20:08:36.855848   27912 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 20:08:36.870692   27912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 20:08:36.883541   27912 docker.go:217] disabling cri-docker service (if available) ...
	I1204 20:08:36.883596   27912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 20:08:36.896118   27912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 20:08:36.908920   27912 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 20:08:37.025056   27912 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 20:08:37.187310   27912 docker.go:233] disabling docker service ...
	I1204 20:08:37.187365   27912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 20:08:37.200934   27912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 20:08:37.212871   27912 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 20:08:37.332646   27912 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 20:08:37.440309   27912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 20:08:37.453353   27912 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 20:08:37.470970   27912 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 20:08:37.471030   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:08:37.480927   27912 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 20:08:37.481009   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:08:37.491149   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:08:37.500802   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:08:37.510374   27912 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 20:08:37.520079   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:08:37.529955   27912 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:08:37.545993   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:08:37.555622   27912 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 20:08:37.564180   27912 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 20:08:37.564228   27912 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 20:08:37.576296   27912 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 20:08:37.585144   27912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 20:08:37.693931   27912 ssh_runner.go:195] Run: sudo systemctl restart crio
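
The sed invocations above pin the pause image and switch CRI-O's cgroup manager in /etc/crio/crio.conf.d/02-crio.conf before the daemon is restarted. Below is a minimal Go sketch of those same two edits done with the standard library instead of sed; it is an illustration only, not minikube's implementation, and assumes it runs with permission to rewrite that drop-in file.

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		log.Fatal(err)
	}
	// The same substitutions as the two sed commands in the log above.
	edits := []struct{ re, repl string }{
		{`(?m)^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.10"`},
		{`(?m)^.*cgroup_manager = .*$`, `cgroup_manager = "cgroupfs"`},
	}
	for _, e := range edits {
		data = regexp.MustCompile(e.re).ReplaceAll(data, []byte(e.repl))
	}
	if err := os.WriteFile(conf, data, 0644); err != nil {
		log.Fatal(err)
	}
}

As in the log, the new pause image and cgroup driver only take effect once crio is restarted (systemctl daemon-reload && systemctl restart crio).
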
	I1204 20:08:37.777449   27912 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 20:08:37.777509   27912 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 20:08:37.781553   27912 start.go:563] Will wait 60s for crictl version
	I1204 20:08:37.781604   27912 ssh_runner.go:195] Run: which crictl
	I1204 20:08:37.784811   27912 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 20:08:37.822634   27912 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 20:08:37.822702   27912 ssh_runner.go:195] Run: crio --version
	I1204 20:08:37.848190   27912 ssh_runner.go:195] Run: crio --version
	I1204 20:08:37.873431   27912 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 20:08:37.874606   27912 main.go:141] libmachine: (ha-739930) Calling .GetIP
	I1204 20:08:37.877259   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:37.877590   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:37.877619   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:37.877786   27912 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1204 20:08:37.881175   27912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
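
The grep followed by the rewrite-and-copy pipeline above is how the host.minikube.internal record is kept in the guest's /etc/hosts. A standalone Go sketch of the same idempotent update follows; it assumes write access to the file and is not taken from minikube's sources.

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost rewrites an /etc/hosts-style file so that exactly one line maps host to ip.
func pinHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	entry := ip + "\t" + host
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if line == entry {
			return nil // the exact mapping is already present
		}
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop a stale mapping for this hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHost("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
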
	I1204 20:08:37.892903   27912 kubeadm.go:883] updating cluster {Name:ha-739930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 20:08:37.892996   27912 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 20:08:37.893068   27912 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 20:08:37.926070   27912 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1204 20:08:37.926123   27912 ssh_runner.go:195] Run: which lz4
	I1204 20:08:37.929507   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1204 20:08:37.929636   27912 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1204 20:08:37.933391   27912 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1204 20:08:37.933415   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1204 20:08:39.139354   27912 crio.go:462] duration metric: took 1.209791733s to copy over tarball
	I1204 20:08:39.139460   27912 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1204 20:08:41.096167   27912 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.956678939s)
	I1204 20:08:41.096191   27912 crio.go:469] duration metric: took 1.956790325s to extract the tarball
	I1204 20:08:41.096199   27912 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1204 20:08:41.132019   27912 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 20:08:41.174932   27912 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 20:08:41.174955   27912 cache_images.go:84] Images are preloaded, skipping loading
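
Because no preloaded images were found on first check, the tarball was copied in and unpacked, after which the second crictl listing succeeds. A rough sketch of that extraction step, shelling out to tar/lz4 exactly as the log does (it assumes sudo, tar and lz4 are present in the guest, as they are in the minikube ISO):

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
	log.Printf("took %s to extract the tarball", time.Since(start))
}
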
	I1204 20:08:41.174962   27912 kubeadm.go:934] updating node { 192.168.39.183 8443 v1.31.2 crio true true} ...
	I1204 20:08:41.175056   27912 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-739930 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.183
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 20:08:41.175118   27912 ssh_runner.go:195] Run: crio config
	I1204 20:08:41.217894   27912 cni.go:84] Creating CNI manager for ""
	I1204 20:08:41.217917   27912 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1204 20:08:41.217927   27912 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 20:08:41.217952   27912 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.183 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-739930 NodeName:ha-739930 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.183"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.183 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 20:08:41.218081   27912 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.183
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-739930"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.183"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.183"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
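
The dump above is a multi-document kubeadm config (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one file). A small sketch that walks such a file and prints each document's apiVersion and kind, a cheap sanity check before handing it to kubeadm init; it assumes the gopkg.in/yaml.v3 module and a hypothetical local copy named kubeadm.yaml rather than the in-guest /var/tmp/minikube/kubeadm.yaml.

package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the generated config
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			log.Fatal(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}
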
	
	I1204 20:08:41.218111   27912 kube-vip.go:115] generating kube-vip config ...
	I1204 20:08:41.218165   27912 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1204 20:08:41.233083   27912 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1204 20:08:41.233174   27912 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
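
kube-vip advertises 192.168.39.254 as the control-plane virtual IP over ARP on eth0, so it has to live in the same subnet as the node address leased earlier (192.168.39.183/24) for the other control-plane nodes to reach it. A stdlib-only sketch of that check:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	nodePrefix := netip.MustParsePrefix("192.168.39.183/24") // node address and mask from the DHCP lease
	vip := netip.MustParseAddr("192.168.39.254")             // APIServerHAVIP / kube-vip "address" env var
	fmt.Printf("VIP %s inside %s: %v\n", vip, nodePrefix.Masked(), nodePrefix.Contains(vip))
}
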
	I1204 20:08:41.233229   27912 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 20:08:41.242410   27912 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 20:08:41.242479   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1204 20:08:41.251172   27912 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1204 20:08:41.266346   27912 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 20:08:41.281669   27912 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1204 20:08:41.296753   27912 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1204 20:08:41.311501   27912 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1204 20:08:41.314975   27912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 20:08:41.325862   27912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 20:08:41.458198   27912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 20:08:41.473798   27912 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930 for IP: 192.168.39.183
	I1204 20:08:41.473814   27912 certs.go:194] generating shared ca certs ...
	I1204 20:08:41.473829   27912 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:08:41.473951   27912 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 20:08:41.473998   27912 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 20:08:41.474012   27912 certs.go:256] generating profile certs ...
	I1204 20:08:41.474071   27912 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.key
	I1204 20:08:41.474104   27912 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.crt with IP's: []
	I1204 20:08:41.679553   27912 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.crt ...
	I1204 20:08:41.679577   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.crt: {Name:mk3cb32626a63b25e9bcb53dbf57982e8c59176a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:08:41.679756   27912 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.key ...
	I1204 20:08:41.679770   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.key: {Name:mk5952f9a719bbb3868bb675769b7b60346c6fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:08:41.679866   27912 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.84e45395
	I1204 20:08:41.679888   27912 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.84e45395 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.183 192.168.39.254]
	I1204 20:08:42.002083   27912 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.84e45395 ...
	I1204 20:08:42.002109   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.84e45395: {Name:mk5f9c87f1a9d17c216fb1ba76a871a4d200a2f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:08:42.002298   27912 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.84e45395 ...
	I1204 20:08:42.002314   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.84e45395: {Name:mkbc19c0135d212682268a777ef3380b2e19b0ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:08:42.002409   27912 certs.go:381] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.84e45395 -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt
	I1204 20:08:42.002519   27912 certs.go:385] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.84e45395 -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key
	I1204 20:08:42.002573   27912 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key
	I1204 20:08:42.002587   27912 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.crt with IP's: []
	I1204 20:08:42.211018   27912 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.crt ...
	I1204 20:08:42.211049   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.crt: {Name:mkf1a9add2f9343bc4f70a7fa70f135cc4d00f4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:08:42.211250   27912 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key ...
	I1204 20:08:42.211265   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key: {Name:mkb8fc6229780db95a674383629b517d0cfa035d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:08:42.211361   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1204 20:08:42.211400   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1204 20:08:42.211422   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1204 20:08:42.211442   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1204 20:08:42.211459   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1204 20:08:42.211477   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1204 20:08:42.211491   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1204 20:08:42.211508   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1204 20:08:42.211575   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 20:08:42.211622   27912 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 20:08:42.211635   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 20:08:42.211671   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 20:08:42.211703   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 20:08:42.211734   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 20:08:42.211789   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 20:08:42.211826   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem -> /usr/share/ca-certificates/17743.pem
	I1204 20:08:42.211847   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> /usr/share/ca-certificates/177432.pem
	I1204 20:08:42.211866   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:08:42.212397   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 20:08:42.248354   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 20:08:42.283210   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 20:08:42.315759   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 20:08:42.337377   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1204 20:08:42.359236   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1204 20:08:42.380567   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 20:08:42.402068   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1204 20:08:42.423840   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 20:08:42.445088   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 20:08:42.466154   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 20:08:42.487261   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
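
At this point the CA, apiserver and proxy-client materials have all been generated and copied into /var/lib/minikube/certs. As an illustration of what the apiserver certificate step produces (a sketch, not minikube's crypto.go), the following issues a self-signed serving cert with the same IP SANs seen above, including the service IP 10.96.0.1 and the HA VIP 192.168.39.254; the output filename apiserver.crt is arbitrary and the private key is kept only in memory here.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // mirrors the profile's CertExpiration:26280h0m0s
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.183"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, key.Public(), key)
	if err != nil {
		log.Fatal(err)
	}
	pemBytes := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	if err := os.WriteFile("apiserver.crt", pemBytes, 0644); err != nil {
		log.Fatal(err)
	}
}
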
	I1204 20:08:42.502237   27912 ssh_runner.go:195] Run: openssl version
	I1204 20:08:42.507399   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 20:08:42.517386   27912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:08:42.521412   27912 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:08:42.521456   27912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:08:42.526682   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 20:08:42.536595   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 20:08:42.546422   27912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 20:08:42.550778   27912 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 20:08:42.550834   27912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 20:08:42.556366   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 20:08:42.567110   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 20:08:42.577648   27912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 20:08:42.581927   27912 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 20:08:42.581970   27912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 20:08:42.587418   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
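
The test -L / ln -fs pairs above create the <subject-hash>.0 links that OpenSSL-based clients use to find a trusted CA under /etc/ssl/certs. A sketch of the same step from Go, shelling out to the openssl CLI for the hash (openssl is assumed to be on PATH, and the link here points straight at the source PEM for simplicity):

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		log.Fatal(err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	if _, err := os.Lstat(link); err == nil {
		log.Printf("%s already exists", link)
		return
	}
	if err := os.Symlink(pemPath, link); err != nil {
		log.Fatal(err)
	}
	log.Printf("linked %s -> %s", link, pemPath)
}
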
	I1204 20:08:42.598017   27912 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 20:08:42.601905   27912 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1204 20:08:42.601960   27912 kubeadm.go:392] StartCluster: {Name:ha-739930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 20:08:42.602029   27912 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 20:08:42.602067   27912 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 20:08:42.638904   27912 cri.go:89] found id: ""
	I1204 20:08:42.638964   27912 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 20:08:42.648459   27912 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 20:08:42.657551   27912 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 20:08:42.666519   27912 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 20:08:42.666536   27912 kubeadm.go:157] found existing configuration files:
	
	I1204 20:08:42.666571   27912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 20:08:42.675036   27912 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 20:08:42.675086   27912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 20:08:42.683928   27912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 20:08:42.692253   27912 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 20:08:42.692304   27912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 20:08:42.701014   27912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 20:08:42.709166   27912 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 20:08:42.709204   27912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 20:08:42.718070   27912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 20:08:42.726526   27912 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 20:08:42.726584   27912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 20:08:42.735312   27912 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 20:08:42.947971   27912 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 20:08:54.006500   27912 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1204 20:08:54.006550   27912 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 20:08:54.006630   27912 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 20:08:54.006748   27912 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 20:08:54.006901   27912 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1204 20:08:54.006999   27912 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 20:08:54.008316   27912 out.go:235]   - Generating certificates and keys ...
	I1204 20:08:54.008397   27912 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 20:08:54.008459   27912 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 20:08:54.008548   27912 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1204 20:08:54.008635   27912 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1204 20:08:54.008695   27912 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1204 20:08:54.008737   27912 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1204 20:08:54.008784   27912 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1204 20:08:54.008879   27912 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-739930 localhost] and IPs [192.168.39.183 127.0.0.1 ::1]
	I1204 20:08:54.008924   27912 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1204 20:08:54.009023   27912 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-739930 localhost] and IPs [192.168.39.183 127.0.0.1 ::1]
	I1204 20:08:54.009133   27912 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1204 20:08:54.009245   27912 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1204 20:08:54.009321   27912 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1204 20:08:54.009403   27912 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 20:08:54.009487   27912 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 20:08:54.009570   27912 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1204 20:08:54.009644   27912 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 20:08:54.009733   27912 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 20:08:54.009810   27912 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 20:08:54.009903   27912 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 20:08:54.009962   27912 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 20:08:54.011358   27912 out.go:235]   - Booting up control plane ...
	I1204 20:08:54.011484   27912 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 20:08:54.011569   27912 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 20:08:54.011635   27912 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 20:08:54.011728   27912 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 20:08:54.011808   27912 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 20:08:54.011842   27912 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 20:08:54.011948   27912 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1204 20:08:54.012038   27912 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1204 20:08:54.012094   27912 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001462808s
	I1204 20:08:54.012172   27912 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1204 20:08:54.012262   27912 kubeadm.go:310] [api-check] The API server is healthy after 6.02019816s
	I1204 20:08:54.012392   27912 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1204 20:08:54.012536   27912 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1204 20:08:54.012619   27912 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1204 20:08:54.012799   27912 kubeadm.go:310] [mark-control-plane] Marking the node ha-739930 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1204 20:08:54.012886   27912 kubeadm.go:310] [bootstrap-token] Using token: borrl1.p9d68mzgpldkynyz
	I1204 20:08:54.013953   27912 out.go:235]   - Configuring RBAC rules ...
	I1204 20:08:54.014046   27912 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1204 20:08:54.014140   27912 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1204 20:08:54.014307   27912 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1204 20:08:54.014473   27912 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1204 20:08:54.014571   27912 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1204 20:08:54.014670   27912 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1204 20:08:54.014826   27912 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1204 20:08:54.014865   27912 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1204 20:08:54.014923   27912 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1204 20:08:54.014933   27912 kubeadm.go:310] 
	I1204 20:08:54.015010   27912 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1204 20:08:54.015019   27912 kubeadm.go:310] 
	I1204 20:08:54.015144   27912 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1204 20:08:54.015156   27912 kubeadm.go:310] 
	I1204 20:08:54.015195   27912 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1204 20:08:54.015270   27912 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1204 20:08:54.015320   27912 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1204 20:08:54.015326   27912 kubeadm.go:310] 
	I1204 20:08:54.015392   27912 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1204 20:08:54.015402   27912 kubeadm.go:310] 
	I1204 20:08:54.015442   27912 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1204 20:08:54.015451   27912 kubeadm.go:310] 
	I1204 20:08:54.015493   27912 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1204 20:08:54.015582   27912 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1204 20:08:54.015675   27912 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1204 20:08:54.015684   27912 kubeadm.go:310] 
	I1204 20:08:54.015786   27912 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1204 20:08:54.015895   27912 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1204 20:08:54.015905   27912 kubeadm.go:310] 
	I1204 20:08:54.016003   27912 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token borrl1.p9d68mzgpldkynyz \
	I1204 20:08:54.016093   27912 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 \
	I1204 20:08:54.016113   27912 kubeadm.go:310] 	--control-plane 
	I1204 20:08:54.016117   27912 kubeadm.go:310] 
	I1204 20:08:54.016205   27912 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1204 20:08:54.016217   27912 kubeadm.go:310] 
	I1204 20:08:54.016293   27912 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token borrl1.p9d68mzgpldkynyz \
	I1204 20:08:54.016397   27912 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 
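
The join commands printed by kubeadm embed a discovery hash that joining nodes use to pin the cluster CA: it is the SHA-256 digest of the CA certificate's Subject Public Key Info, hex-encoded with a "sha256:" prefix. A stdlib sketch that recomputes it from a local copy of the CA cert (the filename ca.crt here is illustrative):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	data, err := os.ReadFile("ca.crt") // e.g. a copy of /var/lib/minikube/certs/ca.crt
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}
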
	I1204 20:08:54.016411   27912 cni.go:84] Creating CNI manager for ""
	I1204 20:08:54.016416   27912 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1204 20:08:54.017939   27912 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1204 20:08:54.019064   27912 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1204 20:08:54.023950   27912 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1204 20:08:54.023967   27912 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1204 20:08:54.041186   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1204 20:08:54.359013   27912 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 20:08:54.359083   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 20:08:54.359121   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-739930 minikube.k8s.io/updated_at=2024_12_04T20_08_54_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59 minikube.k8s.io/name=ha-739930 minikube.k8s.io/primary=true
	I1204 20:08:54.395990   27912 ops.go:34] apiserver oom_adj: -16
	I1204 20:08:54.548524   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 20:08:55.049558   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 20:08:55.548661   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 20:08:56.048619   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 20:08:56.549070   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 20:08:57.048848   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 20:08:57.549554   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 20:08:58.048830   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 20:08:58.161390   27912 kubeadm.go:1113] duration metric: took 3.80235484s to wait for elevateKubeSystemPrivileges
	I1204 20:08:58.161423   27912 kubeadm.go:394] duration metric: took 15.559467425s to StartCluster
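
The repeated `kubectl get sa default` runs above are a roughly 500ms poll that gates the RBAC binding on the default service account existing. A standalone sketch of an equivalent wait loop (it assumes kubectl is on PATH with a working kubeconfig; it is not minikube's elevateKubeSystemPrivileges code):

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
			log.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("timed out waiting for the default service account")
}
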
	I1204 20:08:58.161444   27912 settings.go:142] acquiring lock: {Name:mk51df5708ef0b8fe125ead566b8d3e857234e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:08:58.161514   27912 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 20:08:58.162310   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/kubeconfig: {Name:mk338cb7deb77a607d0c199d94a556bdfd19bef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:08:58.162533   27912 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 20:08:58.162562   27912 start.go:241] waiting for startup goroutines ...
	I1204 20:08:58.162544   27912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1204 20:08:58.162557   27912 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 20:08:58.162652   27912 addons.go:69] Setting storage-provisioner=true in profile "ha-739930"
	I1204 20:08:58.162661   27912 addons.go:69] Setting default-storageclass=true in profile "ha-739930"
	I1204 20:08:58.162674   27912 addons.go:234] Setting addon storage-provisioner=true in "ha-739930"
	I1204 20:08:58.162693   27912 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-739930"
	I1204 20:08:58.162706   27912 host.go:66] Checking if "ha-739930" exists ...
	I1204 20:08:58.162718   27912 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:08:58.163133   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:08:58.163137   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:08:58.163158   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:08:58.163161   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:08:58.177830   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45307
	I1204 20:08:58.177986   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38189
	I1204 20:08:58.178299   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:08:58.178427   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:08:58.178779   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:08:58.178807   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:08:58.178981   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:08:58.179001   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:08:58.179143   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:08:58.179321   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:08:58.179506   27912 main.go:141] libmachine: (ha-739930) Calling .GetState
	I1204 20:08:58.179650   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:08:58.179676   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:08:58.181633   27912 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 20:08:58.181895   27912 kapi.go:59] client config for ha-739930: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.crt", KeyFile:"/home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.key", CAFile:"/home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1204 20:08:58.182308   27912 cert_rotation.go:140] Starting client certificate rotation controller
	I1204 20:08:58.182493   27912 addons.go:234] Setting addon default-storageclass=true in "ha-739930"
	I1204 20:08:58.182532   27912 host.go:66] Checking if "ha-739930" exists ...
	I1204 20:08:58.182790   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:08:58.182824   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:08:58.194517   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40647
	I1204 20:08:58.194972   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:08:58.195484   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:08:58.195512   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:08:58.195872   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:08:58.196070   27912 main.go:141] libmachine: (ha-739930) Calling .GetState
	I1204 20:08:58.197298   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45747
	I1204 20:08:58.197610   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:08:58.197777   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:08:58.198114   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:08:58.198138   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:08:58.198429   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:08:58.198834   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:08:58.198862   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:08:58.199309   27912 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 20:08:58.200430   27912 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 20:08:58.200452   27912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 20:08:58.200469   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:58.203367   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:58.203781   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:58.203808   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:58.203943   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:58.204099   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:58.204233   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:58.204358   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:08:58.213101   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33355
	I1204 20:08:58.213504   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:08:58.214031   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:08:58.214059   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:08:58.214380   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:08:58.214549   27912 main.go:141] libmachine: (ha-739930) Calling .GetState
	I1204 20:08:58.216016   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:08:58.216199   27912 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 20:08:58.216211   27912 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 20:08:58.216223   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:58.218960   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:58.219280   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:58.219317   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:58.219479   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:58.219661   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:58.219835   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:58.219997   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:08:58.277316   27912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1204 20:08:58.357820   27912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 20:08:58.374108   27912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 20:08:58.721001   27912 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
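	For reference, the sed pipeline a few lines above rewrites the CoreDNS ConfigMap in place: it adds a log directive before errors and a hosts stanza for host.minikube.internal ahead of the forward . /etc/resolv.conf line. A quick way to check the injected record (a sketch, assuming kubectl is pointed at this cluster) is:

		kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
		# the output should contain, just before "forward . /etc/resolv.conf":
		#     hosts {
		#        192.168.39.1 host.minikube.internal
		#        fallthrough
		#     }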
	I1204 20:08:59.051895   27912 main.go:141] libmachine: Making call to close driver server
	I1204 20:08:59.051921   27912 main.go:141] libmachine: (ha-739930) Calling .Close
	I1204 20:08:59.051951   27912 main.go:141] libmachine: Making call to close driver server
	I1204 20:08:59.051972   27912 main.go:141] libmachine: (ha-739930) Calling .Close
	I1204 20:08:59.052204   27912 main.go:141] libmachine: Successfully made call to close driver server
	I1204 20:08:59.052222   27912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 20:08:59.052231   27912 main.go:141] libmachine: Making call to close driver server
	I1204 20:08:59.052241   27912 main.go:141] libmachine: (ha-739930) Calling .Close
	I1204 20:08:59.052293   27912 main.go:141] libmachine: Successfully made call to close driver server
	I1204 20:08:59.052317   27912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 20:08:59.052325   27912 main.go:141] libmachine: Making call to close driver server
	I1204 20:08:59.052322   27912 main.go:141] libmachine: (ha-739930) DBG | Closing plugin on server side
	I1204 20:08:59.052332   27912 main.go:141] libmachine: (ha-739930) Calling .Close
	I1204 20:08:59.052462   27912 main.go:141] libmachine: Successfully made call to close driver server
	I1204 20:08:59.052473   27912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 20:08:59.053776   27912 main.go:141] libmachine: (ha-739930) DBG | Closing plugin on server side
	I1204 20:08:59.053794   27912 main.go:141] libmachine: Successfully made call to close driver server
	I1204 20:08:59.053805   27912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 20:08:59.053870   27912 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1204 20:08:59.053894   27912 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1204 20:08:59.053992   27912 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1204 20:08:59.054003   27912 round_trippers.go:469] Request Headers:
	I1204 20:08:59.054010   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:08:59.054014   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:08:59.064602   27912 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1204 20:08:59.065317   27912 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1204 20:08:59.065335   27912 round_trippers.go:469] Request Headers:
	I1204 20:08:59.065347   27912 round_trippers.go:473]     Content-Type: application/json
	I1204 20:08:59.065354   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:08:59.065359   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:08:59.068638   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:08:59.068754   27912 main.go:141] libmachine: Making call to close driver server
	I1204 20:08:59.068772   27912 main.go:141] libmachine: (ha-739930) Calling .Close
	I1204 20:08:59.068971   27912 main.go:141] libmachine: Successfully made call to close driver server
	I1204 20:08:59.068989   27912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 20:08:59.069005   27912 main.go:141] libmachine: (ha-739930) DBG | Closing plugin on server side
	I1204 20:08:59.071139   27912 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1204 20:08:59.072109   27912 addons.go:510] duration metric: took 909.550558ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1204 20:08:59.072142   27912 start.go:246] waiting for cluster config update ...
	I1204 20:08:59.072151   27912 start.go:255] writing updated cluster config ...
	I1204 20:08:59.073463   27912 out.go:201] 
	I1204 20:08:59.074725   27912 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:08:59.074813   27912 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/config.json ...
	I1204 20:08:59.076300   27912 out.go:177] * Starting "ha-739930-m02" control-plane node in "ha-739930" cluster
	I1204 20:08:59.077339   27912 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 20:08:59.077359   27912 cache.go:56] Caching tarball of preloaded images
	I1204 20:08:59.077447   27912 preload.go:172] Found /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 20:08:59.077461   27912 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 20:08:59.077541   27912 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/config.json ...
	I1204 20:08:59.077723   27912 start.go:360] acquireMachinesLock for ha-739930-m02: {Name:mkf124e8b45170ae95981b24944344de6899c5b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 20:08:59.077776   27912 start.go:364] duration metric: took 30.982µs to acquireMachinesLock for "ha-739930-m02"
	I1204 20:08:59.077798   27912 start.go:93] Provisioning new machine with config: &{Name:ha-739930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 20:08:59.077880   27912 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1204 20:08:59.079261   27912 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 20:08:59.079340   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:08:59.079368   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:08:59.093684   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44915
	I1204 20:08:59.094078   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:08:59.094558   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:08:59.094579   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:08:59.094913   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:08:59.095089   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetMachineName
	I1204 20:08:59.095236   27912 main.go:141] libmachine: (ha-739930-m02) Calling .DriverName
	I1204 20:08:59.095406   27912 start.go:159] libmachine.API.Create for "ha-739930" (driver="kvm2")
	I1204 20:08:59.095437   27912 client.go:168] LocalClient.Create starting
	I1204 20:08:59.095465   27912 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem
	I1204 20:08:59.095493   27912 main.go:141] libmachine: Decoding PEM data...
	I1204 20:08:59.095505   27912 main.go:141] libmachine: Parsing certificate...
	I1204 20:08:59.095551   27912 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem
	I1204 20:08:59.095568   27912 main.go:141] libmachine: Decoding PEM data...
	I1204 20:08:59.095579   27912 main.go:141] libmachine: Parsing certificate...
	I1204 20:08:59.095595   27912 main.go:141] libmachine: Running pre-create checks...
	I1204 20:08:59.095602   27912 main.go:141] libmachine: (ha-739930-m02) Calling .PreCreateCheck
	I1204 20:08:59.095756   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetConfigRaw
	I1204 20:08:59.096074   27912 main.go:141] libmachine: Creating machine...
	I1204 20:08:59.096086   27912 main.go:141] libmachine: (ha-739930-m02) Calling .Create
	I1204 20:08:59.096214   27912 main.go:141] libmachine: (ha-739930-m02) Creating KVM machine...
	I1204 20:08:59.097249   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found existing default KVM network
	I1204 20:08:59.097426   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found existing private KVM network mk-ha-739930
	I1204 20:08:59.097515   27912 main.go:141] libmachine: (ha-739930-m02) Setting up store path in /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02 ...
	I1204 20:08:59.097549   27912 main.go:141] libmachine: (ha-739930-m02) Building disk image from file:///home/jenkins/minikube-integration/19985-10581/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1204 20:08:59.097603   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:08:59.097507   28291 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 20:08:59.097713   27912 main.go:141] libmachine: (ha-739930-m02) Downloading /home/jenkins/minikube-integration/19985-10581/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19985-10581/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1204 20:08:59.334730   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:08:59.334621   28291 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02/id_rsa...
	I1204 20:08:59.653553   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:08:59.653411   28291 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02/ha-739930-m02.rawdisk...
	I1204 20:08:59.653587   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Writing magic tar header
	I1204 20:08:59.653647   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Writing SSH key tar header
	I1204 20:08:59.653678   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:08:59.653561   28291 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02 ...
	I1204 20:08:59.653704   27912 main.go:141] libmachine: (ha-739930-m02) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02 (perms=drwx------)
	I1204 20:08:59.653726   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02
	I1204 20:08:59.653737   27912 main.go:141] libmachine: (ha-739930-m02) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube/machines (perms=drwxr-xr-x)
	I1204 20:08:59.653758   27912 main.go:141] libmachine: (ha-739930-m02) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube (perms=drwxr-xr-x)
	I1204 20:08:59.653773   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube/machines
	I1204 20:08:59.653785   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 20:08:59.653796   27912 main.go:141] libmachine: (ha-739930-m02) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581 (perms=drwxrwxr-x)
	I1204 20:08:59.653813   27912 main.go:141] libmachine: (ha-739930-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1204 20:08:59.653825   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581
	I1204 20:08:59.653838   27912 main.go:141] libmachine: (ha-739930-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1204 20:08:59.653850   27912 main.go:141] libmachine: (ha-739930-m02) Creating domain...
	I1204 20:08:59.653865   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1204 20:08:59.653875   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Checking permissions on dir: /home/jenkins
	I1204 20:08:59.653889   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Checking permissions on dir: /home
	I1204 20:08:59.653903   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Skipping /home - not owner
	I1204 20:08:59.654725   27912 main.go:141] libmachine: (ha-739930-m02) define libvirt domain using xml: 
	I1204 20:08:59.654740   27912 main.go:141] libmachine: (ha-739930-m02) <domain type='kvm'>
	I1204 20:08:59.654751   27912 main.go:141] libmachine: (ha-739930-m02)   <name>ha-739930-m02</name>
	I1204 20:08:59.654763   27912 main.go:141] libmachine: (ha-739930-m02)   <memory unit='MiB'>2200</memory>
	I1204 20:08:59.654775   27912 main.go:141] libmachine: (ha-739930-m02)   <vcpu>2</vcpu>
	I1204 20:08:59.654788   27912 main.go:141] libmachine: (ha-739930-m02)   <features>
	I1204 20:08:59.654796   27912 main.go:141] libmachine: (ha-739930-m02)     <acpi/>
	I1204 20:08:59.654806   27912 main.go:141] libmachine: (ha-739930-m02)     <apic/>
	I1204 20:08:59.654818   27912 main.go:141] libmachine: (ha-739930-m02)     <pae/>
	I1204 20:08:59.654837   27912 main.go:141] libmachine: (ha-739930-m02)     
	I1204 20:08:59.654847   27912 main.go:141] libmachine: (ha-739930-m02)   </features>
	I1204 20:08:59.654851   27912 main.go:141] libmachine: (ha-739930-m02)   <cpu mode='host-passthrough'>
	I1204 20:08:59.654858   27912 main.go:141] libmachine: (ha-739930-m02)   
	I1204 20:08:59.654862   27912 main.go:141] libmachine: (ha-739930-m02)   </cpu>
	I1204 20:08:59.654870   27912 main.go:141] libmachine: (ha-739930-m02)   <os>
	I1204 20:08:59.654874   27912 main.go:141] libmachine: (ha-739930-m02)     <type>hvm</type>
	I1204 20:08:59.654882   27912 main.go:141] libmachine: (ha-739930-m02)     <boot dev='cdrom'/>
	I1204 20:08:59.654892   27912 main.go:141] libmachine: (ha-739930-m02)     <boot dev='hd'/>
	I1204 20:08:59.654905   27912 main.go:141] libmachine: (ha-739930-m02)     <bootmenu enable='no'/>
	I1204 20:08:59.654916   27912 main.go:141] libmachine: (ha-739930-m02)   </os>
	I1204 20:08:59.654941   27912 main.go:141] libmachine: (ha-739930-m02)   <devices>
	I1204 20:08:59.654966   27912 main.go:141] libmachine: (ha-739930-m02)     <disk type='file' device='cdrom'>
	I1204 20:08:59.654982   27912 main.go:141] libmachine: (ha-739930-m02)       <source file='/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02/boot2docker.iso'/>
	I1204 20:08:59.654997   27912 main.go:141] libmachine: (ha-739930-m02)       <target dev='hdc' bus='scsi'/>
	I1204 20:08:59.655013   27912 main.go:141] libmachine: (ha-739930-m02)       <readonly/>
	I1204 20:08:59.655023   27912 main.go:141] libmachine: (ha-739930-m02)     </disk>
	I1204 20:08:59.655035   27912 main.go:141] libmachine: (ha-739930-m02)     <disk type='file' device='disk'>
	I1204 20:08:59.655049   27912 main.go:141] libmachine: (ha-739930-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1204 20:08:59.655067   27912 main.go:141] libmachine: (ha-739930-m02)       <source file='/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02/ha-739930-m02.rawdisk'/>
	I1204 20:08:59.655083   27912 main.go:141] libmachine: (ha-739930-m02)       <target dev='hda' bus='virtio'/>
	I1204 20:08:59.655095   27912 main.go:141] libmachine: (ha-739930-m02)     </disk>
	I1204 20:08:59.655104   27912 main.go:141] libmachine: (ha-739930-m02)     <interface type='network'>
	I1204 20:08:59.655117   27912 main.go:141] libmachine: (ha-739930-m02)       <source network='mk-ha-739930'/>
	I1204 20:08:59.655129   27912 main.go:141] libmachine: (ha-739930-m02)       <model type='virtio'/>
	I1204 20:08:59.655141   27912 main.go:141] libmachine: (ha-739930-m02)     </interface>
	I1204 20:08:59.655157   27912 main.go:141] libmachine: (ha-739930-m02)     <interface type='network'>
	I1204 20:08:59.655176   27912 main.go:141] libmachine: (ha-739930-m02)       <source network='default'/>
	I1204 20:08:59.655187   27912 main.go:141] libmachine: (ha-739930-m02)       <model type='virtio'/>
	I1204 20:08:59.655199   27912 main.go:141] libmachine: (ha-739930-m02)     </interface>
	I1204 20:08:59.655208   27912 main.go:141] libmachine: (ha-739930-m02)     <serial type='pty'>
	I1204 20:08:59.655231   27912 main.go:141] libmachine: (ha-739930-m02)       <target port='0'/>
	I1204 20:08:59.655250   27912 main.go:141] libmachine: (ha-739930-m02)     </serial>
	I1204 20:08:59.655268   27912 main.go:141] libmachine: (ha-739930-m02)     <console type='pty'>
	I1204 20:08:59.655284   27912 main.go:141] libmachine: (ha-739930-m02)       <target type='serial' port='0'/>
	I1204 20:08:59.655295   27912 main.go:141] libmachine: (ha-739930-m02)     </console>
	I1204 20:08:59.655302   27912 main.go:141] libmachine: (ha-739930-m02)     <rng model='virtio'>
	I1204 20:08:59.655315   27912 main.go:141] libmachine: (ha-739930-m02)       <backend model='random'>/dev/random</backend>
	I1204 20:08:59.655321   27912 main.go:141] libmachine: (ha-739930-m02)     </rng>
	I1204 20:08:59.655329   27912 main.go:141] libmachine: (ha-739930-m02)     
	I1204 20:08:59.655333   27912 main.go:141] libmachine: (ha-739930-m02)     
	I1204 20:08:59.655340   27912 main.go:141] libmachine: (ha-739930-m02)   </devices>
	I1204 20:08:59.655345   27912 main.go:141] libmachine: (ha-739930-m02) </domain>
	I1204 20:08:59.655362   27912 main.go:141] libmachine: (ha-739930-m02) 
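	The block above is the libvirt domain XML the kvm2 driver defines for the m02 machine: a 2-vCPU / 2200 MiB guest booting the boot2docker ISO, with a raw disk and two virtio NICs (one on the private mk-ha-739930 network, one on default). Assuming access to the same qemu:///system URI shown in the cluster config, the defined domain and its DHCP leases could be inspected with standard virsh commands, e.g.:

		virsh --connect qemu:///system dumpxml ha-739930-m02
		virsh --connect qemu:///system net-dhcp-leases mk-ha-739930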
	I1204 20:08:59.661230   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:69:55:bb in network default
	I1204 20:08:59.661784   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:08:59.661806   27912 main.go:141] libmachine: (ha-739930-m02) Ensuring networks are active...
	I1204 20:08:59.662333   27912 main.go:141] libmachine: (ha-739930-m02) Ensuring network default is active
	I1204 20:08:59.662568   27912 main.go:141] libmachine: (ha-739930-m02) Ensuring network mk-ha-739930 is active
	I1204 20:08:59.662825   27912 main.go:141] libmachine: (ha-739930-m02) Getting domain xml...
	I1204 20:08:59.663438   27912 main.go:141] libmachine: (ha-739930-m02) Creating domain...
	I1204 20:09:00.864454   27912 main.go:141] libmachine: (ha-739930-m02) Waiting to get IP...
	I1204 20:09:00.865262   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:00.865678   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:00.865706   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:00.865644   28291 retry.go:31] will retry after 202.440812ms: waiting for machine to come up
	I1204 20:09:01.070038   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:01.070521   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:01.070539   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:01.070483   28291 retry.go:31] will retry after 379.96661ms: waiting for machine to come up
	I1204 20:09:01.452279   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:01.452670   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:01.452703   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:01.452620   28291 retry.go:31] will retry after 448.23669ms: waiting for machine to come up
	I1204 20:09:01.902848   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:01.903274   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:01.903301   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:01.903230   28291 retry.go:31] will retry after 590.399252ms: waiting for machine to come up
	I1204 20:09:02.495129   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:02.495572   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:02.495602   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:02.495522   28291 retry.go:31] will retry after 535.882434ms: waiting for machine to come up
	I1204 20:09:03.033125   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:03.033552   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:03.033572   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:03.033531   28291 retry.go:31] will retry after 698.598885ms: waiting for machine to come up
	I1204 20:09:03.733894   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:03.734321   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:03.734351   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:03.734276   28291 retry.go:31] will retry after 1.177854854s: waiting for machine to come up
	I1204 20:09:04.914541   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:04.914975   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:04.915005   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:04.914934   28291 retry.go:31] will retry after 1.093246259s: waiting for machine to come up
	I1204 20:09:06.010091   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:06.010517   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:06.010543   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:06.010478   28291 retry.go:31] will retry after 1.613080477s: waiting for machine to come up
	I1204 20:09:07.624874   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:07.625335   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:07.625364   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:07.625313   28291 retry.go:31] will retry after 2.249296346s: waiting for machine to come up
	I1204 20:09:09.875662   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:09.876187   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:09.876218   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:09.876124   28291 retry.go:31] will retry after 2.42642151s: waiting for machine to come up
	I1204 20:09:12.305633   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:12.306060   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:12.306085   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:12.306030   28291 retry.go:31] will retry after 2.221078432s: waiting for machine to come up
	I1204 20:09:14.529048   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:14.529558   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:14.529585   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:14.529522   28291 retry.go:31] will retry after 2.966790247s: waiting for machine to come up
	I1204 20:09:17.499601   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:17.500108   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:17.500137   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:17.500054   28291 retry.go:31] will retry after 4.394406199s: waiting for machine to come up
	I1204 20:09:21.898072   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:21.898515   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has current primary IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:21.898531   27912 main.go:141] libmachine: (ha-739930-m02) Found IP for machine: 192.168.39.216
	I1204 20:09:21.898543   27912 main.go:141] libmachine: (ha-739930-m02) Reserving static IP address...
	I1204 20:09:21.899016   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find host DHCP lease matching {name: "ha-739930-m02", mac: "52:54:00:91:b2:c1", ip: "192.168.39.216"} in network mk-ha-739930
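	The repeated "will retry after …" messages above are the driver polling libvirt for a DHCP lease on the new MAC, backing off between attempts until the guest reports 192.168.39.216. A rough shell equivalent of that wait loop (illustrative only, not the driver's actual code) would be:

		# poll the private network until the guest's MAC shows up in the lease table
		mac="52:54:00:91:b2:c1"; delay=1
		until virsh --connect qemu:///system net-dhcp-leases mk-ha-739930 | grep -q "$mac"; do
		    sleep "$delay"; delay=$((delay * 2))   # crude exponential back-off
		done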
	I1204 20:09:21.970499   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Getting to WaitForSSH function...
	I1204 20:09:21.970531   27912 main.go:141] libmachine: (ha-739930-m02) Reserved static IP address: 192.168.39.216
	I1204 20:09:21.970544   27912 main.go:141] libmachine: (ha-739930-m02) Waiting for SSH to be available...
	I1204 20:09:21.972885   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:21.973270   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:minikube Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:21.973299   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:21.973444   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Using SSH client type: external
	I1204 20:09:21.973472   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02/id_rsa (-rw-------)
	I1204 20:09:21.973507   27912 main.go:141] libmachine: (ha-739930-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.216 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 20:09:21.973526   27912 main.go:141] libmachine: (ha-739930-m02) DBG | About to run SSH command:
	I1204 20:09:21.973534   27912 main.go:141] libmachine: (ha-739930-m02) DBG | exit 0
	I1204 20:09:22.099805   27912 main.go:141] libmachine: (ha-739930-m02) DBG | SSH cmd err, output: <nil>: 
	I1204 20:09:22.100058   27912 main.go:141] libmachine: (ha-739930-m02) KVM machine creation complete!
	I1204 20:09:22.100415   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetConfigRaw
	I1204 20:09:22.101293   27912 main.go:141] libmachine: (ha-739930-m02) Calling .DriverName
	I1204 20:09:22.101487   27912 main.go:141] libmachine: (ha-739930-m02) Calling .DriverName
	I1204 20:09:22.101644   27912 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1204 20:09:22.101669   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetState
	I1204 20:09:22.102974   27912 main.go:141] libmachine: Detecting operating system of created instance...
	I1204 20:09:22.102992   27912 main.go:141] libmachine: Waiting for SSH to be available...
	I1204 20:09:22.103000   27912 main.go:141] libmachine: Getting to WaitForSSH function...
	I1204 20:09:22.103008   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHHostname
	I1204 20:09:22.105264   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.105562   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:22.105595   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.105759   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHPort
	I1204 20:09:22.105924   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:22.106031   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:22.106146   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHUsername
	I1204 20:09:22.106307   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:09:22.106556   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1204 20:09:22.106582   27912 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1204 20:09:22.210652   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 20:09:22.210674   27912 main.go:141] libmachine: Detecting the provisioner...
	I1204 20:09:22.210689   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHHostname
	I1204 20:09:22.213316   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.213633   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:22.213662   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.213775   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHPort
	I1204 20:09:22.213923   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:22.214102   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:22.214252   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHUsername
	I1204 20:09:22.214405   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:09:22.214561   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1204 20:09:22.214571   27912 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1204 20:09:22.320078   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1204 20:09:22.320145   27912 main.go:141] libmachine: found compatible host: buildroot
	I1204 20:09:22.320155   27912 main.go:141] libmachine: Provisioning with buildroot...
	I1204 20:09:22.320176   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetMachineName
	I1204 20:09:22.320420   27912 buildroot.go:166] provisioning hostname "ha-739930-m02"
	I1204 20:09:22.320451   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetMachineName
	I1204 20:09:22.320599   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHHostname
	I1204 20:09:22.322962   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.323306   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:22.323331   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.323525   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHPort
	I1204 20:09:22.323704   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:22.323837   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:22.323937   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHUsername
	I1204 20:09:22.324095   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:09:22.324248   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1204 20:09:22.324260   27912 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-739930-m02 && echo "ha-739930-m02" | sudo tee /etc/hostname
	I1204 20:09:22.442684   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-739930-m02
	
	I1204 20:09:22.442712   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHHostname
	I1204 20:09:22.445503   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.445841   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:22.445866   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.446028   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHPort
	I1204 20:09:22.446227   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:22.446390   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:22.446547   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHUsername
	I1204 20:09:22.446707   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:09:22.446886   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1204 20:09:22.446908   27912 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-739930-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-739930-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-739930-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
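	A minimal check that the provisioning script above took effect on the guest (sketched here, run over the same SSH session the provisioner uses) would be:

		hostname                        # expected: ha-739930-m02
		grep ha-739930-m02 /etc/hosts   # expected: 127.0.1.1 ha-739930-m02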
	I1204 20:09:22.560132   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 20:09:22.560177   27912 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 20:09:22.560210   27912 buildroot.go:174] setting up certificates
	I1204 20:09:22.560227   27912 provision.go:84] configureAuth start
	I1204 20:09:22.560246   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetMachineName
	I1204 20:09:22.560519   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetIP
	I1204 20:09:22.563054   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.563443   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:22.563470   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.563600   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHHostname
	I1204 20:09:22.565613   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.565936   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:22.565961   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.566074   27912 provision.go:143] copyHostCerts
	I1204 20:09:22.566103   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 20:09:22.566138   27912 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 20:09:22.566151   27912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 20:09:22.566226   27912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 20:09:22.566301   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 20:09:22.566318   27912 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 20:09:22.566325   27912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 20:09:22.566349   27912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 20:09:22.566391   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 20:09:22.566409   27912 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 20:09:22.566415   27912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 20:09:22.566442   27912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 20:09:22.566488   27912 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.ha-739930-m02 san=[127.0.0.1 192.168.39.216 ha-739930-m02 localhost minikube]
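	The server certificate generated here should carry the SANs listed in the log line above (127.0.0.1, 192.168.39.216, ha-739930-m02, localhost, minikube). Once it is copied to /etc/docker/server.pem on the guest (see the scp further down), the SANs can be confirmed with a standard openssl call, for example:

		openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'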
	I1204 20:09:22.637792   27912 provision.go:177] copyRemoteCerts
	I1204 20:09:22.637844   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 20:09:22.637865   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHHostname
	I1204 20:09:22.640451   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.640844   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:22.640870   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.641017   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHPort
	I1204 20:09:22.641198   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:22.641358   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHUsername
	I1204 20:09:22.641490   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02/id_rsa Username:docker}
	I1204 20:09:22.721358   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1204 20:09:22.721454   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 20:09:22.745038   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1204 20:09:22.745117   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1204 20:09:22.767198   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1204 20:09:22.767272   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 20:09:22.788710   27912 provision.go:87] duration metric: took 228.465669ms to configureAuth
	I1204 20:09:22.788740   27912 buildroot.go:189] setting minikube options for container-runtime
	I1204 20:09:22.788919   27912 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:09:22.788987   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHHostname
	I1204 20:09:22.791733   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.792076   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:22.792099   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.792317   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHPort
	I1204 20:09:22.792506   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:22.792661   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:22.792775   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHUsername
	I1204 20:09:22.792909   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:09:22.793086   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1204 20:09:22.793106   27912 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 20:09:23.010014   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
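	The command above drops CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts CRI-O; the guest's crio.service presumably reads that file via an EnvironmentFile= directive so the --insecure-registry flag takes effect. To confirm on the guest (a sketch):

		cat /etc/sysconfig/crio.minikube
		systemctl cat crio   # look for an EnvironmentFile= line referencing crio.minikube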
	
	I1204 20:09:23.010040   27912 main.go:141] libmachine: Checking connection to Docker...
	I1204 20:09:23.010051   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetURL
	I1204 20:09:23.011214   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Using libvirt version 6000000
	I1204 20:09:23.013200   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.013524   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:23.013554   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.013737   27912 main.go:141] libmachine: Docker is up and running!
	I1204 20:09:23.013756   27912 main.go:141] libmachine: Reticulating splines...
	I1204 20:09:23.013764   27912 client.go:171] duration metric: took 23.918317311s to LocalClient.Create
	I1204 20:09:23.013791   27912 start.go:167] duration metric: took 23.918385611s to libmachine.API.Create "ha-739930"
	I1204 20:09:23.013802   27912 start.go:293] postStartSetup for "ha-739930-m02" (driver="kvm2")
	I1204 20:09:23.013810   27912 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 20:09:23.013826   27912 main.go:141] libmachine: (ha-739930-m02) Calling .DriverName
	I1204 20:09:23.014037   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 20:09:23.014061   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHHostname
	I1204 20:09:23.016336   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.016674   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:23.016696   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.016826   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHPort
	I1204 20:09:23.017001   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:23.017147   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHUsername
	I1204 20:09:23.017302   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02/id_rsa Username:docker}
	I1204 20:09:23.098690   27912 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 20:09:23.102672   27912 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 20:09:23.102692   27912 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 20:09:23.102751   27912 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 20:09:23.102837   27912 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 20:09:23.102850   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> /etc/ssl/certs/177432.pem
	I1204 20:09:23.102957   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 20:09:23.113316   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 20:09:23.137226   27912 start.go:296] duration metric: took 123.412538ms for postStartSetup
	I1204 20:09:23.137272   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetConfigRaw
	I1204 20:09:23.137827   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetIP
	I1204 20:09:23.140225   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.140510   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:23.140539   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.140708   27912 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/config.json ...
	I1204 20:09:23.140912   27912 start.go:128] duration metric: took 24.063021139s to createHost
	I1204 20:09:23.140935   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHHostname
	I1204 20:09:23.143463   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.143769   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:23.143788   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.143935   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHPort
	I1204 20:09:23.144107   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:23.144264   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:23.144405   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHUsername
	I1204 20:09:23.144585   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:09:23.144731   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1204 20:09:23.144740   27912 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 20:09:23.251984   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733342963.229753214
	
	I1204 20:09:23.252009   27912 fix.go:216] guest clock: 1733342963.229753214
	I1204 20:09:23.252019   27912 fix.go:229] Guest: 2024-12-04 20:09:23.229753214 +0000 UTC Remote: 2024-12-04 20:09:23.140925676 +0000 UTC m=+71.238297049 (delta=88.827538ms)
	I1204 20:09:23.252039   27912 fix.go:200] guest clock delta is within tolerance: 88.827538ms
	I1204 20:09:23.252046   27912 start.go:83] releasing machines lock for "ha-739930-m02", held for 24.174259167s
	I1204 20:09:23.252070   27912 main.go:141] libmachine: (ha-739930-m02) Calling .DriverName
	I1204 20:09:23.252303   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetIP
	I1204 20:09:23.254849   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.255234   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:23.255263   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.257539   27912 out.go:177] * Found network options:
	I1204 20:09:23.258745   27912 out.go:177]   - NO_PROXY=192.168.39.183
	W1204 20:09:23.259924   27912 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 20:09:23.259962   27912 main.go:141] libmachine: (ha-739930-m02) Calling .DriverName
	I1204 20:09:23.260454   27912 main.go:141] libmachine: (ha-739930-m02) Calling .DriverName
	I1204 20:09:23.260610   27912 main.go:141] libmachine: (ha-739930-m02) Calling .DriverName
	I1204 20:09:23.260694   27912 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 20:09:23.260738   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHHostname
	W1204 20:09:23.260771   27912 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 20:09:23.260841   27912 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 20:09:23.260863   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHHostname
	I1204 20:09:23.263151   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.263477   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:23.263505   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.263524   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.263671   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHPort
	I1204 20:09:23.263841   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:23.263988   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHUsername
	I1204 20:09:23.263998   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:23.264025   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.264114   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02/id_rsa Username:docker}
	I1204 20:09:23.264181   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHPort
	I1204 20:09:23.264329   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:23.264459   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHUsername
	I1204 20:09:23.264614   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02/id_rsa Username:docker}
	I1204 20:09:23.488607   27912 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 20:09:23.493980   27912 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 20:09:23.494034   27912 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 20:09:23.509548   27912 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 20:09:23.509575   27912 start.go:495] detecting cgroup driver to use...
	I1204 20:09:23.509645   27912 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 20:09:23.525800   27912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 20:09:23.539440   27912 docker.go:217] disabling cri-docker service (if available) ...
	I1204 20:09:23.539502   27912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 20:09:23.552521   27912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 20:09:23.565606   27912 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 20:09:23.684851   27912 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 20:09:23.845149   27912 docker.go:233] disabling docker service ...
	I1204 20:09:23.845231   27912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 20:09:23.859120   27912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 20:09:23.871561   27912 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 20:09:23.987397   27912 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 20:09:24.126711   27912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 20:09:24.141506   27912 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 20:09:24.159151   27912 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 20:09:24.159228   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:09:24.170226   27912 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 20:09:24.170291   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:09:24.182530   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:09:24.192731   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:09:24.202617   27912 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 20:09:24.213736   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:09:24.224231   27912 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:09:24.240767   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:09:24.251003   27912 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 20:09:24.260142   27912 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 20:09:24.260204   27912 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 20:09:24.272434   27912 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 20:09:24.282354   27912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 20:09:24.398398   27912 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 20:09:24.487789   27912 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 20:09:24.487861   27912 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 20:09:24.492488   27912 start.go:563] Will wait 60s for crictl version
	I1204 20:09:24.492560   27912 ssh_runner.go:195] Run: which crictl
	I1204 20:09:24.496257   27912 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 20:09:24.535274   27912 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 20:09:24.535361   27912 ssh_runner.go:195] Run: crio --version
	I1204 20:09:24.562604   27912 ssh_runner.go:195] Run: crio --version
	I1204 20:09:24.590689   27912 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 20:09:24.591986   27912 out.go:177]   - env NO_PROXY=192.168.39.183
	I1204 20:09:24.593151   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetIP
	I1204 20:09:24.595599   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:24.595887   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:24.595916   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:24.596077   27912 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1204 20:09:24.600001   27912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 20:09:24.611463   27912 mustload.go:65] Loading cluster: ha-739930
	I1204 20:09:24.611643   27912 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:09:24.611877   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:09:24.611903   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:09:24.627049   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34019
	I1204 20:09:24.627459   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:09:24.627903   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:09:24.627928   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:09:24.628257   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:09:24.628473   27912 main.go:141] libmachine: (ha-739930) Calling .GetState
	I1204 20:09:24.629895   27912 host.go:66] Checking if "ha-739930" exists ...
	I1204 20:09:24.630233   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:09:24.630265   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:09:24.644758   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46383
	I1204 20:09:24.645209   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:09:24.645667   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:09:24.645685   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:09:24.645969   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:09:24.646125   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:09:24.646291   27912 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930 for IP: 192.168.39.216
	I1204 20:09:24.646303   27912 certs.go:194] generating shared ca certs ...
	I1204 20:09:24.646316   27912 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:09:24.646428   27912 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 20:09:24.646465   27912 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 20:09:24.646474   27912 certs.go:256] generating profile certs ...
	I1204 20:09:24.646544   27912 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.key
	I1204 20:09:24.646568   27912 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.5b3a3f8e
	I1204 20:09:24.646583   27912 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.5b3a3f8e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.183 192.168.39.216 192.168.39.254]
	I1204 20:09:24.766401   27912 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.5b3a3f8e ...
	I1204 20:09:24.766431   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.5b3a3f8e: {Name:mkc714ddc3cd4c136e7a763dd7561d567af3f099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:09:24.766597   27912 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.5b3a3f8e ...
	I1204 20:09:24.766610   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.5b3a3f8e: {Name:mk0a2c7e9c0190313579e96374b5ec6b927ba043 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:09:24.766678   27912 certs.go:381] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.5b3a3f8e -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt
	I1204 20:09:24.766802   27912 certs.go:385] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.5b3a3f8e -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key
	I1204 20:09:24.766921   27912 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key
	I1204 20:09:24.766936   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1204 20:09:24.766949   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1204 20:09:24.766968   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1204 20:09:24.766979   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1204 20:09:24.766989   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1204 20:09:24.767002   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1204 20:09:24.767010   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1204 20:09:24.767022   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1204 20:09:24.767067   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 20:09:24.767093   27912 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 20:09:24.767102   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 20:09:24.767122   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 20:09:24.767144   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 20:09:24.767164   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 20:09:24.767200   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 20:09:24.767225   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:09:24.767238   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem -> /usr/share/ca-certificates/17743.pem
	I1204 20:09:24.767250   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> /usr/share/ca-certificates/177432.pem
	I1204 20:09:24.767278   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:09:24.770180   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:09:24.770542   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:09:24.770570   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:09:24.770712   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:09:24.770891   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:09:24.771044   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:09:24.771172   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:09:24.847687   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1204 20:09:24.853685   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1204 20:09:24.865057   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1204 20:09:24.869198   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1204 20:09:24.885878   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1204 20:09:24.889805   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1204 20:09:24.902654   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1204 20:09:24.906786   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1204 20:09:24.918187   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1204 20:09:24.922192   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1204 20:09:24.934730   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1204 20:09:24.938712   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1204 20:09:24.950279   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 20:09:24.974079   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 20:09:24.996598   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 20:09:25.018605   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 20:09:25.040436   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1204 20:09:25.062496   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1204 20:09:25.083915   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 20:09:25.105243   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1204 20:09:25.126515   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 20:09:25.148104   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 20:09:25.169580   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 20:09:25.190929   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1204 20:09:25.206338   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1204 20:09:25.221317   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1204 20:09:25.236210   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1204 20:09:25.251125   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1204 20:09:25.266383   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1204 20:09:25.281338   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1204 20:09:25.296542   27912 ssh_runner.go:195] Run: openssl version
	I1204 20:09:25.302513   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 20:09:25.313596   27912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:09:25.317903   27912 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:09:25.317952   27912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:09:25.323324   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 20:09:25.334576   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 20:09:25.344350   27912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 20:09:25.348476   27912 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 20:09:25.348531   27912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 20:09:25.353851   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 20:09:25.364310   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 20:09:25.375701   27912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 20:09:25.379775   27912 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 20:09:25.379825   27912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 20:09:25.385241   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 20:09:25.395365   27912 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 20:09:25.399560   27912 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1204 20:09:25.399615   27912 kubeadm.go:934] updating node {m02 192.168.39.216 8443 v1.31.2 crio true true} ...
	I1204 20:09:25.399711   27912 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-739930-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.216
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 20:09:25.399742   27912 kube-vip.go:115] generating kube-vip config ...
	I1204 20:09:25.399777   27912 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1204 20:09:25.415868   27912 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1204 20:09:25.415924   27912 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1204 20:09:25.415967   27912 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 20:09:25.424465   27912 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1204 20:09:25.424517   27912 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1204 20:09:25.433122   27912 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1204 20:09:25.433145   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1204 20:09:25.433195   27912 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1204 20:09:25.433218   27912 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1204 20:09:25.433242   27912 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1204 20:09:25.437081   27912 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1204 20:09:25.437107   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1204 20:09:26.186226   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1204 20:09:26.186313   27912 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1204 20:09:26.190746   27912 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1204 20:09:26.190822   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1204 20:09:26.419618   27912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 20:09:26.443488   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1204 20:09:26.443611   27912 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1204 20:09:26.450947   27912 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1204 20:09:26.450982   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1204 20:09:26.739349   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1204 20:09:26.748265   27912 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1204 20:09:26.764007   27912 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 20:09:26.780904   27912 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1204 20:09:26.797527   27912 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1204 20:09:26.801091   27912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 20:09:26.811509   27912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 20:09:26.923723   27912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 20:09:26.939490   27912 host.go:66] Checking if "ha-739930" exists ...
	I1204 20:09:26.939813   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:09:26.939861   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:09:26.954842   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37991
	I1204 20:09:26.955355   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:09:26.955871   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:09:26.955897   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:09:26.956236   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:09:26.956453   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:09:26.956610   27912 start.go:317] joinCluster: &{Name:ha-739930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 20:09:26.956705   27912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1204 20:09:26.956726   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:09:26.959547   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:09:26.959914   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:09:26.959939   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:09:26.960071   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:09:26.960221   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:09:26.960358   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:09:26.960492   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:09:27.110244   27912 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 20:09:27.110295   27912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pq1xgw.4e78amhhenl1jnyw --discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-739930-m02 --control-plane --apiserver-advertise-address=192.168.39.216 --apiserver-bind-port=8443"
	I1204 20:09:48.018604   27912 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pq1xgw.4e78amhhenl1jnyw --discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-739930-m02 --control-plane --apiserver-advertise-address=192.168.39.216 --apiserver-bind-port=8443": (20.908287309s)
	I1204 20:09:48.018634   27912 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1204 20:09:48.626365   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-739930-m02 minikube.k8s.io/updated_at=2024_12_04T20_09_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59 minikube.k8s.io/name=ha-739930 minikube.k8s.io/primary=false
	I1204 20:09:48.747614   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-739930-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1204 20:09:48.847766   27912 start.go:319] duration metric: took 21.891152638s to joinCluster
	I1204 20:09:48.847828   27912 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 20:09:48.848176   27912 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:09:48.849095   27912 out.go:177] * Verifying Kubernetes components...
	I1204 20:09:48.850328   27912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 20:09:49.112006   27912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 20:09:49.157177   27912 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 20:09:49.157538   27912 kapi.go:59] client config for ha-739930: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.crt", KeyFile:"/home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.key", CAFile:"/home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1204 20:09:49.157630   27912 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.183:8443
	I1204 20:09:49.157883   27912 node_ready.go:35] waiting up to 6m0s for node "ha-739930-m02" to be "Ready" ...
	I1204 20:09:49.158009   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:49.158021   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:49.158035   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:49.158045   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:49.168058   27912 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1204 20:09:49.658898   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:49.658922   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:49.658932   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:49.658943   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:49.667464   27912 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1204 20:09:50.158380   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:50.158399   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:50.158413   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:50.158419   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:50.171364   27912 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1204 20:09:50.658199   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:50.658226   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:50.658233   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:50.658237   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:50.663401   27912 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 20:09:51.159112   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:51.159137   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:51.159148   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:51.159156   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:51.162480   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:51.163075   27912 node_ready.go:53] node "ha-739930-m02" has status "Ready":"False"
	I1204 20:09:51.658265   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:51.658294   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:51.658304   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:51.658310   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:51.661298   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:09:52.158591   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:52.158614   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:52.158623   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:52.158627   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:52.161933   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:52.658479   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:52.658500   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:52.658508   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:52.658513   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:52.661537   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:53.158361   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:53.158384   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:53.158394   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:53.158402   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:53.161578   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:53.658404   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:53.658425   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:53.658433   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:53.658437   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:53.661364   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:09:53.662003   27912 node_ready.go:53] node "ha-739930-m02" has status "Ready":"False"
	I1204 20:09:54.158610   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:54.158635   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:54.158645   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:54.158651   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:54.162217   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:54.658074   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:54.658094   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:54.658102   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:54.658106   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:54.661918   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:55.158589   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:55.158611   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:55.158619   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:55.158624   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:55.161786   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:55.658906   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:55.658929   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:55.658937   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:55.658941   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:55.662357   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:55.663184   27912 node_ready.go:53] node "ha-739930-m02" has status "Ready":"False"
	I1204 20:09:56.158490   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:56.158517   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:56.158528   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:56.158533   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:56.258326   27912 round_trippers.go:574] Response Status: 200 OK in 99 milliseconds
	I1204 20:09:56.658232   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:56.658254   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:56.658264   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:56.658270   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:56.661245   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:09:57.158358   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:57.158380   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:57.158388   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:57.158392   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:57.162043   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:57.658188   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:57.658212   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:57.658223   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:57.658232   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:57.661717   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:58.158679   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:58.158701   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:58.158708   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:58.158713   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:58.162634   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:58.163161   27912 node_ready.go:53] node "ha-739930-m02" has status "Ready":"False"
	I1204 20:09:58.658856   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:58.658882   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:58.658900   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:58.658907   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:58.662596   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:59.158835   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:59.158862   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:59.158873   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:59.158880   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:59.162669   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:59.658183   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:59.658215   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:59.658226   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:59.658231   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:59.661879   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:00.158851   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:00.158875   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:00.158883   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:00.158888   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:00.162790   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:00.163321   27912 node_ready.go:53] node "ha-739930-m02" has status "Ready":"False"
	I1204 20:10:00.658562   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:00.658590   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:00.658601   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:00.658607   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:00.676721   27912 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I1204 20:10:01.159007   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:01.159027   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:01.159035   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:01.159038   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:01.162909   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:01.658124   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:01.658161   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:01.658184   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:01.658188   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:01.662301   27912 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 20:10:02.158692   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:02.158716   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:02.158727   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:02.158732   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:02.162067   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:02.659042   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:02.659064   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:02.659071   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:02.659075   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:02.661911   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:10:02.662581   27912 node_ready.go:53] node "ha-739930-m02" has status "Ready":"False"
	I1204 20:10:03.159115   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:03.159145   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:03.159158   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:03.159165   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:03.162607   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:03.658246   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:03.658270   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:03.658278   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:03.658282   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:03.661511   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:04.158942   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:04.158970   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:04.158979   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:04.158983   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:04.161958   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:10:04.658955   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:04.658979   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:04.658987   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:04.658991   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:04.662295   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:04.662958   27912 node_ready.go:53] node "ha-739930-m02" has status "Ready":"False"
	I1204 20:10:05.158173   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:05.158194   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:05.158203   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:05.158207   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:05.161194   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:10:05.658134   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:05.658157   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:05.658165   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:05.658168   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:05.661616   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:06.158855   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:06.158879   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:06.158887   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:06.158891   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:06.164708   27912 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 20:10:06.658461   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:06.658483   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:06.658491   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:06.658496   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:06.661810   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:07.158647   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:07.158674   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:07.158686   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:07.158690   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:07.161793   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:07.162345   27912 node_ready.go:53] node "ha-739930-m02" has status "Ready":"False"
	I1204 20:10:07.658727   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:07.658752   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:07.658760   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:07.658764   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:07.661982   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:08.158999   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:08.159025   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.159037   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.159043   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.162388   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:08.162849   27912 node_ready.go:49] node "ha-739930-m02" has status "Ready":"True"
	I1204 20:10:08.162868   27912 node_ready.go:38] duration metric: took 19.004941155s for node "ha-739930-m02" to be "Ready" ...
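The GET loop above is the node-readiness poll: minikube re-reads the node object roughly every 500 ms until its Ready condition reports True. A minimal client-go sketch of an equivalent check is shown below; the kubeconfig path is a placeholder and this is an illustration, not minikube's own implementation.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder kubeconfig path; the node name matches the one polled above.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-739930-m02", metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("node is Ready")
                        return
                    }
                }
            }
            time.Sleep(500 * time.Millisecond) // same cadence as the requests logged above
        }
    }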
	I1204 20:10:08.162878   27912 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 20:10:08.162968   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1204 20:10:08.162977   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.162984   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.162987   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.167331   27912 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 20:10:08.173856   27912 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7kbgr" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.173935   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-7kbgr
	I1204 20:10:08.173944   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.173953   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.173958   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.176715   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:10:08.177374   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:10:08.177387   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.177395   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.177400   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.179818   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:10:08.180446   27912 pod_ready.go:93] pod "coredns-7c65d6cfc9-7kbgr" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:08.180466   27912 pod_ready.go:82] duration metric: took 6.589083ms for pod "coredns-7c65d6cfc9-7kbgr" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.180478   27912 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8kztf" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.180546   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-8kztf
	I1204 20:10:08.180556   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.180569   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.180577   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.183177   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:10:08.183821   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:10:08.183836   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.183842   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.183847   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.186093   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:10:08.186600   27912 pod_ready.go:93] pod "coredns-7c65d6cfc9-8kztf" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:08.186617   27912 pod_ready.go:82] duration metric: took 6.131706ms for pod "coredns-7c65d6cfc9-8kztf" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.186628   27912 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.186691   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-739930
	I1204 20:10:08.186703   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.186713   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.186721   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.188940   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:10:08.189382   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:10:08.189398   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.189414   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.189420   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.191367   27912 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 20:10:08.191803   27912 pod_ready.go:93] pod "etcd-ha-739930" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:08.191818   27912 pod_ready.go:82] duration metric: took 5.18298ms for pod "etcd-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.191825   27912 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.191870   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-739930-m02
	I1204 20:10:08.191877   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.191884   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.191887   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.193844   27912 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 20:10:08.194287   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:08.194299   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.194306   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.194310   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.196400   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:10:08.196781   27912 pod_ready.go:93] pod "etcd-ha-739930-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:08.196797   27912 pod_ready.go:82] duration metric: took 4.966669ms for pod "etcd-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.196810   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.359125   27912 request.go:632] Waited for 162.263796ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-739930
	I1204 20:10:08.359211   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-739930
	I1204 20:10:08.359219   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.359230   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.359237   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.362569   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:08.559438   27912 request.go:632] Waited for 196.306856ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:10:08.559514   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:10:08.559519   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.559526   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.559534   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.562128   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:10:08.562664   27912 pod_ready.go:93] pod "kube-apiserver-ha-739930" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:08.562679   27912 pod_ready.go:82] duration metric: took 365.86397ms for pod "kube-apiserver-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.562689   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.759755   27912 request.go:632] Waited for 197.00165ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-739930-m02
	I1204 20:10:08.759821   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-739930-m02
	I1204 20:10:08.759826   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.759834   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.759837   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.763106   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:08.959132   27912 request.go:632] Waited for 195.283542ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:08.959199   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:08.959204   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.959212   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.959216   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.962369   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:08.962948   27912 pod_ready.go:93] pod "kube-apiserver-ha-739930-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:08.962965   27912 pod_ready.go:82] duration metric: took 400.270135ms for pod "kube-apiserver-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.962974   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:09.159437   27912 request.go:632] Waited for 196.391636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-739930
	I1204 20:10:09.159487   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-739930
	I1204 20:10:09.159492   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:09.159502   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:09.159507   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:09.162708   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:09.359960   27912 request.go:632] Waited for 196.36752ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:10:09.360010   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:10:09.360014   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:09.360022   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:09.360026   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:09.362729   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:10:09.363473   27912 pod_ready.go:93] pod "kube-controller-manager-ha-739930" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:09.363492   27912 pod_ready.go:82] duration metric: took 400.512945ms for pod "kube-controller-manager-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:09.363502   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:09.559607   27912 request.go:632] Waited for 196.045629ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-739930-m02
	I1204 20:10:09.559663   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-739930-m02
	I1204 20:10:09.559668   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:09.559676   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:09.559683   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:09.563302   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:09.759860   27912 request.go:632] Waited for 195.862174ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:09.759930   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:09.759935   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:09.759943   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:09.759949   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:09.762988   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:09.763689   27912 pod_ready.go:93] pod "kube-controller-manager-ha-739930-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:09.763715   27912 pod_ready.go:82] duration metric: took 400.20496ms for pod "kube-controller-manager-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:09.763729   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gtw7d" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:09.959738   27912 request.go:632] Waited for 195.93307ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gtw7d
	I1204 20:10:09.959807   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gtw7d
	I1204 20:10:09.959812   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:09.959819   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:09.959824   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:09.963156   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:10.159198   27912 request.go:632] Waited for 195.305905ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:10.159270   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:10.159275   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:10.159283   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:10.159286   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:10.162529   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:10.163056   27912 pod_ready.go:93] pod "kube-proxy-gtw7d" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:10.163074   27912 pod_ready.go:82] duration metric: took 399.337655ms for pod "kube-proxy-gtw7d" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:10.163084   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tlhfv" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:10.359093   27912 request.go:632] Waited for 195.949947ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tlhfv
	I1204 20:10:10.359150   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tlhfv
	I1204 20:10:10.359172   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:10.359182   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:10.359192   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:10.362392   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:10.559558   27912 request.go:632] Waited for 196.399776ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:10:10.559639   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:10:10.559653   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:10.559664   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:10.559670   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:10.564370   27912 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 20:10:10.564877   27912 pod_ready.go:93] pod "kube-proxy-tlhfv" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:10.564896   27912 pod_ready.go:82] duration metric: took 401.805669ms for pod "kube-proxy-tlhfv" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:10.564906   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:10.759943   27912 request.go:632] Waited for 194.973279ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-739930
	I1204 20:10:10.760006   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-739930
	I1204 20:10:10.760013   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:10.760021   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:10.760027   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:10.763726   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:10.959656   27912 request.go:632] Waited for 195.375986ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:10:10.959714   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:10:10.959719   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:10.959726   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:10.959731   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:10.963524   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:10.964360   27912 pod_ready.go:93] pod "kube-scheduler-ha-739930" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:10.964375   27912 pod_ready.go:82] duration metric: took 399.464088ms for pod "kube-scheduler-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:10.964389   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:11.159456   27912 request.go:632] Waited for 194.987845ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-739930-m02
	I1204 20:10:11.159527   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-739930-m02
	I1204 20:10:11.159532   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:11.159539   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:11.159543   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:11.163395   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:11.359362   27912 request.go:632] Waited for 195.347282ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:11.359439   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:11.359446   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:11.359458   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:11.359467   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:11.362635   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:11.363122   27912 pod_ready.go:93] pod "kube-scheduler-ha-739930-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:11.363138   27912 pod_ready.go:82] duration metric: took 398.74121ms for pod "kube-scheduler-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:11.363148   27912 pod_ready.go:39] duration metric: took 3.200239096s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
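Many of the lines above read "Waited for ... due to client-side throttling, not priority and fairness". These come from client-go's built-in token-bucket rate limiter: once the client fires requests faster than its configured QPS/Burst (the library defaults are 5 and 10), later calls are delayed and the wait is logged. A small sketch, reusing the cfg from the previous example, with purely illustrative limits:

    // Raise the client-side rate limits before building the clientset.
    // The defaults (QPS 5, Burst 10) are what produce the throttling waits in this log.
    cfg.QPS = 50
    cfg.Burst = 100
    client, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }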
	I1204 20:10:11.363164   27912 api_server.go:52] waiting for apiserver process to appear ...
	I1204 20:10:11.363207   27912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 20:10:11.377015   27912 api_server.go:72] duration metric: took 22.529160197s to wait for apiserver process to appear ...
	I1204 20:10:11.377034   27912 api_server.go:88] waiting for apiserver healthz status ...
	I1204 20:10:11.377052   27912 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I1204 20:10:11.380929   27912 api_server.go:279] https://192.168.39.183:8443/healthz returned 200:
	ok
	I1204 20:10:11.380976   27912 round_trippers.go:463] GET https://192.168.39.183:8443/version
	I1204 20:10:11.380983   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:11.380999   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:11.381003   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:11.381838   27912 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1204 20:10:11.381917   27912 api_server.go:141] control plane version: v1.31.2
	I1204 20:10:11.381931   27912 api_server.go:131] duration metric: took 4.890825ms to wait for apiserver health ...
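The healthz step above is a plain HTTPS GET against the apiserver, considered healthy once the body is "ok". A bare-bones equivalent in Go is sketched below; TLS verification is skipped only for brevity, and clusters with anonymous auth disabled would also need client credentials.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // InsecureSkipVerify only because minikube's apiserver uses a self-signed CA;
        // a real check should load the cluster CA instead.
        tr := &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}
        client := &http.Client{Transport: tr, Timeout: 5 * time.Second}
        resp, err := client.Get("https://192.168.39.183:8443/healthz")
        if err != nil {
            fmt.Println("healthz probe failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.StatusCode, string(body)) // the log above shows 200 with body "ok"
    }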
	I1204 20:10:11.381937   27912 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 20:10:11.559327   27912 request.go:632] Waited for 177.330525ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1204 20:10:11.559453   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1204 20:10:11.559495   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:11.559519   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:11.559528   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:11.566679   27912 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1204 20:10:11.572558   27912 system_pods.go:59] 17 kube-system pods found
	I1204 20:10:11.572586   27912 system_pods.go:61] "coredns-7c65d6cfc9-7kbgr" [662019c2-29e8-4437-8b14-f9fbf1268d03] Running
	I1204 20:10:11.572592   27912 system_pods.go:61] "coredns-7c65d6cfc9-8kztf" [40363110-9dbd-47ae-8aec-70630543d005] Running
	I1204 20:10:11.572597   27912 system_pods.go:61] "etcd-ha-739930" [35305e9d-e464-498a-b2a7-6008dcaaf04c] Running
	I1204 20:10:11.572600   27912 system_pods.go:61] "etcd-ha-739930-m02" [b870f77d-f65a-4d00-b8da-27bf2f696d35] Running
	I1204 20:10:11.572604   27912 system_pods.go:61] "kindnet-8wsgw" [d8bc54cd-d100-43fa-bda8-28ee9b58b947] Running
	I1204 20:10:11.572607   27912 system_pods.go:61] "kindnet-z6v65" [233b2af5-60f4-4f70-a63f-f7238cfbc55c] Running
	I1204 20:10:11.572612   27912 system_pods.go:61] "kube-apiserver-ha-739930" [d1943e08-b292-4551-bcc7-a14adc4ec336] Running
	I1204 20:10:11.572617   27912 system_pods.go:61] "kube-apiserver-ha-739930-m02" [b05a68fa-e419-43b6-ae14-08dd1635b446] Running
	I1204 20:10:11.572623   27912 system_pods.go:61] "kube-controller-manager-ha-739930" [3db9ec12-4c55-4a78-bef1-4f4cf8f38ae0] Running
	I1204 20:10:11.572628   27912 system_pods.go:61] "kube-controller-manager-ha-739930-m02" [01426d54-9156-4288-b9ae-c639167795b4] Running
	I1204 20:10:11.572635   27912 system_pods.go:61] "kube-proxy-gtw7d" [4481a753-5064-41a6-8f2c-d4710b8ad7bb] Running
	I1204 20:10:11.572641   27912 system_pods.go:61] "kube-proxy-tlhfv" [2f01e7f6-5af2-490b-8a2c-266e1701c102] Running
	I1204 20:10:11.572646   27912 system_pods.go:61] "kube-scheduler-ha-739930" [cc1e6978-7082-494a-afce-e754a35e9b76] Running
	I1204 20:10:11.572651   27912 system_pods.go:61] "kube-scheduler-ha-739930-m02" [cd7d0a65-99e9-4377-9088-f2d7d7165982] Running
	I1204 20:10:11.572655   27912 system_pods.go:61] "kube-vip-ha-739930" [524e54ee-5407-44c3-a2e4-d029f7e6a003] Running
	I1204 20:10:11.572658   27912 system_pods.go:61] "kube-vip-ha-739930-m02" [77595bf0-7e49-4ead-98b0-e1cc5b8533d7] Running
	I1204 20:10:11.572661   27912 system_pods.go:61] "storage-provisioner" [84dfb457-b91f-4070-aa2a-9fbe4c6dd7c8] Running
	I1204 20:10:11.572670   27912 system_pods.go:74] duration metric: took 190.727819ms to wait for pod list to return data ...
	I1204 20:10:11.572678   27912 default_sa.go:34] waiting for default service account to be created ...
	I1204 20:10:11.759027   27912 request.go:632] Waited for 186.27116ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/default/serviceaccounts
	I1204 20:10:11.759095   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/default/serviceaccounts
	I1204 20:10:11.759100   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:11.759108   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:11.759113   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:11.763664   27912 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 20:10:11.763867   27912 default_sa.go:45] found service account: "default"
	I1204 20:10:11.763882   27912 default_sa.go:55] duration metric: took 191.195892ms for default service account to be created ...
	I1204 20:10:11.763890   27912 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 20:10:11.959431   27912 request.go:632] Waited for 195.47766ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1204 20:10:11.959540   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1204 20:10:11.959553   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:11.959560   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:11.959566   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:11.965051   27912 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 20:10:11.970022   27912 system_pods.go:86] 17 kube-system pods found
	I1204 20:10:11.970046   27912 system_pods.go:89] "coredns-7c65d6cfc9-7kbgr" [662019c2-29e8-4437-8b14-f9fbf1268d03] Running
	I1204 20:10:11.970051   27912 system_pods.go:89] "coredns-7c65d6cfc9-8kztf" [40363110-9dbd-47ae-8aec-70630543d005] Running
	I1204 20:10:11.970055   27912 system_pods.go:89] "etcd-ha-739930" [35305e9d-e464-498a-b2a7-6008dcaaf04c] Running
	I1204 20:10:11.970059   27912 system_pods.go:89] "etcd-ha-739930-m02" [b870f77d-f65a-4d00-b8da-27bf2f696d35] Running
	I1204 20:10:11.970067   27912 system_pods.go:89] "kindnet-8wsgw" [d8bc54cd-d100-43fa-bda8-28ee9b58b947] Running
	I1204 20:10:11.970071   27912 system_pods.go:89] "kindnet-z6v65" [233b2af5-60f4-4f70-a63f-f7238cfbc55c] Running
	I1204 20:10:11.970074   27912 system_pods.go:89] "kube-apiserver-ha-739930" [d1943e08-b292-4551-bcc7-a14adc4ec336] Running
	I1204 20:10:11.970078   27912 system_pods.go:89] "kube-apiserver-ha-739930-m02" [b05a68fa-e419-43b6-ae14-08dd1635b446] Running
	I1204 20:10:11.970082   27912 system_pods.go:89] "kube-controller-manager-ha-739930" [3db9ec12-4c55-4a78-bef1-4f4cf8f38ae0] Running
	I1204 20:10:11.970088   27912 system_pods.go:89] "kube-controller-manager-ha-739930-m02" [01426d54-9156-4288-b9ae-c639167795b4] Running
	I1204 20:10:11.970091   27912 system_pods.go:89] "kube-proxy-gtw7d" [4481a753-5064-41a6-8f2c-d4710b8ad7bb] Running
	I1204 20:10:11.970095   27912 system_pods.go:89] "kube-proxy-tlhfv" [2f01e7f6-5af2-490b-8a2c-266e1701c102] Running
	I1204 20:10:11.970098   27912 system_pods.go:89] "kube-scheduler-ha-739930" [cc1e6978-7082-494a-afce-e754a35e9b76] Running
	I1204 20:10:11.970100   27912 system_pods.go:89] "kube-scheduler-ha-739930-m02" [cd7d0a65-99e9-4377-9088-f2d7d7165982] Running
	I1204 20:10:11.970103   27912 system_pods.go:89] "kube-vip-ha-739930" [524e54ee-5407-44c3-a2e4-d029f7e6a003] Running
	I1204 20:10:11.970106   27912 system_pods.go:89] "kube-vip-ha-739930-m02" [77595bf0-7e49-4ead-98b0-e1cc5b8533d7] Running
	I1204 20:10:11.970114   27912 system_pods.go:89] "storage-provisioner" [84dfb457-b91f-4070-aa2a-9fbe4c6dd7c8] Running
	I1204 20:10:11.970124   27912 system_pods.go:126] duration metric: took 206.228874ms to wait for k8s-apps to be running ...
	I1204 20:10:11.970130   27912 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 20:10:11.970170   27912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 20:10:11.984252   27912 system_svc.go:56] duration metric: took 14.113655ms WaitForService to wait for kubelet
	I1204 20:10:11.984285   27912 kubeadm.go:582] duration metric: took 23.13642897s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 20:10:11.984305   27912 node_conditions.go:102] verifying NodePressure condition ...
	I1204 20:10:12.159992   27912 request.go:632] Waited for 175.622844ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes
	I1204 20:10:12.160074   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes
	I1204 20:10:12.160081   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:12.160088   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:12.160092   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:12.163352   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:12.164036   27912 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 20:10:12.164057   27912 node_conditions.go:123] node cpu capacity is 2
	I1204 20:10:12.164070   27912 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 20:10:12.164075   27912 node_conditions.go:123] node cpu capacity is 2
	I1204 20:10:12.164081   27912 node_conditions.go:105] duration metric: took 179.770433ms to run NodePressure ...
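The NodePressure step lists every node and reads capacity straight from the node status; both nodes here report 17734596Ki of ephemeral storage and 2 CPUs. An equivalent read with client-go, reusing the client from the first sketch:

    nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    if err == nil {
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
        }
    }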
	I1204 20:10:12.164096   27912 start.go:241] waiting for startup goroutines ...
	I1204 20:10:12.164129   27912 start.go:255] writing updated cluster config ...
	I1204 20:10:12.166221   27912 out.go:201] 
	I1204 20:10:12.167682   27912 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:10:12.167793   27912 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/config.json ...
	I1204 20:10:12.169433   27912 out.go:177] * Starting "ha-739930-m03" control-plane node in "ha-739930" cluster
	I1204 20:10:12.170619   27912 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 20:10:12.170641   27912 cache.go:56] Caching tarball of preloaded images
	I1204 20:10:12.170743   27912 preload.go:172] Found /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 20:10:12.170758   27912 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 20:10:12.170867   27912 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/config.json ...
	I1204 20:10:12.171047   27912 start.go:360] acquireMachinesLock for ha-739930-m03: {Name:mkf124e8b45170ae95981b24944344de6899c5b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 20:10:12.171095   27912 start.go:364] duration metric: took 28.989µs to acquireMachinesLock for "ha-739930-m03"
	I1204 20:10:12.171119   27912 start.go:93] Provisioning new machine with config: &{Name:ha-739930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 20:10:12.171232   27912 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1204 20:10:12.172689   27912 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 20:10:12.172776   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:10:12.172819   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:10:12.188562   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34093
	I1204 20:10:12.189008   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:10:12.189520   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:10:12.189541   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:10:12.189894   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:10:12.190074   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetMachineName
	I1204 20:10:12.190188   27912 main.go:141] libmachine: (ha-739930-m03) Calling .DriverName
	I1204 20:10:12.190394   27912 start.go:159] libmachine.API.Create for "ha-739930" (driver="kvm2")
	I1204 20:10:12.190426   27912 client.go:168] LocalClient.Create starting
	I1204 20:10:12.190471   27912 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem
	I1204 20:10:12.190508   27912 main.go:141] libmachine: Decoding PEM data...
	I1204 20:10:12.190530   27912 main.go:141] libmachine: Parsing certificate...
	I1204 20:10:12.190598   27912 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem
	I1204 20:10:12.190629   27912 main.go:141] libmachine: Decoding PEM data...
	I1204 20:10:12.190652   27912 main.go:141] libmachine: Parsing certificate...
	I1204 20:10:12.190679   27912 main.go:141] libmachine: Running pre-create checks...
	I1204 20:10:12.190691   27912 main.go:141] libmachine: (ha-739930-m03) Calling .PreCreateCheck
	I1204 20:10:12.190909   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetConfigRaw
	I1204 20:10:12.191309   27912 main.go:141] libmachine: Creating machine...
	I1204 20:10:12.191322   27912 main.go:141] libmachine: (ha-739930-m03) Calling .Create
	I1204 20:10:12.191476   27912 main.go:141] libmachine: (ha-739930-m03) Creating KVM machine...
	I1204 20:10:12.192652   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found existing default KVM network
	I1204 20:10:12.192779   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found existing private KVM network mk-ha-739930
	I1204 20:10:12.192908   27912 main.go:141] libmachine: (ha-739930-m03) Setting up store path in /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03 ...
	I1204 20:10:12.192934   27912 main.go:141] libmachine: (ha-739930-m03) Building disk image from file:///home/jenkins/minikube-integration/19985-10581/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1204 20:10:12.192988   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:12.192887   28697 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 20:10:12.193089   27912 main.go:141] libmachine: (ha-739930-m03) Downloading /home/jenkins/minikube-integration/19985-10581/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19985-10581/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1204 20:10:12.422847   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:12.422708   28697 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03/id_rsa...
	I1204 20:10:12.571024   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:12.570898   28697 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03/ha-739930-m03.rawdisk...
	I1204 20:10:12.571065   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Writing magic tar header
	I1204 20:10:12.571083   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Writing SSH key tar header
	I1204 20:10:12.571096   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:12.571045   28697 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03 ...
	I1204 20:10:12.571246   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03
	I1204 20:10:12.571291   27912 main.go:141] libmachine: (ha-739930-m03) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03 (perms=drwx------)
	I1204 20:10:12.571302   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube/machines
	I1204 20:10:12.571314   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 20:10:12.571323   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581
	I1204 20:10:12.571331   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1204 20:10:12.571339   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Checking permissions on dir: /home/jenkins
	I1204 20:10:12.571346   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Checking permissions on dir: /home
	I1204 20:10:12.571354   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Skipping /home - not owner
	I1204 20:10:12.571391   27912 main.go:141] libmachine: (ha-739930-m03) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube/machines (perms=drwxr-xr-x)
	I1204 20:10:12.571415   27912 main.go:141] libmachine: (ha-739930-m03) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube (perms=drwxr-xr-x)
	I1204 20:10:12.571432   27912 main.go:141] libmachine: (ha-739930-m03) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581 (perms=drwxrwxr-x)
	I1204 20:10:12.571447   27912 main.go:141] libmachine: (ha-739930-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1204 20:10:12.571458   27912 main.go:141] libmachine: (ha-739930-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1204 20:10:12.571477   27912 main.go:141] libmachine: (ha-739930-m03) Creating domain...
	I1204 20:10:12.572409   27912 main.go:141] libmachine: (ha-739930-m03) define libvirt domain using xml: 
	I1204 20:10:12.572438   27912 main.go:141] libmachine: (ha-739930-m03) <domain type='kvm'>
	I1204 20:10:12.572449   27912 main.go:141] libmachine: (ha-739930-m03)   <name>ha-739930-m03</name>
	I1204 20:10:12.572461   27912 main.go:141] libmachine: (ha-739930-m03)   <memory unit='MiB'>2200</memory>
	I1204 20:10:12.572474   27912 main.go:141] libmachine: (ha-739930-m03)   <vcpu>2</vcpu>
	I1204 20:10:12.572480   27912 main.go:141] libmachine: (ha-739930-m03)   <features>
	I1204 20:10:12.572490   27912 main.go:141] libmachine: (ha-739930-m03)     <acpi/>
	I1204 20:10:12.572496   27912 main.go:141] libmachine: (ha-739930-m03)     <apic/>
	I1204 20:10:12.572505   27912 main.go:141] libmachine: (ha-739930-m03)     <pae/>
	I1204 20:10:12.572511   27912 main.go:141] libmachine: (ha-739930-m03)     
	I1204 20:10:12.572522   27912 main.go:141] libmachine: (ha-739930-m03)   </features>
	I1204 20:10:12.572529   27912 main.go:141] libmachine: (ha-739930-m03)   <cpu mode='host-passthrough'>
	I1204 20:10:12.572539   27912 main.go:141] libmachine: (ha-739930-m03)   
	I1204 20:10:12.572549   27912 main.go:141] libmachine: (ha-739930-m03)   </cpu>
	I1204 20:10:12.572577   27912 main.go:141] libmachine: (ha-739930-m03)   <os>
	I1204 20:10:12.572599   27912 main.go:141] libmachine: (ha-739930-m03)     <type>hvm</type>
	I1204 20:10:12.572612   27912 main.go:141] libmachine: (ha-739930-m03)     <boot dev='cdrom'/>
	I1204 20:10:12.572622   27912 main.go:141] libmachine: (ha-739930-m03)     <boot dev='hd'/>
	I1204 20:10:12.572630   27912 main.go:141] libmachine: (ha-739930-m03)     <bootmenu enable='no'/>
	I1204 20:10:12.572640   27912 main.go:141] libmachine: (ha-739930-m03)   </os>
	I1204 20:10:12.572648   27912 main.go:141] libmachine: (ha-739930-m03)   <devices>
	I1204 20:10:12.572659   27912 main.go:141] libmachine: (ha-739930-m03)     <disk type='file' device='cdrom'>
	I1204 20:10:12.572673   27912 main.go:141] libmachine: (ha-739930-m03)       <source file='/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03/boot2docker.iso'/>
	I1204 20:10:12.572688   27912 main.go:141] libmachine: (ha-739930-m03)       <target dev='hdc' bus='scsi'/>
	I1204 20:10:12.572708   27912 main.go:141] libmachine: (ha-739930-m03)       <readonly/>
	I1204 20:10:12.572721   27912 main.go:141] libmachine: (ha-739930-m03)     </disk>
	I1204 20:10:12.572747   27912 main.go:141] libmachine: (ha-739930-m03)     <disk type='file' device='disk'>
	I1204 20:10:12.572758   27912 main.go:141] libmachine: (ha-739930-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1204 20:10:12.572766   27912 main.go:141] libmachine: (ha-739930-m03)       <source file='/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03/ha-739930-m03.rawdisk'/>
	I1204 20:10:12.572780   27912 main.go:141] libmachine: (ha-739930-m03)       <target dev='hda' bus='virtio'/>
	I1204 20:10:12.572788   27912 main.go:141] libmachine: (ha-739930-m03)     </disk>
	I1204 20:10:12.572792   27912 main.go:141] libmachine: (ha-739930-m03)     <interface type='network'>
	I1204 20:10:12.572798   27912 main.go:141] libmachine: (ha-739930-m03)       <source network='mk-ha-739930'/>
	I1204 20:10:12.572802   27912 main.go:141] libmachine: (ha-739930-m03)       <model type='virtio'/>
	I1204 20:10:12.572807   27912 main.go:141] libmachine: (ha-739930-m03)     </interface>
	I1204 20:10:12.572814   27912 main.go:141] libmachine: (ha-739930-m03)     <interface type='network'>
	I1204 20:10:12.572819   27912 main.go:141] libmachine: (ha-739930-m03)       <source network='default'/>
	I1204 20:10:12.572825   27912 main.go:141] libmachine: (ha-739930-m03)       <model type='virtio'/>
	I1204 20:10:12.572842   27912 main.go:141] libmachine: (ha-739930-m03)     </interface>
	I1204 20:10:12.572860   27912 main.go:141] libmachine: (ha-739930-m03)     <serial type='pty'>
	I1204 20:10:12.572872   27912 main.go:141] libmachine: (ha-739930-m03)       <target port='0'/>
	I1204 20:10:12.572883   27912 main.go:141] libmachine: (ha-739930-m03)     </serial>
	I1204 20:10:12.572904   27912 main.go:141] libmachine: (ha-739930-m03)     <console type='pty'>
	I1204 20:10:12.572914   27912 main.go:141] libmachine: (ha-739930-m03)       <target type='serial' port='0'/>
	I1204 20:10:12.572922   27912 main.go:141] libmachine: (ha-739930-m03)     </console>
	I1204 20:10:12.572932   27912 main.go:141] libmachine: (ha-739930-m03)     <rng model='virtio'>
	I1204 20:10:12.572945   27912 main.go:141] libmachine: (ha-739930-m03)       <backend model='random'>/dev/random</backend>
	I1204 20:10:12.572957   27912 main.go:141] libmachine: (ha-739930-m03)     </rng>
	I1204 20:10:12.572965   27912 main.go:141] libmachine: (ha-739930-m03)     
	I1204 20:10:12.572973   27912 main.go:141] libmachine: (ha-739930-m03)     
	I1204 20:10:12.572983   27912 main.go:141] libmachine: (ha-739930-m03)   </devices>
	I1204 20:10:12.572991   27912 main.go:141] libmachine: (ha-739930-m03) </domain>
	I1204 20:10:12.572996   27912 main.go:141] libmachine: (ha-739930-m03) 
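	The block above is the libvirt domain XML that the kvm2 driver defines for the new node (memory, vCPUs, boot ISO, raw disk, and the two virtio NICs on `default` and `mk-ha-739930`). As an illustrative sketch only — the driver talks to the libvirt API directly, and the XML and name below are minimal placeholders, not the logged definition — the same "define the domain from XML" step looks like this with `virsh`:

```go
// definedomain.go - illustrative sketch: register a libvirt domain from an XML
// description by shelling out to virsh. The XML is a minimal placeholder, not
// the full definition logged above; starting it would still need disks/NICs.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func defineDomain(xml string) error {
	f, err := os.CreateTemp("", "domain-*.xml")
	if err != nil {
		return err
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(xml); err != nil {
		return err
	}
	f.Close()

	// "virsh define" registers the domain with libvirt; "virsh start" would boot it.
	out, err := exec.Command("virsh", "--connect", "qemu:///system", "define", f.Name()).CombinedOutput()
	if err != nil {
		return fmt.Errorf("virsh define: %v: %s", err, out)
	}
	return nil
}

func main() {
	xml := `<domain type='kvm'>
  <name>demo-node</name>
  <memory unit='MiB'>2200</memory>
  <vcpu>2</vcpu>
  <os><type>hvm</type></os>
</domain>`
	if err := defineDomain(xml); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```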
	I1204 20:10:12.580033   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:71:b7:c8 in network default
	I1204 20:10:12.580713   27912 main.go:141] libmachine: (ha-739930-m03) Ensuring networks are active...
	I1204 20:10:12.580737   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:12.581680   27912 main.go:141] libmachine: (ha-739930-m03) Ensuring network default is active
	I1204 20:10:12.582031   27912 main.go:141] libmachine: (ha-739930-m03) Ensuring network mk-ha-739930 is active
	I1204 20:10:12.582464   27912 main.go:141] libmachine: (ha-739930-m03) Getting domain xml...
	I1204 20:10:12.583287   27912 main.go:141] libmachine: (ha-739930-m03) Creating domain...
	I1204 20:10:13.809969   27912 main.go:141] libmachine: (ha-739930-m03) Waiting to get IP...
	I1204 20:10:13.810804   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:13.811158   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:13.811215   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:13.811149   28697 retry.go:31] will retry after 211.474142ms: waiting for machine to come up
	I1204 20:10:14.024550   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:14.024996   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:14.025024   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:14.024958   28697 retry.go:31] will retry after 355.071975ms: waiting for machine to come up
	I1204 20:10:14.381391   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:14.381825   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:14.381857   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:14.381781   28697 retry.go:31] will retry after 319.974042ms: waiting for machine to come up
	I1204 20:10:14.703466   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:14.703910   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:14.703951   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:14.703877   28697 retry.go:31] will retry after 609.562735ms: waiting for machine to come up
	I1204 20:10:15.314561   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:15.315069   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:15.315101   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:15.315013   28697 retry.go:31] will retry after 486.973077ms: waiting for machine to come up
	I1204 20:10:15.803653   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:15.804185   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:15.804213   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:15.804126   28697 retry.go:31] will retry after 675.766149ms: waiting for machine to come up
	I1204 20:10:16.481967   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:16.482459   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:16.482489   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:16.482406   28697 retry.go:31] will retry after 1.174103834s: waiting for machine to come up
	I1204 20:10:17.658189   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:17.658580   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:17.658608   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:17.658533   28697 retry.go:31] will retry after 1.454065165s: waiting for machine to come up
	I1204 20:10:19.114276   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:19.114810   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:19.114839   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:19.114726   28697 retry.go:31] will retry after 1.181631433s: waiting for machine to come up
	I1204 20:10:20.297423   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:20.297826   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:20.297856   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:20.297775   28697 retry.go:31] will retry after 1.797113318s: waiting for machine to come up
	I1204 20:10:22.096493   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:22.096936   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:22.096963   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:22.096891   28697 retry.go:31] will retry after 2.640330643s: waiting for machine to come up
	I1204 20:10:24.740014   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:24.740549   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:24.740589   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:24.740509   28697 retry.go:31] will retry after 3.427854139s: waiting for machine to come up
	I1204 20:10:28.170039   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:28.170450   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:28.170480   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:28.170413   28697 retry.go:31] will retry after 3.100818386s: waiting for machine to come up
	I1204 20:10:31.273778   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:31.274339   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:31.274370   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:31.274261   28697 retry.go:31] will retry after 5.17411421s: waiting for machine to come up
	I1204 20:10:36.453055   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:36.453514   27912 main.go:141] libmachine: (ha-739930-m03) Found IP for machine: 192.168.39.176
	I1204 20:10:36.453546   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has current primary IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:36.453554   27912 main.go:141] libmachine: (ha-739930-m03) Reserving static IP address...
	I1204 20:10:36.453982   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find host DHCP lease matching {name: "ha-739930-m03", mac: "52:54:00:8f:55:42", ip: "192.168.39.176"} in network mk-ha-739930
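	The "Waiting to get IP" retries above are a simple poll-with-growing-delay loop: check for a DHCP lease, and if it is not there yet, sleep a bit longer than last time and try again. A minimal generic sketch of that pattern (the probe function, delays, and cap are illustrative, not the driver's exact retry.go behavior):

```go
// pollbackoff.go - sketch: poll a condition with an increasing, jittered delay
// until it succeeds or a deadline passes. Generic illustration only.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func waitFor(probe func() (bool, error), timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		ok, err := probe()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		// Grow the delay with jitter, capped so polling stays reasonably frequent.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 3*time.Second {
			delay *= 2
		}
	}
	return errors.New("timed out waiting for machine")
}

func main() {
	attempts := 0
	_ = waitFor(func() (bool, error) {
		attempts++
		return attempts >= 5, nil // pretend the DHCP lease shows up on the 5th check
	}, 2*time.Minute)
}
```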
	I1204 20:10:36.527779   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Getting to WaitForSSH function...
	I1204 20:10:36.527812   27912 main.go:141] libmachine: (ha-739930-m03) Reserved static IP address: 192.168.39.176
	I1204 20:10:36.527825   27912 main.go:141] libmachine: (ha-739930-m03) Waiting for SSH to be available...
	I1204 20:10:36.530460   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:36.530890   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:36.530918   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:36.531105   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Using SSH client type: external
	I1204 20:10:36.531134   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03/id_rsa (-rw-------)
	I1204 20:10:36.531171   27912 main.go:141] libmachine: (ha-739930-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.176 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 20:10:36.531193   27912 main.go:141] libmachine: (ha-739930-m03) DBG | About to run SSH command:
	I1204 20:10:36.531210   27912 main.go:141] libmachine: (ha-739930-m03) DBG | exit 0
	I1204 20:10:36.659229   27912 main.go:141] libmachine: (ha-739930-m03) DBG | SSH cmd err, output: <nil>: 
	I1204 20:10:36.659536   27912 main.go:141] libmachine: (ha-739930-m03) KVM machine creation complete!
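	The WaitForSSH step above probes the guest by running `exit 0` through an external ssh client with host-key checking disabled; the exact flags are visible in the debug line. A hedged sketch of the same liveness probe from Go, with the address and key path as placeholders:

```go
// sshprobe.go - sketch: confirm a freshly created VM accepts SSH by running
// "exit 0" via the system ssh binary. Flags mirror the logged command line;
// the address and key path are placeholders.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func sshReady(addr, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@" + addr,
		"exit 0",
	}
	out, err := exec.Command("ssh", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ssh not ready: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := sshReady("192.168.39.176", os.ExpandEnv("$HOME/.ssh/id_rsa")); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```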
	I1204 20:10:36.659863   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetConfigRaw
	I1204 20:10:36.660403   27912 main.go:141] libmachine: (ha-739930-m03) Calling .DriverName
	I1204 20:10:36.660622   27912 main.go:141] libmachine: (ha-739930-m03) Calling .DriverName
	I1204 20:10:36.660802   27912 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1204 20:10:36.660816   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetState
	I1204 20:10:36.662148   27912 main.go:141] libmachine: Detecting operating system of created instance...
	I1204 20:10:36.662160   27912 main.go:141] libmachine: Waiting for SSH to be available...
	I1204 20:10:36.662181   27912 main.go:141] libmachine: Getting to WaitForSSH function...
	I1204 20:10:36.662187   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHHostname
	I1204 20:10:36.664336   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:36.664681   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:36.664694   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:36.664829   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHPort
	I1204 20:10:36.664988   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:36.665140   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:36.665284   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHUsername
	I1204 20:10:36.665446   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:10:36.665639   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1204 20:10:36.665651   27912 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1204 20:10:36.774558   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 20:10:36.774575   27912 main.go:141] libmachine: Detecting the provisioner...
	I1204 20:10:36.774582   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHHostname
	I1204 20:10:36.777253   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:36.777655   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:36.777682   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:36.777862   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHPort
	I1204 20:10:36.778048   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:36.778224   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:36.778333   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHUsername
	I1204 20:10:36.778478   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:10:36.778662   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1204 20:10:36.778673   27912 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1204 20:10:36.891601   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1204 20:10:36.891668   27912 main.go:141] libmachine: found compatible host: buildroot
	I1204 20:10:36.891681   27912 main.go:141] libmachine: Provisioning with buildroot...
	I1204 20:10:36.891691   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetMachineName
	I1204 20:10:36.891891   27912 buildroot.go:166] provisioning hostname "ha-739930-m03"
	I1204 20:10:36.891918   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetMachineName
	I1204 20:10:36.892100   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHHostname
	I1204 20:10:36.894477   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:36.894866   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:36.894903   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:36.895026   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHPort
	I1204 20:10:36.895181   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:36.895327   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:36.895457   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHUsername
	I1204 20:10:36.895582   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:10:36.895780   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1204 20:10:36.895798   27912 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-739930-m03 && echo "ha-739930-m03" | sudo tee /etc/hostname
	I1204 20:10:37.022149   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-739930-m03
	
	I1204 20:10:37.022188   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHHostname
	I1204 20:10:37.024859   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.025302   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:37.025324   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.025555   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHPort
	I1204 20:10:37.025739   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:37.025923   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:37.026044   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHUsername
	I1204 20:10:37.026196   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:10:37.026355   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1204 20:10:37.026371   27912 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-739930-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-739930-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-739930-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 20:10:37.143730   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
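	The hostname provisioning above runs two remote commands: set the hostname and write /etc/hostname, then make /etc/hosts resolve the new name, rewriting an existing 127.0.1.1 entry or appending one if none exists. A sketch that assembles that same shell (runCmd-style execution over the node's SSH runner is assumed; here the commands are only printed):

```go
// hostnamecmd.go - sketch: build the provisioning shell that sets a hostname
// and keeps /etc/hosts consistent with it, mirroring the logged commands.
package main

import "fmt"

func hostnameCommands(name string) []string {
	return []string{
		fmt.Sprintf(`sudo hostname %s && echo "%s" | sudo tee /etc/hostname`, name, name),
		fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, name),
	}
}

func main() {
	for _, c := range hostnameCommands("ha-739930-m03") {
		fmt.Println("would run over SSH:", c)
	}
}
```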
	I1204 20:10:37.143754   27912 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 20:10:37.143777   27912 buildroot.go:174] setting up certificates
	I1204 20:10:37.143788   27912 provision.go:84] configureAuth start
	I1204 20:10:37.143795   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetMachineName
	I1204 20:10:37.144053   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetIP
	I1204 20:10:37.146742   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.147064   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:37.147095   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.147234   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHHostname
	I1204 20:10:37.149352   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.149692   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:37.149719   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.149832   27912 provision.go:143] copyHostCerts
	I1204 20:10:37.149875   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 20:10:37.149914   27912 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 20:10:37.149926   27912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 20:10:37.150010   27912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 20:10:37.150120   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 20:10:37.150164   27912 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 20:10:37.150175   27912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 20:10:37.150216   27912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 20:10:37.150301   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 20:10:37.150325   27912 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 20:10:37.150331   27912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 20:10:37.150367   27912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 20:10:37.150468   27912 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.ha-739930-m03 san=[127.0.0.1 192.168.39.176 ha-739930-m03 localhost minikube]
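	The line above issues the per-node server certificate signed by the shared minikube CA, with the node IP, hostname, loopback, and cluster names in the SAN list. A compressed sketch of that idea with crypto/x509 — self-contained with a throwaway CA purely to keep it runnable, whereas the provisioner loads the existing ca.pem / ca-key.pem:

```go
// servercert.go - sketch: issue a server certificate with IP and DNS SANs,
// signed by a CA key pair. The CA here is generated in-process only so the
// example is self-contained; real provisioning reuses the existing CA files.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "ha-739930-m03", Organization: []string{"jenkins.ha-739930-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.176")},
		DNSNames:     []string{"ha-739930-m03", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Printf("issued server cert, %d bytes DER\n", len(srvDER))
}
```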
	I1204 20:10:37.504595   27912 provision.go:177] copyRemoteCerts
	I1204 20:10:37.504652   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 20:10:37.504676   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHHostname
	I1204 20:10:37.507572   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.507995   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:37.508023   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.508251   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHPort
	I1204 20:10:37.508469   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:37.508628   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHUsername
	I1204 20:10:37.508752   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03/id_rsa Username:docker}
	I1204 20:10:37.592737   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1204 20:10:37.592815   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 20:10:37.614702   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1204 20:10:37.614759   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1204 20:10:37.636793   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1204 20:10:37.636856   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1204 20:10:37.657514   27912 provision.go:87] duration metric: took 513.715697ms to configureAuth
	I1204 20:10:37.657537   27912 buildroot.go:189] setting minikube options for container-runtime
	I1204 20:10:37.657776   27912 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:10:37.657846   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHHostname
	I1204 20:10:37.660375   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.660716   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:37.660743   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.660915   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHPort
	I1204 20:10:37.661101   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:37.661283   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:37.661394   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHUsername
	I1204 20:10:37.661530   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:10:37.661715   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1204 20:10:37.661731   27912 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 20:10:37.909620   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 20:10:37.909653   27912 main.go:141] libmachine: Checking connection to Docker...
	I1204 20:10:37.909661   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetURL
	I1204 20:10:37.911012   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Using libvirt version 6000000
	I1204 20:10:37.913430   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.913836   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:37.913865   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.913996   27912 main.go:141] libmachine: Docker is up and running!
	I1204 20:10:37.914009   27912 main.go:141] libmachine: Reticulating splines...
	I1204 20:10:37.914014   27912 client.go:171] duration metric: took 25.723578899s to LocalClient.Create
	I1204 20:10:37.914034   27912 start.go:167] duration metric: took 25.723643031s to libmachine.API.Create "ha-739930"
	I1204 20:10:37.914045   27912 start.go:293] postStartSetup for "ha-739930-m03" (driver="kvm2")
	I1204 20:10:37.914058   27912 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 20:10:37.914082   27912 main.go:141] libmachine: (ha-739930-m03) Calling .DriverName
	I1204 20:10:37.914308   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 20:10:37.914329   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHHostname
	I1204 20:10:37.916698   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.917013   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:37.917037   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.917163   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHPort
	I1204 20:10:37.917355   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:37.917507   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHUsername
	I1204 20:10:37.917647   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03/id_rsa Username:docker}
	I1204 20:10:38.000720   27912 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 20:10:38.004659   27912 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 20:10:38.004677   27912 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 20:10:38.004732   27912 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 20:10:38.004797   27912 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 20:10:38.004805   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> /etc/ssl/certs/177432.pem
	I1204 20:10:38.004881   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 20:10:38.014138   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 20:10:38.035007   27912 start.go:296] duration metric: took 120.952939ms for postStartSetup
	I1204 20:10:38.035043   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetConfigRaw
	I1204 20:10:38.035625   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetIP
	I1204 20:10:38.038045   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:38.038404   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:38.038431   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:38.038707   27912 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/config.json ...
	I1204 20:10:38.038928   27912 start.go:128] duration metric: took 25.86768393s to createHost
	I1204 20:10:38.038955   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHHostname
	I1204 20:10:38.040921   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:38.041241   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:38.041260   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:38.041384   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHPort
	I1204 20:10:38.041567   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:38.041725   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:38.041870   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHUsername
	I1204 20:10:38.042033   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:10:38.042234   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1204 20:10:38.042247   27912 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 20:10:38.147467   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733343038.125898138
	
	I1204 20:10:38.147487   27912 fix.go:216] guest clock: 1733343038.125898138
	I1204 20:10:38.147494   27912 fix.go:229] Guest: 2024-12-04 20:10:38.125898138 +0000 UTC Remote: 2024-12-04 20:10:38.038942767 +0000 UTC m=+146.136314147 (delta=86.955371ms)
	I1204 20:10:38.147507   27912 fix.go:200] guest clock delta is within tolerance: 86.955371ms
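	The clock check above runs `date +%s.%N` on the guest and compares the result to the host-side timestamp, accepting the ~87ms delta as within tolerance. A small sketch of that comparison (the 2-second tolerance is an illustrative assumption, not minikube's exact constant):

```go
// clockdelta.go - sketch: parse "date +%s.%N" output from the guest and
// compare it to the local clock, flagging drift above a tolerance. The
// 2-second tolerance is an assumption for illustration.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func guestClockDelta(dateOutput string, local time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(dateOutput), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	d := local.Sub(guest)
	if d < 0 {
		d = -d
	}
	return d, nil
}

func main() {
	// Values taken from the log lines above.
	d, err := guestClockDelta("1733343038.125898138", time.Unix(0, 1733343038038942767))
	if err != nil {
		panic(err)
	}
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", d, d < 2*time.Second)
}
```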
	I1204 20:10:38.147511   27912 start.go:83] releasing machines lock for "ha-739930-m03", held for 25.976405222s
	I1204 20:10:38.147527   27912 main.go:141] libmachine: (ha-739930-m03) Calling .DriverName
	I1204 20:10:38.147758   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetIP
	I1204 20:10:38.150388   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:38.150780   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:38.150809   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:38.153038   27912 out.go:177] * Found network options:
	I1204 20:10:38.154623   27912 out.go:177]   - NO_PROXY=192.168.39.183,192.168.39.216
	W1204 20:10:38.155949   27912 proxy.go:119] fail to check proxy env: Error ip not in block
	W1204 20:10:38.155970   27912 proxy.go:119] fail to check proxy env: Error ip not in block
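	The two warnings above come from checking whether the node IP is covered by a NO_PROXY entry: a literal IP compares directly, while an entry such as a CIDR block needs a containment check, and a bare IP that is not in any block produces this "ip not in block" note. A small generic sketch of that check (not the driver's proxy.go):

```go
// noproxy.go - sketch: decide whether an IP is covered by a NO_PROXY list,
// treating entries as either literal IPs or CIDR blocks. Generic illustration.
package main

import (
	"fmt"
	"net"
	"strings"
)

func ipInNoProxy(ip string, noProxy string) bool {
	target := net.ParseIP(ip)
	if target == nil {
		return false
	}
	for _, entry := range strings.Split(noProxy, ",") {
		entry = strings.TrimSpace(entry)
		if entry == "" {
			continue
		}
		if _, block, err := net.ParseCIDR(entry); err == nil {
			if block.Contains(target) {
				return true
			}
			continue
		}
		if net.ParseIP(entry) != nil && entry == ip {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(ipInNoProxy("192.168.39.176", "192.168.39.183,192.168.39.216")) // false
	fmt.Println(ipInNoProxy("192.168.39.176", "192.168.39.0/24"))               // true
}
```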
	I1204 20:10:38.155981   27912 main.go:141] libmachine: (ha-739930-m03) Calling .DriverName
	I1204 20:10:38.156494   27912 main.go:141] libmachine: (ha-739930-m03) Calling .DriverName
	I1204 20:10:38.156668   27912 main.go:141] libmachine: (ha-739930-m03) Calling .DriverName
	I1204 20:10:38.156762   27912 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 20:10:38.156817   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHHostname
	W1204 20:10:38.156874   27912 proxy.go:119] fail to check proxy env: Error ip not in block
	W1204 20:10:38.156896   27912 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 20:10:38.156981   27912 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 20:10:38.157003   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHHostname
	I1204 20:10:38.159414   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:38.159669   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:38.159823   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:38.159847   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:38.159966   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHPort
	I1204 20:10:38.160094   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:38.160122   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:38.160127   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:38.160279   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHPort
	I1204 20:10:38.160293   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHUsername
	I1204 20:10:38.160410   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03/id_rsa Username:docker}
	I1204 20:10:38.160424   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:38.160525   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHUsername
	I1204 20:10:38.160650   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03/id_rsa Username:docker}
	I1204 20:10:38.394150   27912 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 20:10:38.401145   27912 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 20:10:38.401209   27912 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 20:10:38.417195   27912 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 20:10:38.417223   27912 start.go:495] detecting cgroup driver to use...
	I1204 20:10:38.417296   27912 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 20:10:38.435131   27912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 20:10:38.448563   27912 docker.go:217] disabling cri-docker service (if available) ...
	I1204 20:10:38.448618   27912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 20:10:38.461725   27912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 20:10:38.474727   27912 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 20:10:38.588798   27912 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 20:10:38.745587   27912 docker.go:233] disabling docker service ...
	I1204 20:10:38.745653   27912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 20:10:38.759235   27912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 20:10:38.771608   27912 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 20:10:38.877832   27912 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 20:10:38.982502   27912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 20:10:38.995491   27912 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 20:10:39.012043   27912 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 20:10:39.012100   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:10:39.021299   27912 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 20:10:39.021358   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:10:39.030541   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:10:39.039631   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:10:39.048551   27912 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 20:10:39.058773   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:10:39.068061   27912 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:10:39.083733   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:10:39.092600   27912 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 20:10:39.101297   27912 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 20:10:39.101340   27912 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 20:10:39.113156   27912 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 20:10:39.122303   27912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 20:10:39.227598   27912 ssh_runner.go:195] Run: sudo systemctl restart crio
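	The configuration pass above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) with a handful of sed expressions, then reloads systemd and restarts CRI-O. A sketch grouping the core edits into one ordered list of remote commands (an SSH runner is assumed; here the commands are only printed, and the sysctl/br_netfilter handling from the log is omitted):

```go
// crioconf.go - sketch: the in-place edits applied to
// /etc/crio/crio.conf.d/02-crio.conf before restarting CRI-O, mirroring the
// sed commands in the log. Execution over the node's SSH runner is assumed.
package main

import "fmt"

func crioConfigCommands(pauseImage, cgroupDriver string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupDriver, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		`sudo rm -rf /etc/cni/net.mk`,
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
}

func main() {
	for _, c := range crioConfigCommands("registry.k8s.io/pause:3.10", "cgroupfs") {
		fmt.Println("would run over SSH:", c)
	}
}
```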
	I1204 20:10:39.312250   27912 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 20:10:39.312323   27912 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 20:10:39.316600   27912 start.go:563] Will wait 60s for crictl version
	I1204 20:10:39.316650   27912 ssh_runner.go:195] Run: which crictl
	I1204 20:10:39.320258   27912 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 20:10:39.357732   27912 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 20:10:39.357795   27912 ssh_runner.go:195] Run: crio --version
	I1204 20:10:39.390225   27912 ssh_runner.go:195] Run: crio --version
	I1204 20:10:39.419008   27912 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 20:10:39.420400   27912 out.go:177]   - env NO_PROXY=192.168.39.183
	I1204 20:10:39.421790   27912 out.go:177]   - env NO_PROXY=192.168.39.183,192.168.39.216
	I1204 20:10:39.423169   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetIP
	I1204 20:10:39.425979   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:39.426437   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:39.426466   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:39.426672   27912 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1204 20:10:39.431086   27912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 20:10:39.443488   27912 mustload.go:65] Loading cluster: ha-739930
	I1204 20:10:39.443719   27912 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:10:39.443987   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:10:39.444059   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:10:39.459062   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36859
	I1204 20:10:39.459454   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:10:39.459962   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:10:39.459982   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:10:39.460287   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:10:39.460468   27912 main.go:141] libmachine: (ha-739930) Calling .GetState
	I1204 20:10:39.462100   27912 host.go:66] Checking if "ha-739930" exists ...
	I1204 20:10:39.462434   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:10:39.462472   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:10:39.476580   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34581
	I1204 20:10:39.476947   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:10:39.477280   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:10:39.477302   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:10:39.477596   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:10:39.477759   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:10:39.477901   27912 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930 for IP: 192.168.39.176
	I1204 20:10:39.477913   27912 certs.go:194] generating shared ca certs ...
	I1204 20:10:39.477926   27912 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:10:39.478032   27912 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 20:10:39.478067   27912 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 20:10:39.478076   27912 certs.go:256] generating profile certs ...
	I1204 20:10:39.478140   27912 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.key
	I1204 20:10:39.478162   27912 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.58072db8
	I1204 20:10:39.478183   27912 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.58072db8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.183 192.168.39.216 192.168.39.176 192.168.39.254]
	I1204 20:10:39.647686   27912 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.58072db8 ...
	I1204 20:10:39.647712   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.58072db8: {Name:mka45902bb26beb0e72f217dc87741ab3309d928 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:10:39.647887   27912 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.58072db8 ...
	I1204 20:10:39.647910   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.58072db8: {Name:mk0280d80935ba52cb98acc5d6236d25a3a3095d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:10:39.648008   27912 certs.go:381] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.58072db8 -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt
	I1204 20:10:39.648187   27912 certs.go:385] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.58072db8 -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key
	I1204 20:10:39.648361   27912 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key
	I1204 20:10:39.648383   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1204 20:10:39.648403   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1204 20:10:39.648422   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1204 20:10:39.648440   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1204 20:10:39.648458   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1204 20:10:39.648475   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1204 20:10:39.648493   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1204 20:10:39.663476   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1204 20:10:39.663545   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 20:10:39.663584   27912 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 20:10:39.663595   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 20:10:39.663616   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 20:10:39.663649   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 20:10:39.663681   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 20:10:39.663737   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 20:10:39.663769   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:10:39.663786   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem -> /usr/share/ca-certificates/17743.pem
	I1204 20:10:39.663805   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> /usr/share/ca-certificates/177432.pem
	I1204 20:10:39.663843   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:10:39.666431   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:10:39.666764   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:10:39.666781   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:10:39.666946   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:10:39.667122   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:10:39.667283   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:10:39.667442   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:10:39.739814   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1204 20:10:39.744522   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1204 20:10:39.755922   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1204 20:10:39.759927   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1204 20:10:39.770702   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1204 20:10:39.775183   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1204 20:10:39.787784   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1204 20:10:39.792674   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1204 20:10:39.805368   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1204 20:10:39.809503   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1204 20:10:39.828088   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1204 20:10:39.832824   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1204 20:10:39.844859   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 20:10:39.869334   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 20:10:39.893785   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 20:10:39.916818   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 20:10:39.939176   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1204 20:10:39.961163   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 20:10:39.983006   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 20:10:40.005681   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1204 20:10:40.028546   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 20:10:40.051809   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 20:10:40.074413   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 20:10:40.097808   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1204 20:10:40.113924   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1204 20:10:40.131147   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1204 20:10:40.149216   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1204 20:10:40.166655   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1204 20:10:40.182489   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1204 20:10:40.200001   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
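[editor's note] The lines above distribute the certificate material a control-plane member needs: the minikube CA, the profile's apiserver and proxy-client certs, and the sa/front-proxy/etcd keys, each written to /var/lib/minikube/certs (and the CAs to /usr/share/ca-certificates) over SSH. A minimal sketch of one such copy, assuming a plain scp binary and the host/key shown in the ssh client line of the log; the real ssh_runner streams the file over its own SSH session with sudo, which this sketch does not reproduce:

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    // copyCert pushes a local certificate file to a remote path, mirroring the
    // "scp <local> --> <remote>" steps in the log above. Host and key path are
    // taken from the log's ssh client line; the use of the scp binary (and the
    // lack of sudo) is a simplification for illustration only.
    func copyCert(local, remote string) error {
    	cmd := exec.Command("scp",
    		"-i", "/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa",
    		local,
    		fmt.Sprintf("docker@192.168.39.183:%s", remote))
    	if out, err := cmd.CombinedOutput(); err != nil {
    		return fmt.Errorf("scp %s: %v: %s", local, err, out)
    	}
    	return nil
    }

    func main() {
    	if err := copyCert("ca.crt", "/var/lib/minikube/certs/ca.crt"); err != nil {
    		log.Fatal(err)
    	}
    }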
	I1204 20:10:40.221223   27912 ssh_runner.go:195] Run: openssl version
	I1204 20:10:40.226405   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 20:10:40.235863   27912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:10:40.239603   27912 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:10:40.239672   27912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:10:40.245186   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 20:10:40.256188   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 20:10:40.266724   27912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 20:10:40.271086   27912 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 20:10:40.271119   27912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 20:10:40.276304   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 20:10:40.286222   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 20:10:40.297060   27912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 20:10:40.301192   27912 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 20:10:40.301236   27912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 20:10:40.307282   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
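[editor's note] After installing each CA under /usr/share/ca-certificates, the runner links it into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0, 51391683.0 and 3ec20f2e.0 above), which is how OpenSSL's hash-based lookup finds trusted CAs. A sketch of the same hash-and-symlink step, assuming openssl on PATH and permission to write /etc/ssl/certs:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // linkByHash recreates the "openssl x509 -hash -noout -in" plus "ln -fs"
    // pattern from the log: compute the subject hash of a CA file and symlink
    // /etc/ssl/certs/<hash>.0 to it.
    func linkByHash(pem string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", pem, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	_ = os.Remove(link) // replace any existing link, like ln -fs
    	return os.Symlink(pem, link)
    }

    func main() {
    	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }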
	I1204 20:10:40.317487   27912 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 20:10:40.320982   27912 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1204 20:10:40.321045   27912 kubeadm.go:934] updating node {m03 192.168.39.176 8443 v1.31.2 crio true true} ...
	I1204 20:10:40.321144   27912 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-739930-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.176
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 20:10:40.321175   27912 kube-vip.go:115] generating kube-vip config ...
	I1204 20:10:40.321208   27912 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1204 20:10:40.335360   27912 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1204 20:10:40.335431   27912 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
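[editor's note] The manifest above is rendered by kube-vip.go from the cluster's HA settings: the virtual IP 192.168.39.254, API server port 8443 and ARP leader election, with control-plane load balancing switched on once the ip_vs modules are loaded. A trimmed, illustrative rendering of the same idea with text/template; only a few of the env entries are kept, and the struct and template names are invented for the sketch:

    package main

    import (
    	"os"
    	"text/template"
    )

    // A cut-down stand-in for the kube-vip static pod manifest above: only the
    // per-cluster fields (VIP, interface, port) are templated.
    const kubeVIPTmpl = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      containers:
      - name: kube-vip
        image: ghcr.io/kube-vip/kube-vip:v0.8.6
        args: ["manager"]
        env:
        - name: address
          value: "{{ .VIP }}"
        - name: vip_interface
          value: "{{ .Interface }}"
        - name: port
          value: "{{ .Port }}"
      hostNetwork: true
    `

    type vipConfig struct {
    	VIP       string
    	Interface string
    	Port      string
    }

    func main() {
    	t := template.Must(template.New("kube-vip").Parse(kubeVIPTmpl))
    	// Values taken from the log: the HA virtual IP and API server port.
    	_ = t.Execute(os.Stdout, vipConfig{VIP: "192.168.39.254", Interface: "eth0", Port: "8443"})
    }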
	I1204 20:10:40.335468   27912 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 20:10:40.344356   27912 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1204 20:10:40.344387   27912 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1204 20:10:40.352481   27912 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1204 20:10:40.352490   27912 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1204 20:10:40.352500   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1204 20:10:40.352520   27912 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1204 20:10:40.352529   27912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 20:10:40.352538   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1204 20:10:40.352555   27912 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1204 20:10:40.352614   27912 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1204 20:10:40.357211   27912 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1204 20:10:40.357232   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1204 20:10:40.373861   27912 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1204 20:10:40.373888   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1204 20:10:40.393917   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1204 20:10:40.394019   27912 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1204 20:10:40.435438   27912 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1204 20:10:40.435480   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
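[editor's note] With no binaries present on the new node, kubeadm, kubectl and kubelet come from the local cache, and when the cache is cold they are fetched from dl.k8s.io with a checksum= query pointing at the published .sha256 file (binary.go:74 above) so the download can be verified. A small sketch that reproduces those URLs:

    package main

    import "fmt"

    // releaseURL rebuilds the download URLs shown in the log: the binary plus a
    // checksum= query referencing its published .sha256 file.
    func releaseURL(version, binary string) string {
    	base := fmt.Sprintf("https://dl.k8s.io/release/%s/bin/linux/amd64/%s", version, binary)
    	return fmt.Sprintf("%s?checksum=file:%s.sha256", base, base)
    }

    func main() {
    	for _, b := range []string{"kubeadm", "kubectl", "kubelet"} {
    		fmt.Println(releaseURL("v1.31.2", b))
    	}
    }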
	I1204 20:10:41.204864   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1204 20:10:41.214084   27912 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1204 20:10:41.230130   27912 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 20:10:41.245590   27912 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1204 20:10:41.261184   27912 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1204 20:10:41.264917   27912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 20:10:41.276834   27912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 20:10:41.407860   27912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 20:10:41.425834   27912 host.go:66] Checking if "ha-739930" exists ...
	I1204 20:10:41.426358   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:10:41.426432   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:10:41.444259   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39271
	I1204 20:10:41.444841   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:10:41.445793   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:10:41.445819   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:10:41.446152   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:10:41.446372   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:10:41.446554   27912 start.go:317] joinCluster: &{Name:ha-739930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cluster
Name:ha-739930 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-
dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 20:10:41.446705   27912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1204 20:10:41.446730   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:10:41.449938   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:10:41.450354   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:10:41.450382   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:10:41.450525   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:10:41.450704   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:10:41.450893   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:10:41.451051   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:10:41.603198   27912 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 20:10:41.603245   27912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rsc6s7.pvvve9xxbfoucm3c --discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-739930-m03 --control-plane --apiserver-advertise-address=192.168.39.176 --apiserver-bind-port=8443"
	I1204 20:11:02.285051   27912 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rsc6s7.pvvve9xxbfoucm3c --discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-739930-m03 --control-plane --apiserver-advertise-address=192.168.39.176 --apiserver-bind-port=8443": (20.681780468s)
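[editor's note] The ~21 s step above is the actual control-plane join: m03 reaches the cluster through control-plane.minikube.internal:8443 (mapped in /etc/hosts to the 192.168.39.254 VIP) with a freshly minted bootstrap token and the CA cert hash, pinning its own advertise address. A sketch assembling that command line from the values printed in the log; the token and hash are the ones shown above and are only illustrative here:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // joinCommand assembles the control-plane join invocation seen in the log.
    // In a real run the token and discovery hash come from
    // "kubeadm token create --print-join-command" on an existing member.
    func joinCommand(token, caHash, nodeName, advertiseIP string) string {
    	parts := []string{
    		"kubeadm join control-plane.minikube.internal:8443",
    		"--token " + token,
    		"--discovery-token-ca-cert-hash sha256:" + caHash,
    		"--ignore-preflight-errors=all",
    		"--cri-socket unix:///var/run/crio/crio.sock",
    		"--node-name=" + nodeName,
    		"--control-plane",
    		"--apiserver-advertise-address=" + advertiseIP,
    		"--apiserver-bind-port=8443",
    	}
    	return strings.Join(parts, " ")
    }

    func main() {
    	fmt.Println(joinCommand(
    		"rsc6s7.pvvve9xxbfoucm3c",
    		"a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90",
    		"ha-739930-m03",
    		"192.168.39.176"))
    }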
	I1204 20:11:02.285099   27912 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1204 20:11:02.929343   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-739930-m03 minikube.k8s.io/updated_at=2024_12_04T20_11_02_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59 minikube.k8s.io/name=ha-739930 minikube.k8s.io/primary=false
	I1204 20:11:03.053541   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-739930-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1204 20:11:03.177213   27912 start.go:319] duration metric: took 21.7306554s to joinCluster
	I1204 20:11:03.177299   27912 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 20:11:03.177647   27912 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:11:03.178583   27912 out.go:177] * Verifying Kubernetes components...
	I1204 20:11:03.179869   27912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 20:11:03.436285   27912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 20:11:03.491544   27912 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 20:11:03.491892   27912 kapi.go:59] client config for ha-739930: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.crt", KeyFile:"/home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.key", CAFile:"/home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1204 20:11:03.491978   27912 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.183:8443
	I1204 20:11:03.492270   27912 node_ready.go:35] waiting up to 6m0s for node "ha-739930-m03" to be "Ready" ...
	I1204 20:11:03.492369   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:03.492380   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:03.492391   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:03.492400   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:03.496740   27912 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 20:11:03.992695   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:03.992717   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:03.992725   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:03.992729   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:03.996010   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:04.493230   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:04.493255   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:04.493265   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:04.493272   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:04.496716   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:04.992539   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:04.992561   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:04.992571   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:04.992577   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:04.995936   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:05.493273   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:05.493300   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:05.493311   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:05.493317   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:05.497413   27912 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 20:11:05.497897   27912 node_ready.go:53] node "ha-739930-m03" has status "Ready":"False"
	I1204 20:11:05.993362   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:05.993385   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:05.993392   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:05.993397   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:05.996675   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:06.492587   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:06.492610   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:06.492620   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:06.492627   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:06.495773   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:06.993310   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:06.993331   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:06.993339   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:06.993343   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:06.996864   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:07.492704   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:07.492741   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:07.492750   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:07.492754   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:07.496418   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:07.993375   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:07.993397   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:07.993404   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:07.993414   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:07.996601   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:07.997248   27912 node_ready.go:53] node "ha-739930-m03" has status "Ready":"False"
	I1204 20:11:08.492707   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:08.492739   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:08.492752   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:08.492757   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:08.498736   27912 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 20:11:08.992522   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:08.992546   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:08.992554   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:08.992559   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:08.996681   27912 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 20:11:09.492442   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:09.492462   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:09.492470   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:09.492475   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:09.496143   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:09.992900   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:09.992932   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:09.992939   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:09.992944   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:09.996453   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:10.492481   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:10.492499   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:10.492507   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:10.492513   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:10.496234   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:10.497174   27912 node_ready.go:53] node "ha-739930-m03" has status "Ready":"False"
	I1204 20:11:10.992502   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:10.992525   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:10.992532   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:10.992553   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:10.995639   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:11.493014   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:11.493034   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:11.493042   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:11.493045   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:11.496066   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:11.992460   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:11.992481   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:11.992488   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:11.992492   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:11.995782   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:12.492536   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:12.492559   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:12.492567   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:12.492575   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:12.496512   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:12.993486   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:12.993507   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:12.993515   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:12.993521   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:12.996929   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:12.997503   27912 node_ready.go:53] node "ha-739930-m03" has status "Ready":"False"
	I1204 20:11:13.492705   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:13.492728   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:13.492735   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:13.492739   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:13.495958   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:13.993195   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:13.993235   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:13.993243   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:13.993248   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:13.996458   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:14.492667   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:14.492687   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:14.492695   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:14.492700   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:14.496760   27912 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 20:11:14.992634   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:14.992657   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:14.992665   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:14.992668   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:14.996174   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:15.492623   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:15.492645   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:15.492651   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:15.492656   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:15.496189   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:15.496993   27912 node_ready.go:53] node "ha-739930-m03" has status "Ready":"False"
	I1204 20:11:15.993412   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:15.993432   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:15.993438   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:15.993442   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:15.996343   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:16.492477   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:16.492500   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:16.492508   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:16.492512   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:16.495796   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:16.993504   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:16.993533   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:16.993545   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:16.993552   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:16.996589   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:17.492614   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:17.492637   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:17.492649   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:17.492654   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:17.496032   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:17.992928   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:17.992951   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:17.992958   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:17.992961   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:17.996749   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:17.997385   27912 node_ready.go:53] node "ha-739930-m03" has status "Ready":"False"
	I1204 20:11:18.492596   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:18.492617   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:18.492625   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:18.492629   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:18.495562   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:18.992579   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:18.992604   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:18.992612   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:18.992616   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:18.996070   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:19.493093   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:19.493113   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:19.493121   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:19.493126   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:19.496694   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:19.992762   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:19.992788   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:19.992796   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:19.992802   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:19.996757   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:19.997645   27912 node_ready.go:53] node "ha-739930-m03" has status "Ready":"False"
	I1204 20:11:20.493018   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:20.493038   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:20.493045   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:20.493049   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:20.496165   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:20.993181   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:20.993203   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:20.993211   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:20.993214   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:20.996266   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:21.493006   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:21.493035   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.493044   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.493050   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.496694   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:21.497703   27912 node_ready.go:49] node "ha-739930-m03" has status "Ready":"True"
	I1204 20:11:21.497723   27912 node_ready.go:38] duration metric: took 18.005431822s for node "ha-739930-m03" to be "Ready" ...
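[editor's note] The long run of GET /api/v1/nodes/ha-739930-m03 requests above is minikube polling the node object roughly every 500 ms until its Ready condition flips to True, which here took about 18 s. An equivalent, illustrative wait written against client-go; the kubeconfig path is the one from the log, while the function name, timeout and error wording are assumptions for the sketch:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the node until its Ready condition is True, the same
    // loop the round_trippers GETs above implement by hand.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500 ms cadence in the log
    	}
    	return fmt.Errorf("node %q not Ready within %s", name, timeout)
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19985-10581/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	if err := waitNodeReady(cs, "ha-739930-m03", 6*time.Minute); err != nil {
    		panic(err)
    	}
    }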
	I1204 20:11:21.497731   27912 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 20:11:21.497795   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1204 20:11:21.497804   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.497811   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.497815   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.504465   27912 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1204 20:11:21.510955   27912 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7kbgr" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:21.511029   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-7kbgr
	I1204 20:11:21.511038   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.511050   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.511058   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.514034   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:21.514600   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:11:21.514614   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.514622   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.514627   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.517241   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:21.517672   27912 pod_ready.go:93] pod "coredns-7c65d6cfc9-7kbgr" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:21.517688   27912 pod_ready.go:82] duration metric: took 6.709809ms for pod "coredns-7c65d6cfc9-7kbgr" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:21.517707   27912 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8kztf" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:21.517765   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-8kztf
	I1204 20:11:21.517772   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.517781   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.517791   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.520563   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:21.521278   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:11:21.521296   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.521307   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.521313   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.523869   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:21.524405   27912 pod_ready.go:93] pod "coredns-7c65d6cfc9-8kztf" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:21.524426   27912 pod_ready.go:82] duration metric: took 6.708809ms for pod "coredns-7c65d6cfc9-8kztf" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:21.524435   27912 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:21.524489   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-739930
	I1204 20:11:21.524498   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.524504   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.524510   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.526682   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:21.527365   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:11:21.527393   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.527401   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.527410   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.530023   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:21.530721   27912 pod_ready.go:93] pod "etcd-ha-739930" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:21.530744   27912 pod_ready.go:82] duration metric: took 6.30261ms for pod "etcd-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:21.530758   27912 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:21.530832   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-739930-m02
	I1204 20:11:21.530844   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.530856   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.530866   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.533485   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:21.534074   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:11:21.534089   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.534098   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.534104   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.536315   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:21.536771   27912 pod_ready.go:93] pod "etcd-ha-739930-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:21.536789   27912 pod_ready.go:82] duration metric: took 6.023339ms for pod "etcd-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:21.536798   27912 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-739930-m03" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:21.693086   27912 request.go:632] Waited for 156.229013ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-739930-m03
	I1204 20:11:21.693178   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-739930-m03
	I1204 20:11:21.693187   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.693199   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.693211   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.696805   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:21.893066   27912 request.go:632] Waited for 195.292666ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:21.893122   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:21.893140   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.893148   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.893151   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.896289   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:21.896776   27912 pod_ready.go:93] pod "etcd-ha-739930-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:21.896798   27912 pod_ready.go:82] duration metric: took 359.993172ms for pod "etcd-ha-739930-m03" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:21.896822   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:22.094080   27912 request.go:632] Waited for 197.155628ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-739930
	I1204 20:11:22.094159   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-739930
	I1204 20:11:22.094178   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:22.094195   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:22.094201   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:22.097388   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:22.293809   27912 request.go:632] Waited for 194.988533ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:11:22.293864   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:11:22.293871   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:22.293881   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:22.293886   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:22.297036   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:22.297688   27912 pod_ready.go:93] pod "kube-apiserver-ha-739930" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:22.297708   27912 pod_ready.go:82] duration metric: took 400.873563ms for pod "kube-apiserver-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:22.297721   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:22.493772   27912 request.go:632] Waited for 195.970884ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-739930-m02
	I1204 20:11:22.493834   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-739930-m02
	I1204 20:11:22.493840   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:22.493847   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:22.493850   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:22.497525   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:22.693745   27912 request.go:632] Waited for 195.318737ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:11:22.693830   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:11:22.693837   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:22.693844   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:22.693849   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:22.697438   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:22.697941   27912 pod_ready.go:93] pod "kube-apiserver-ha-739930-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:22.697959   27912 pod_ready.go:82] duration metric: took 400.231011ms for pod "kube-apiserver-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:22.697969   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-739930-m03" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:22.894031   27912 request.go:632] Waited for 195.997225ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-739930-m03
	I1204 20:11:22.894100   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-739930-m03
	I1204 20:11:22.894105   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:22.894113   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:22.894119   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:22.896928   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:23.093056   27912 request.go:632] Waited for 195.290507ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:23.093109   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:23.093116   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:23.093125   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:23.093131   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:23.096071   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:23.096675   27912 pod_ready.go:93] pod "kube-apiserver-ha-739930-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:23.096695   27912 pod_ready.go:82] duration metric: took 398.72057ms for pod "kube-apiserver-ha-739930-m03" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:23.096706   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:23.293761   27912 request.go:632] Waited for 196.979038ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-739930
	I1204 20:11:23.293857   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-739930
	I1204 20:11:23.293863   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:23.293870   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:23.293877   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:23.297313   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:23.493595   27912 request.go:632] Waited for 195.358893ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:11:23.493645   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:11:23.493652   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:23.493662   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:23.493668   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:23.496860   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:23.497431   27912 pod_ready.go:93] pod "kube-controller-manager-ha-739930" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:23.497447   27912 pod_ready.go:82] duration metric: took 400.733171ms for pod "kube-controller-manager-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:23.497457   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:23.693609   27912 request.go:632] Waited for 196.087422ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-739930-m02
	I1204 20:11:23.693665   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-739930-m02
	I1204 20:11:23.693670   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:23.693677   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:23.693681   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:23.697816   27912 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 20:11:23.893073   27912 request.go:632] Waited for 194.284611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:11:23.893134   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:11:23.893157   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:23.893173   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:23.893179   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:23.896273   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:23.896905   27912 pod_ready.go:93] pod "kube-controller-manager-ha-739930-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:23.896921   27912 pod_ready.go:82] duration metric: took 399.455915ms for pod "kube-controller-manager-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:23.896931   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-739930-m03" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:24.094047   27912 request.go:632] Waited for 197.05537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-739930-m03
	I1204 20:11:24.094114   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-739930-m03
	I1204 20:11:24.094120   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:24.094128   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:24.094138   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:24.097347   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:24.293333   27912 request.go:632] Waited for 195.221509ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:24.293408   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:24.293418   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:24.293429   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:24.293439   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:24.296348   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:24.296803   27912 pod_ready.go:93] pod "kube-controller-manager-ha-739930-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:24.296819   27912 pod_ready.go:82] duration metric: took 399.882093ms for pod "kube-controller-manager-ha-739930-m03" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:24.296828   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gtw7d" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:24.493904   27912 request.go:632] Waited for 197.016726ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gtw7d
	I1204 20:11:24.493955   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gtw7d
	I1204 20:11:24.493960   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:24.493967   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:24.493971   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:24.497694   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:24.693075   27912 request.go:632] Waited for 194.571912ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:11:24.693130   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:11:24.693135   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:24.693142   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:24.693146   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:24.696302   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:24.696899   27912 pod_ready.go:93] pod "kube-proxy-gtw7d" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:24.696919   27912 pod_ready.go:82] duration metric: took 400.084608ms for pod "kube-proxy-gtw7d" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:24.696928   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-r4895" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:24.893931   27912 request.go:632] Waited for 196.931451ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r4895
	I1204 20:11:24.894022   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r4895
	I1204 20:11:24.894035   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:24.894043   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:24.894046   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:24.897046   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:25.093243   27912 request.go:632] Waited for 195.305694ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:25.093305   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:25.093310   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:25.093318   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:25.093321   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:25.096337   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:25.096835   27912 pod_ready.go:93] pod "kube-proxy-r4895" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:25.096854   27912 pod_ready.go:82] duration metric: took 399.920087ms for pod "kube-proxy-r4895" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:25.096864   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tlhfv" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:25.294085   27912 request.go:632] Waited for 197.134763ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tlhfv
	I1204 20:11:25.294155   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tlhfv
	I1204 20:11:25.294164   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:25.294174   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:25.294181   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:25.297688   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:25.493811   27912 request.go:632] Waited for 195.37479ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:11:25.493896   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:11:25.493902   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:25.493910   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:25.493914   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:25.497035   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:25.497776   27912 pod_ready.go:93] pod "kube-proxy-tlhfv" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:25.497796   27912 pod_ready.go:82] duration metric: took 400.925065ms for pod "kube-proxy-tlhfv" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:25.497810   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:25.693786   27912 request.go:632] Waited for 195.910848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-739930
	I1204 20:11:25.693855   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-739930
	I1204 20:11:25.693860   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:25.693866   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:25.693870   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:25.697283   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:25.893336   27912 request.go:632] Waited for 195.363737ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:11:25.893392   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:11:25.893398   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:25.893407   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:25.893417   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:25.896883   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:25.897527   27912 pod_ready.go:93] pod "kube-scheduler-ha-739930" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:25.897547   27912 pod_ready.go:82] duration metric: took 399.728095ms for pod "kube-scheduler-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:25.897560   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:26.093716   27912 request.go:632] Waited for 196.07568ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-739930-m02
	I1204 20:11:26.093770   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-739930-m02
	I1204 20:11:26.093775   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:26.093783   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:26.093787   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:26.097490   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:26.293677   27912 request.go:632] Waited for 195.380903ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:11:26.293724   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:11:26.293729   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:26.293736   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:26.293740   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:26.296374   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:26.297059   27912 pod_ready.go:93] pod "kube-scheduler-ha-739930-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:26.297083   27912 pod_ready.go:82] duration metric: took 399.512498ms for pod "kube-scheduler-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:26.297096   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-739930-m03" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:26.493619   27912 request.go:632] Waited for 196.449368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-739930-m03
	I1204 20:11:26.493679   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-739930-m03
	I1204 20:11:26.493687   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:26.493698   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:26.493708   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:26.496613   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:26.693570   27912 request.go:632] Waited for 196.314375ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:26.693652   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:26.693664   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:26.693674   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:26.693683   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:26.696474   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:26.697001   27912 pod_ready.go:93] pod "kube-scheduler-ha-739930-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:26.697020   27912 pod_ready.go:82] duration metric: took 399.916866ms for pod "kube-scheduler-ha-739930-m03" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:26.697032   27912 pod_ready.go:39] duration metric: took 5.199290508s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 20:11:26.697048   27912 api_server.go:52] waiting for apiserver process to appear ...
	I1204 20:11:26.697102   27912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 20:11:26.712884   27912 api_server.go:72] duration metric: took 23.535549754s to wait for apiserver process to appear ...
	I1204 20:11:26.712900   27912 api_server.go:88] waiting for apiserver healthz status ...
	I1204 20:11:26.712916   27912 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I1204 20:11:26.717076   27912 api_server.go:279] https://192.168.39.183:8443/healthz returned 200:
	ok
	I1204 20:11:26.717125   27912 round_trippers.go:463] GET https://192.168.39.183:8443/version
	I1204 20:11:26.717134   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:26.717141   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:26.717145   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:26.718054   27912 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1204 20:11:26.718141   27912 api_server.go:141] control plane version: v1.31.2
	I1204 20:11:26.718158   27912 api_server.go:131] duration metric: took 5.25178ms to wait for apiserver health ...
	I1204 20:11:26.718165   27912 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 20:11:26.893379   27912 request.go:632] Waited for 175.13636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1204 20:11:26.893453   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1204 20:11:26.893459   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:26.893466   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:26.893472   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:26.899023   27912 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 20:11:26.905500   27912 system_pods.go:59] 24 kube-system pods found
	I1204 20:11:26.905525   27912 system_pods.go:61] "coredns-7c65d6cfc9-7kbgr" [662019c2-29e8-4437-8b14-f9fbf1268d03] Running
	I1204 20:11:26.905530   27912 system_pods.go:61] "coredns-7c65d6cfc9-8kztf" [40363110-9dbd-47ae-8aec-70630543d005] Running
	I1204 20:11:26.905534   27912 system_pods.go:61] "etcd-ha-739930" [35305e9d-e464-498a-b2a7-6008dcaaf04c] Running
	I1204 20:11:26.905538   27912 system_pods.go:61] "etcd-ha-739930-m02" [b870f77d-f65a-4d00-b8da-27bf2f696d35] Running
	I1204 20:11:26.905541   27912 system_pods.go:61] "etcd-ha-739930-m03" [343495fb-dbd2-4eab-a236-40e2be521a17] Running
	I1204 20:11:26.905545   27912 system_pods.go:61] "kindnet-8wsgw" [d8bc54cd-d100-43fa-bda8-28ee9b58b947] Running
	I1204 20:11:26.905548   27912 system_pods.go:61] "kindnet-d2rvr" [7ab1c96e-13c6-40c3-affc-4a306e695a9b] Running
	I1204 20:11:26.905550   27912 system_pods.go:61] "kindnet-z6v65" [233b2af5-60f4-4f70-a63f-f7238cfbc55c] Running
	I1204 20:11:26.905554   27912 system_pods.go:61] "kube-apiserver-ha-739930" [d1943e08-b292-4551-bcc7-a14adc4ec336] Running
	I1204 20:11:26.905558   27912 system_pods.go:61] "kube-apiserver-ha-739930-m02" [b05a68fa-e419-43b6-ae14-08dd1635b446] Running
	I1204 20:11:26.905564   27912 system_pods.go:61] "kube-apiserver-ha-739930-m03" [eb40f9aa-f4a4-4222-b470-615e8f746fd2] Running
	I1204 20:11:26.905569   27912 system_pods.go:61] "kube-controller-manager-ha-739930" [3db9ec12-4c55-4a78-bef1-4f4cf8f38ae0] Running
	I1204 20:11:26.905574   27912 system_pods.go:61] "kube-controller-manager-ha-739930-m02" [01426d54-9156-4288-b9ae-c639167795b4] Running
	I1204 20:11:26.905579   27912 system_pods.go:61] "kube-controller-manager-ha-739930-m03" [57d1436a-59aa-4883-b1a0-e3f823309e4e] Running
	I1204 20:11:26.905588   27912 system_pods.go:61] "kube-proxy-gtw7d" [4481a753-5064-41a6-8f2c-d4710b8ad7bb] Running
	I1204 20:11:26.905593   27912 system_pods.go:61] "kube-proxy-r4895" [565b2768-8e4b-4659-a178-a99d86163b7c] Running
	I1204 20:11:26.905602   27912 system_pods.go:61] "kube-proxy-tlhfv" [2f01e7f6-5af2-490b-8a2c-266e1701c102] Running
	I1204 20:11:26.905607   27912 system_pods.go:61] "kube-scheduler-ha-739930" [cc1e6978-7082-494a-afce-e754a35e9b76] Running
	I1204 20:11:26.905612   27912 system_pods.go:61] "kube-scheduler-ha-739930-m02" [cd7d0a65-99e9-4377-9088-f2d7d7165982] Running
	I1204 20:11:26.905619   27912 system_pods.go:61] "kube-scheduler-ha-739930-m03" [fbc3feca-5ce1-441e-b3e9-1c47930334da] Running
	I1204 20:11:26.905622   27912 system_pods.go:61] "kube-vip-ha-739930" [524e54ee-5407-44c3-a2e4-d029f7e6a003] Running
	I1204 20:11:26.905626   27912 system_pods.go:61] "kube-vip-ha-739930-m02" [77595bf0-7e49-4ead-98b0-e1cc5b8533d7] Running
	I1204 20:11:26.905630   27912 system_pods.go:61] "kube-vip-ha-739930-m03" [596bee4d-c0d5-499e-9e8f-f4b1322d83b3] Running
	I1204 20:11:26.905634   27912 system_pods.go:61] "storage-provisioner" [84dfb457-b91f-4070-aa2a-9fbe4c6dd7c8] Running
	I1204 20:11:26.905640   27912 system_pods.go:74] duration metric: took 187.469575ms to wait for pod list to return data ...
	I1204 20:11:26.905660   27912 default_sa.go:34] waiting for default service account to be created ...
	I1204 20:11:27.093927   27912 request.go:632] Waited for 188.174644ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/default/serviceaccounts
	I1204 20:11:27.093986   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/default/serviceaccounts
	I1204 20:11:27.093991   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:27.093998   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:27.094011   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:27.097761   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:27.097902   27912 default_sa.go:45] found service account: "default"
	I1204 20:11:27.097922   27912 default_sa.go:55] duration metric: took 192.253848ms for default service account to be created ...
	I1204 20:11:27.097933   27912 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 20:11:27.293645   27912 request.go:632] Waited for 195.638628ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1204 20:11:27.293720   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1204 20:11:27.293727   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:27.293736   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:27.293742   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:27.299871   27912 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1204 20:11:27.306654   27912 system_pods.go:86] 24 kube-system pods found
	I1204 20:11:27.306676   27912 system_pods.go:89] "coredns-7c65d6cfc9-7kbgr" [662019c2-29e8-4437-8b14-f9fbf1268d03] Running
	I1204 20:11:27.306682   27912 system_pods.go:89] "coredns-7c65d6cfc9-8kztf" [40363110-9dbd-47ae-8aec-70630543d005] Running
	I1204 20:11:27.306686   27912 system_pods.go:89] "etcd-ha-739930" [35305e9d-e464-498a-b2a7-6008dcaaf04c] Running
	I1204 20:11:27.306689   27912 system_pods.go:89] "etcd-ha-739930-m02" [b870f77d-f65a-4d00-b8da-27bf2f696d35] Running
	I1204 20:11:27.306692   27912 system_pods.go:89] "etcd-ha-739930-m03" [343495fb-dbd2-4eab-a236-40e2be521a17] Running
	I1204 20:11:27.306696   27912 system_pods.go:89] "kindnet-8wsgw" [d8bc54cd-d100-43fa-bda8-28ee9b58b947] Running
	I1204 20:11:27.306699   27912 system_pods.go:89] "kindnet-d2rvr" [7ab1c96e-13c6-40c3-affc-4a306e695a9b] Running
	I1204 20:11:27.306702   27912 system_pods.go:89] "kindnet-z6v65" [233b2af5-60f4-4f70-a63f-f7238cfbc55c] Running
	I1204 20:11:27.306705   27912 system_pods.go:89] "kube-apiserver-ha-739930" [d1943e08-b292-4551-bcc7-a14adc4ec336] Running
	I1204 20:11:27.306709   27912 system_pods.go:89] "kube-apiserver-ha-739930-m02" [b05a68fa-e419-43b6-ae14-08dd1635b446] Running
	I1204 20:11:27.306714   27912 system_pods.go:89] "kube-apiserver-ha-739930-m03" [eb40f9aa-f4a4-4222-b470-615e8f746fd2] Running
	I1204 20:11:27.306719   27912 system_pods.go:89] "kube-controller-manager-ha-739930" [3db9ec12-4c55-4a78-bef1-4f4cf8f38ae0] Running
	I1204 20:11:27.306724   27912 system_pods.go:89] "kube-controller-manager-ha-739930-m02" [01426d54-9156-4288-b9ae-c639167795b4] Running
	I1204 20:11:27.306733   27912 system_pods.go:89] "kube-controller-manager-ha-739930-m03" [57d1436a-59aa-4883-b1a0-e3f823309e4e] Running
	I1204 20:11:27.306742   27912 system_pods.go:89] "kube-proxy-gtw7d" [4481a753-5064-41a6-8f2c-d4710b8ad7bb] Running
	I1204 20:11:27.306748   27912 system_pods.go:89] "kube-proxy-r4895" [565b2768-8e4b-4659-a178-a99d86163b7c] Running
	I1204 20:11:27.306756   27912 system_pods.go:89] "kube-proxy-tlhfv" [2f01e7f6-5af2-490b-8a2c-266e1701c102] Running
	I1204 20:11:27.306762   27912 system_pods.go:89] "kube-scheduler-ha-739930" [cc1e6978-7082-494a-afce-e754a35e9b76] Running
	I1204 20:11:27.306770   27912 system_pods.go:89] "kube-scheduler-ha-739930-m02" [cd7d0a65-99e9-4377-9088-f2d7d7165982] Running
	I1204 20:11:27.306774   27912 system_pods.go:89] "kube-scheduler-ha-739930-m03" [fbc3feca-5ce1-441e-b3e9-1c47930334da] Running
	I1204 20:11:27.306780   27912 system_pods.go:89] "kube-vip-ha-739930" [524e54ee-5407-44c3-a2e4-d029f7e6a003] Running
	I1204 20:11:27.306784   27912 system_pods.go:89] "kube-vip-ha-739930-m02" [77595bf0-7e49-4ead-98b0-e1cc5b8533d7] Running
	I1204 20:11:27.306787   27912 system_pods.go:89] "kube-vip-ha-739930-m03" [596bee4d-c0d5-499e-9e8f-f4b1322d83b3] Running
	I1204 20:11:27.306790   27912 system_pods.go:89] "storage-provisioner" [84dfb457-b91f-4070-aa2a-9fbe4c6dd7c8] Running
	I1204 20:11:27.306796   27912 system_pods.go:126] duration metric: took 208.857473ms to wait for k8s-apps to be running ...
	I1204 20:11:27.306805   27912 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 20:11:27.306853   27912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 20:11:27.321782   27912 system_svc.go:56] duration metric: took 14.969542ms WaitForService to wait for kubelet
	I1204 20:11:27.321804   27912 kubeadm.go:582] duration metric: took 24.144472529s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 20:11:27.321820   27912 node_conditions.go:102] verifying NodePressure condition ...
	I1204 20:11:27.493192   27912 request.go:632] Waited for 171.286703ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes
	I1204 20:11:27.493250   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes
	I1204 20:11:27.493255   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:27.493262   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:27.493266   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:27.497192   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:27.498227   27912 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 20:11:27.498244   27912 node_conditions.go:123] node cpu capacity is 2
	I1204 20:11:27.498254   27912 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 20:11:27.498259   27912 node_conditions.go:123] node cpu capacity is 2
	I1204 20:11:27.498262   27912 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 20:11:27.498265   27912 node_conditions.go:123] node cpu capacity is 2
	I1204 20:11:27.498269   27912 node_conditions.go:105] duration metric: took 176.444491ms to run NodePressure ...
	I1204 20:11:27.498283   27912 start.go:241] waiting for startup goroutines ...
	I1204 20:11:27.498303   27912 start.go:255] writing updated cluster config ...
	I1204 20:11:27.498580   27912 ssh_runner.go:195] Run: rm -f paused
	I1204 20:11:27.549391   27912 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1204 20:11:27.551427   27912 out.go:177] * Done! kubectl is now configured to use "ha-739930" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 04 20:15:17 ha-739930 crio[665]: time="2024-12-04 20:15:17.192688471Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343317192662001,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d9ebbe2d-d43d-4933-b10e-457a9b2b2c41 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 20:15:17 ha-739930 crio[665]: time="2024-12-04 20:15:17.193334391Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=583e6ceb-ceb3-4054-9444-fd96cb02dec7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:15:17 ha-739930 crio[665]: time="2024-12-04 20:15:17.193384455Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=583e6ceb-ceb3-4054-9444-fd96cb02dec7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:15:17 ha-739930 crio[665]: time="2024-12-04 20:15:17.193665161Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c09d55fbc3f943c790def9073b88f01609e4300451bae039e4cd073f0da97f61,PodSandboxId:8470389e19e5b28b50b8fccf3fc3911e02d6a5d228b7739b5d74827a2cda13ad,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733343092537450258,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gg7dr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a1f1ba1f-1720-4b97-a4a1-ab2d0c4cfaa5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92f0436c068d37f00d41a848d30e7457ee048433b86098444bdaf1dac7c4ae50,PodSandboxId:fdd28652924af40713f1cc9921837027bcf2d919bc8a45a3330e7b8e261100e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733342953924941655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7kbgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 662019c2-29e8-4437-8b14-f9fbf1268d03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab16b32e60a7287ff4948151ca59846f512d2a31828295582ecaf061d7dd0cac,PodSandboxId:a639b811aff3be3e7ee462400bb28276bcdce1f970dba591ef29cb5f8ecf55a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733342953880846280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8kztf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
40363110-9dbd-47ae-8aec-70630543d005,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1496ef67bc6f05f97f8da017d26b5ef402354fd4f5cad7354f86ed14b360b13,PodSandboxId:235aa20e54db74e6eee62b6273bd65f067e9293b34b00f86bebbdf24e92c8c12,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733342953787213731,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84dfb457-b91f-4070-aa2a-9fbe4c6dd7c8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f38276fe657c7e64c36f5e7048dd53d1f38f2a70a523fca08ac6aba6639b37e7,PodSandboxId:22f273a6fc170916ed294c18ea089fc5b6007ec66b51d45c95042ab6c43d6a4b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733342941935728144,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8wsgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8bc54cd-d100-43fa-bda8-28ee9b58b947,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8643b775b5352f9000b818ffdccfc9b8d9ce8d3bebf02d3707ef0c598107b627,PodSandboxId:30611e2a6fdccf72efc978dd3ff57b8cb4927095bb0a8cf4b67cc4353243a252,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733342938
754739932,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tlhfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f01e7f6-5af2-490b-8a2c-266e1701c102,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4a22468ef5bdbd7670b4b9d102217e2f59637e4fb99fa6b968fc2f29ad8208b,PodSandboxId:5f8113a27db247d70444c34f598adb4d8920a3f17f8c7f529ee1503205295514,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173334292981
9119605,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e85517d76879ff3f468d156333aefa2d,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:325ac1400e34aa08998a037b7bad43b257bdf9daf9a87fbce57d6eef87a7bef7,PodSandboxId:a0e82c5e83a213c20a332613e67701ddb375a586927ef5e557431138c4f0f2aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733342927393948447,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b071552f9356e83d17c476e03918fe9,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fdab5e7f0c119181d690a0296a5d0d8ba1871661cadaa54b8d022c0a1b668e3,PodSandboxId:83caff9199eb85d80e88c4f8531ac1ec39b66e92e5f3b7f7cb7e960e35c4ea4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733342927337542360,Labels:map[string]string{io.kubernetes.contain
er.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25b5d213282d4e3d0b17f56770f58750,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52571ff875ebe7e2bae93811588ab15bcc178c9e1c0334570224e1b2bd359246,PodSandboxId:91df0913316d5fe6318abd1b00af1f31ce79fcbd082873c64a4aede83b9b139c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733342927317490542,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod
.name: etcd-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af968bcb5bb689c598a55bb96c345514,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2343748d9b3c27471f4dc81bc815b3b7cfa628a41f8708ffaeec870bf0c05f4,PodSandboxId:bccd9e2c068724fdade2d27ef529f8e648d95a17f366b1c7fc771540b909a24c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733342927271139282,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b85df04725e54b66c583c1e4307b02b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=583e6ceb-ceb3-4054-9444-fd96cb02dec7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:15:17 ha-739930 crio[665]: time="2024-12-04 20:15:17.229351740Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=93f02ab6-32f4-40d2-8d9a-57b07f447b35 name=/runtime.v1.RuntimeService/Version
	Dec 04 20:15:17 ha-739930 crio[665]: time="2024-12-04 20:15:17.229426157Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=93f02ab6-32f4-40d2-8d9a-57b07f447b35 name=/runtime.v1.RuntimeService/Version
	Dec 04 20:15:17 ha-739930 crio[665]: time="2024-12-04 20:15:17.230557897Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=808a2425-093c-4cbc-81f5-dc3488cd356f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 20:15:17 ha-739930 crio[665]: time="2024-12-04 20:15:17.231157970Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343317231134420,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=808a2425-093c-4cbc-81f5-dc3488cd356f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 20:15:17 ha-739930 crio[665]: time="2024-12-04 20:15:17.231814643Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6496c782-1533-41b5-aaf2-790cd6e43b8c name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:15:17 ha-739930 crio[665]: time="2024-12-04 20:15:17.231869317Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6496c782-1533-41b5-aaf2-790cd6e43b8c name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:15:17 ha-739930 crio[665]: time="2024-12-04 20:15:17.232089402Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c09d55fbc3f943c790def9073b88f01609e4300451bae039e4cd073f0da97f61,PodSandboxId:8470389e19e5b28b50b8fccf3fc3911e02d6a5d228b7739b5d74827a2cda13ad,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733343092537450258,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gg7dr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a1f1ba1f-1720-4b97-a4a1-ab2d0c4cfaa5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92f0436c068d37f00d41a848d30e7457ee048433b86098444bdaf1dac7c4ae50,PodSandboxId:fdd28652924af40713f1cc9921837027bcf2d919bc8a45a3330e7b8e261100e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733342953924941655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7kbgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 662019c2-29e8-4437-8b14-f9fbf1268d03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab16b32e60a7287ff4948151ca59846f512d2a31828295582ecaf061d7dd0cac,PodSandboxId:a639b811aff3be3e7ee462400bb28276bcdce1f970dba591ef29cb5f8ecf55a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733342953880846280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8kztf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
40363110-9dbd-47ae-8aec-70630543d005,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1496ef67bc6f05f97f8da017d26b5ef402354fd4f5cad7354f86ed14b360b13,PodSandboxId:235aa20e54db74e6eee62b6273bd65f067e9293b34b00f86bebbdf24e92c8c12,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733342953787213731,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84dfb457-b91f-4070-aa2a-9fbe4c6dd7c8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f38276fe657c7e64c36f5e7048dd53d1f38f2a70a523fca08ac6aba6639b37e7,PodSandboxId:22f273a6fc170916ed294c18ea089fc5b6007ec66b51d45c95042ab6c43d6a4b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733342941935728144,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8wsgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8bc54cd-d100-43fa-bda8-28ee9b58b947,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8643b775b5352f9000b818ffdccfc9b8d9ce8d3bebf02d3707ef0c598107b627,PodSandboxId:30611e2a6fdccf72efc978dd3ff57b8cb4927095bb0a8cf4b67cc4353243a252,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733342938
754739932,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tlhfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f01e7f6-5af2-490b-8a2c-266e1701c102,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4a22468ef5bdbd7670b4b9d102217e2f59637e4fb99fa6b968fc2f29ad8208b,PodSandboxId:5f8113a27db247d70444c34f598adb4d8920a3f17f8c7f529ee1503205295514,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173334292981
9119605,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e85517d76879ff3f468d156333aefa2d,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:325ac1400e34aa08998a037b7bad43b257bdf9daf9a87fbce57d6eef87a7bef7,PodSandboxId:a0e82c5e83a213c20a332613e67701ddb375a586927ef5e557431138c4f0f2aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733342927393948447,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b071552f9356e83d17c476e03918fe9,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fdab5e7f0c119181d690a0296a5d0d8ba1871661cadaa54b8d022c0a1b668e3,PodSandboxId:83caff9199eb85d80e88c4f8531ac1ec39b66e92e5f3b7f7cb7e960e35c4ea4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733342927337542360,Labels:map[string]string{io.kubernetes.contain
er.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25b5d213282d4e3d0b17f56770f58750,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52571ff875ebe7e2bae93811588ab15bcc178c9e1c0334570224e1b2bd359246,PodSandboxId:91df0913316d5fe6318abd1b00af1f31ce79fcbd082873c64a4aede83b9b139c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733342927317490542,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod
.name: etcd-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af968bcb5bb689c598a55bb96c345514,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2343748d9b3c27471f4dc81bc815b3b7cfa628a41f8708ffaeec870bf0c05f4,PodSandboxId:bccd9e2c068724fdade2d27ef529f8e648d95a17f366b1c7fc771540b909a24c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733342927271139282,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b85df04725e54b66c583c1e4307b02b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6496c782-1533-41b5-aaf2-790cd6e43b8c name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:15:17 ha-739930 crio[665]: time="2024-12-04 20:15:17.267038965Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b3284ebd-e50b-4dac-bedc-0207e2c0ba48 name=/runtime.v1.RuntimeService/Version
	Dec 04 20:15:17 ha-739930 crio[665]: time="2024-12-04 20:15:17.267129857Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b3284ebd-e50b-4dac-bedc-0207e2c0ba48 name=/runtime.v1.RuntimeService/Version
	Dec 04 20:15:17 ha-739930 crio[665]: time="2024-12-04 20:15:17.268075926Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=52e6d8b6-942d-44ce-9b98-747310a1e214 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 20:15:17 ha-739930 crio[665]: time="2024-12-04 20:15:17.268542031Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343317268521447,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=52e6d8b6-942d-44ce-9b98-747310a1e214 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 20:15:17 ha-739930 crio[665]: time="2024-12-04 20:15:17.269115533Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=72cffef8-6ae7-428c-8c23-ac3a38f4a0ba name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:15:17 ha-739930 crio[665]: time="2024-12-04 20:15:17.269164387Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=72cffef8-6ae7-428c-8c23-ac3a38f4a0ba name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:15:17 ha-739930 crio[665]: time="2024-12-04 20:15:17.269393763Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c09d55fbc3f943c790def9073b88f01609e4300451bae039e4cd073f0da97f61,PodSandboxId:8470389e19e5b28b50b8fccf3fc3911e02d6a5d228b7739b5d74827a2cda13ad,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733343092537450258,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gg7dr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a1f1ba1f-1720-4b97-a4a1-ab2d0c4cfaa5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92f0436c068d37f00d41a848d30e7457ee048433b86098444bdaf1dac7c4ae50,PodSandboxId:fdd28652924af40713f1cc9921837027bcf2d919bc8a45a3330e7b8e261100e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733342953924941655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7kbgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 662019c2-29e8-4437-8b14-f9fbf1268d03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab16b32e60a7287ff4948151ca59846f512d2a31828295582ecaf061d7dd0cac,PodSandboxId:a639b811aff3be3e7ee462400bb28276bcdce1f970dba591ef29cb5f8ecf55a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733342953880846280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8kztf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
40363110-9dbd-47ae-8aec-70630543d005,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1496ef67bc6f05f97f8da017d26b5ef402354fd4f5cad7354f86ed14b360b13,PodSandboxId:235aa20e54db74e6eee62b6273bd65f067e9293b34b00f86bebbdf24e92c8c12,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733342953787213731,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84dfb457-b91f-4070-aa2a-9fbe4c6dd7c8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f38276fe657c7e64c36f5e7048dd53d1f38f2a70a523fca08ac6aba6639b37e7,PodSandboxId:22f273a6fc170916ed294c18ea089fc5b6007ec66b51d45c95042ab6c43d6a4b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733342941935728144,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8wsgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8bc54cd-d100-43fa-bda8-28ee9b58b947,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8643b775b5352f9000b818ffdccfc9b8d9ce8d3bebf02d3707ef0c598107b627,PodSandboxId:30611e2a6fdccf72efc978dd3ff57b8cb4927095bb0a8cf4b67cc4353243a252,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733342938
754739932,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tlhfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f01e7f6-5af2-490b-8a2c-266e1701c102,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4a22468ef5bdbd7670b4b9d102217e2f59637e4fb99fa6b968fc2f29ad8208b,PodSandboxId:5f8113a27db247d70444c34f598adb4d8920a3f17f8c7f529ee1503205295514,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173334292981
9119605,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e85517d76879ff3f468d156333aefa2d,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:325ac1400e34aa08998a037b7bad43b257bdf9daf9a87fbce57d6eef87a7bef7,PodSandboxId:a0e82c5e83a213c20a332613e67701ddb375a586927ef5e557431138c4f0f2aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733342927393948447,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b071552f9356e83d17c476e03918fe9,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fdab5e7f0c119181d690a0296a5d0d8ba1871661cadaa54b8d022c0a1b668e3,PodSandboxId:83caff9199eb85d80e88c4f8531ac1ec39b66e92e5f3b7f7cb7e960e35c4ea4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733342927337542360,Labels:map[string]string{io.kubernetes.contain
er.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25b5d213282d4e3d0b17f56770f58750,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52571ff875ebe7e2bae93811588ab15bcc178c9e1c0334570224e1b2bd359246,PodSandboxId:91df0913316d5fe6318abd1b00af1f31ce79fcbd082873c64a4aede83b9b139c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733342927317490542,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod
.name: etcd-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af968bcb5bb689c598a55bb96c345514,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2343748d9b3c27471f4dc81bc815b3b7cfa628a41f8708ffaeec870bf0c05f4,PodSandboxId:bccd9e2c068724fdade2d27ef529f8e648d95a17f366b1c7fc771540b909a24c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733342927271139282,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b85df04725e54b66c583c1e4307b02b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=72cffef8-6ae7-428c-8c23-ac3a38f4a0ba name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:15:17 ha-739930 crio[665]: time="2024-12-04 20:15:17.305431776Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8b43e462-d725-4ef1-92a1-4d5935ee16a6 name=/runtime.v1.RuntimeService/Version
	Dec 04 20:15:17 ha-739930 crio[665]: time="2024-12-04 20:15:17.305508733Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8b43e462-d725-4ef1-92a1-4d5935ee16a6 name=/runtime.v1.RuntimeService/Version
	Dec 04 20:15:17 ha-739930 crio[665]: time="2024-12-04 20:15:17.306608453Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0fac0076-23f9-40e6-9df5-dfa45a29fac7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 20:15:17 ha-739930 crio[665]: time="2024-12-04 20:15:17.307251074Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343317307223060,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0fac0076-23f9-40e6-9df5-dfa45a29fac7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 20:15:17 ha-739930 crio[665]: time="2024-12-04 20:15:17.307934656Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2254cf17-0ad0-4faa-8e5d-12bdd19478c7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:15:17 ha-739930 crio[665]: time="2024-12-04 20:15:17.307987197Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2254cf17-0ad0-4faa-8e5d-12bdd19478c7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:15:17 ha-739930 crio[665]: time="2024-12-04 20:15:17.308221855Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c09d55fbc3f943c790def9073b88f01609e4300451bae039e4cd073f0da97f61,PodSandboxId:8470389e19e5b28b50b8fccf3fc3911e02d6a5d228b7739b5d74827a2cda13ad,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733343092537450258,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gg7dr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a1f1ba1f-1720-4b97-a4a1-ab2d0c4cfaa5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92f0436c068d37f00d41a848d30e7457ee048433b86098444bdaf1dac7c4ae50,PodSandboxId:fdd28652924af40713f1cc9921837027bcf2d919bc8a45a3330e7b8e261100e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733342953924941655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7kbgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 662019c2-29e8-4437-8b14-f9fbf1268d03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab16b32e60a7287ff4948151ca59846f512d2a31828295582ecaf061d7dd0cac,PodSandboxId:a639b811aff3be3e7ee462400bb28276bcdce1f970dba591ef29cb5f8ecf55a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733342953880846280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8kztf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
40363110-9dbd-47ae-8aec-70630543d005,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1496ef67bc6f05f97f8da017d26b5ef402354fd4f5cad7354f86ed14b360b13,PodSandboxId:235aa20e54db74e6eee62b6273bd65f067e9293b34b00f86bebbdf24e92c8c12,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733342953787213731,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84dfb457-b91f-4070-aa2a-9fbe4c6dd7c8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f38276fe657c7e64c36f5e7048dd53d1f38f2a70a523fca08ac6aba6639b37e7,PodSandboxId:22f273a6fc170916ed294c18ea089fc5b6007ec66b51d45c95042ab6c43d6a4b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733342941935728144,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8wsgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8bc54cd-d100-43fa-bda8-28ee9b58b947,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8643b775b5352f9000b818ffdccfc9b8d9ce8d3bebf02d3707ef0c598107b627,PodSandboxId:30611e2a6fdccf72efc978dd3ff57b8cb4927095bb0a8cf4b67cc4353243a252,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733342938
754739932,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tlhfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f01e7f6-5af2-490b-8a2c-266e1701c102,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4a22468ef5bdbd7670b4b9d102217e2f59637e4fb99fa6b968fc2f29ad8208b,PodSandboxId:5f8113a27db247d70444c34f598adb4d8920a3f17f8c7f529ee1503205295514,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173334292981
9119605,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e85517d76879ff3f468d156333aefa2d,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:325ac1400e34aa08998a037b7bad43b257bdf9daf9a87fbce57d6eef87a7bef7,PodSandboxId:a0e82c5e83a213c20a332613e67701ddb375a586927ef5e557431138c4f0f2aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733342927393948447,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b071552f9356e83d17c476e03918fe9,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fdab5e7f0c119181d690a0296a5d0d8ba1871661cadaa54b8d022c0a1b668e3,PodSandboxId:83caff9199eb85d80e88c4f8531ac1ec39b66e92e5f3b7f7cb7e960e35c4ea4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733342927337542360,Labels:map[string]string{io.kubernetes.contain
er.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25b5d213282d4e3d0b17f56770f58750,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52571ff875ebe7e2bae93811588ab15bcc178c9e1c0334570224e1b2bd359246,PodSandboxId:91df0913316d5fe6318abd1b00af1f31ce79fcbd082873c64a4aede83b9b139c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733342927317490542,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod
.name: etcd-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af968bcb5bb689c598a55bb96c345514,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2343748d9b3c27471f4dc81bc815b3b7cfa628a41f8708ffaeec870bf0c05f4,PodSandboxId:bccd9e2c068724fdade2d27ef529f8e648d95a17f366b1c7fc771540b909a24c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733342927271139282,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b85df04725e54b66c583c1e4307b02b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2254cf17-0ad0-4faa-8e5d-12bdd19478c7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c09d55fbc3f94       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   8470389e19e5b       busybox-7dff88458-gg7dr
	92f0436c068d3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   fdd28652924af       coredns-7c65d6cfc9-7kbgr
	ab16b32e60a72       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   a639b811aff3b       coredns-7c65d6cfc9-8kztf
	a1496ef67bc6f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   235aa20e54db7       storage-provisioner
	f38276fe657c7       docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16    6 minutes ago       Running             kindnet-cni               0                   22f273a6fc170       kindnet-8wsgw
	8643b775b5352       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   30611e2a6fdcc       kube-proxy-tlhfv
	b4a22468ef5bd       ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e     6 minutes ago       Running             kube-vip                  0                   5f8113a27db24       kube-vip-ha-739930
	325ac1400e34a       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   a0e82c5e83a21       kube-scheduler-ha-739930
	1fdab5e7f0c11       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   83caff9199eb8       kube-apiserver-ha-739930
	52571ff875ebe       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   91df0913316d5       etcd-ha-739930
	c2343748d9b3c       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   bccd9e2c06872       kube-controller-manager-ha-739930
	
	
	==> coredns [92f0436c068d37f00d41a848d30e7457ee048433b86098444bdaf1dac7c4ae50] <==
	[INFO] 10.244.1.2:60420 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.0000998s
	[INFO] 10.244.2.2:43602 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198643s
	[INFO] 10.244.2.2:55688 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004203463s
	[INFO] 10.244.2.2:58147 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00017975s
	[INFO] 10.244.0.4:34390 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142716s
	[INFO] 10.244.0.4:33345 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000126491s
	[INFO] 10.244.1.2:52771 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001534902s
	[INFO] 10.244.1.2:50377 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155393s
	[INFO] 10.244.1.2:57617 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000204758s
	[INFO] 10.244.1.2:33315 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000087548s
	[INFO] 10.244.1.2:43721 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000138913s
	[INFO] 10.244.2.2:36167 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128945s
	[INFO] 10.244.2.2:39846 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000141449s
	[INFO] 10.244.0.4:49972 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079931s
	[INFO] 10.244.0.4:54249 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000163883s
	[INFO] 10.244.1.2:50096 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116516s
	[INFO] 10.244.1.2:45073 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000132387s
	[INFO] 10.244.2.2:49399 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000153554s
	[INFO] 10.244.2.2:59645 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000182375s
	[INFO] 10.244.0.4:58720 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128913s
	[INFO] 10.244.0.4:43247 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00014397s
	[INFO] 10.244.0.4:41555 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000088414s
	[INFO] 10.244.0.4:43722 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000065939s
	[INFO] 10.244.1.2:45770 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000102411s
	[INFO] 10.244.1.2:50474 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000112012s
	
	
	==> coredns [ab16b32e60a7287ff4948151ca59846f512d2a31828295582ecaf061d7dd0cac] <==
	[INFO] 10.244.1.2:40314 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002016375s
	[INFO] 10.244.2.2:49280 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000323723s
	[INFO] 10.244.2.2:39711 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000206446s
	[INFO] 10.244.2.2:58438 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003929293s
	[INFO] 10.244.2.2:51399 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000159908s
	[INFO] 10.244.2.2:39775 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000142713s
	[INFO] 10.244.0.4:59240 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001795102s
	[INFO] 10.244.0.4:58038 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000108734s
	[INFO] 10.244.0.4:54479 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000222678s
	[INFO] 10.244.0.4:48445 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001109511s
	[INFO] 10.244.0.4:56707 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000120069s
	[INFO] 10.244.0.4:44194 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082627s
	[INFO] 10.244.1.2:36003 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139108s
	[INFO] 10.244.1.2:48175 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001090843s
	[INFO] 10.244.1.2:54736 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072028s
	[INFO] 10.244.2.2:41244 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110768s
	[INFO] 10.244.2.2:58717 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088169s
	[INFO] 10.244.0.4:52576 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000161976s
	[INFO] 10.244.0.4:50935 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010896s
	[INFO] 10.244.1.2:40433 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160052s
	[INFO] 10.244.1.2:48574 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094093s
	[INFO] 10.244.2.2:40890 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131379s
	[INFO] 10.244.2.2:49685 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000289898s
	[INFO] 10.244.1.2:59160 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000148396s
	[INFO] 10.244.1.2:49691 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000140675s
	
	
	==> describe nodes <==
	Name:               ha-739930
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-739930
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59
	                    minikube.k8s.io/name=ha-739930
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_04T20_08_54_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 20:08:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-739930
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 20:15:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 20:11:56 +0000   Wed, 04 Dec 2024 20:08:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 20:11:56 +0000   Wed, 04 Dec 2024 20:08:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 20:11:56 +0000   Wed, 04 Dec 2024 20:08:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 20:11:56 +0000   Wed, 04 Dec 2024 20:09:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.183
	  Hostname:    ha-739930
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4a862467bfb34c3ba59a1a6944c8e8ad
	  System UUID:                4a862467-bfb3-4c3b-a59a-1a6944c8e8ad
	  Boot ID:                    88a12a5a-b072-479a-8944-b6767cbdf4f7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gg7dr              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 coredns-7c65d6cfc9-7kbgr             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m19s
	  kube-system                 coredns-7c65d6cfc9-8kztf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m19s
	  kube-system                 etcd-ha-739930                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m24s
	  kube-system                 kindnet-8wsgw                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m19s
	  kube-system                 kube-apiserver-ha-739930             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 kube-controller-manager-ha-739930    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 kube-proxy-tlhfv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-scheduler-ha-739930             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 kube-vip-ha-739930                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m18s  kube-proxy       
	  Normal  Starting                 6m24s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m24s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m24s  kubelet          Node ha-739930 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m24s  kubelet          Node ha-739930 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m24s  kubelet          Node ha-739930 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m20s  node-controller  Node ha-739930 event: Registered Node ha-739930 in Controller
	  Normal  NodeReady                6m4s   kubelet          Node ha-739930 status is now: NodeReady
	  Normal  RegisteredNode           5m24s  node-controller  Node ha-739930 event: Registered Node ha-739930 in Controller
	  Normal  RegisteredNode           4m9s   node-controller  Node ha-739930 event: Registered Node ha-739930 in Controller
	
	
	Name:               ha-739930-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-739930-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59
	                    minikube.k8s.io/name=ha-739930
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_04T20_09_48_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 20:09:46 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-739930-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 20:12:39 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 04 Dec 2024 20:11:48 +0000   Wed, 04 Dec 2024 20:13:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 04 Dec 2024 20:11:48 +0000   Wed, 04 Dec 2024 20:13:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 04 Dec 2024 20:11:48 +0000   Wed, 04 Dec 2024 20:13:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 04 Dec 2024 20:11:48 +0000   Wed, 04 Dec 2024 20:13:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.216
	  Hostname:    ha-739930-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 309500ff1508404f8337a542897e4a63
	  System UUID:                309500ff-1508-404f-8337-a542897e4a63
	  Boot ID:                    abc62bfe-1148-4265-a781-5ad8762ade09
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-kx56q                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 etcd-ha-739930-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m29s
	  kube-system                 kindnet-z6v65                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m31s
	  kube-system                 kube-apiserver-ha-739930-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 kube-controller-manager-ha-739930-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-proxy-gtw7d                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-scheduler-ha-739930-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-vip-ha-739930-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m26s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m31s (x8 over 5m31s)  kubelet          Node ha-739930-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m31s (x8 over 5m31s)  kubelet          Node ha-739930-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m31s (x7 over 5m31s)  kubelet          Node ha-739930-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m30s                  node-controller  Node ha-739930-m02 event: Registered Node ha-739930-m02 in Controller
	  Normal  RegisteredNode           5m24s                  node-controller  Node ha-739930-m02 event: Registered Node ha-739930-m02 in Controller
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-739930-m02 event: Registered Node ha-739930-m02 in Controller
	  Normal  NodeNotReady             115s                   node-controller  Node ha-739930-m02 status is now: NodeNotReady
	
	
	Name:               ha-739930-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-739930-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59
	                    minikube.k8s.io/name=ha-739930
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_04T20_11_02_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 20:11:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-739930-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 20:15:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 20:12:01 +0000   Wed, 04 Dec 2024 20:11:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 20:12:01 +0000   Wed, 04 Dec 2024 20:11:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 20:12:01 +0000   Wed, 04 Dec 2024 20:11:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 20:12:01 +0000   Wed, 04 Dec 2024 20:11:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.176
	  Hostname:    ha-739930-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7eddf849e101457c8f603f9f7bb068e3
	  System UUID:                7eddf849-e101-457c-8f60-3f9f7bb068e3
	  Boot ID:                    94b82cc0-8208-45bb-85df-9fba3000dbef
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-9pz7p                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 etcd-ha-739930-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m15s
	  kube-system                 kindnet-d2rvr                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m17s
	  kube-system                 kube-apiserver-ha-739930-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-controller-manager-ha-739930-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-proxy-r4895                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-scheduler-ha-739930-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-vip-ha-739930-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m12s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  4m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s (x8 over 4m18s)  kubelet          Node ha-739930-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s (x8 over 4m18s)  kubelet          Node ha-739930-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s (x7 over 4m18s)  kubelet          Node ha-739930-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m15s                  node-controller  Node ha-739930-m03 event: Registered Node ha-739930-m03 in Controller
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-739930-m03 event: Registered Node ha-739930-m03 in Controller
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-739930-m03 event: Registered Node ha-739930-m03 in Controller
	
	
	Name:               ha-739930-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-739930-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59
	                    minikube.k8s.io/name=ha-739930
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_04T20_12_05_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 20:12:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-739930-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 20:15:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 20:12:35 +0000   Wed, 04 Dec 2024 20:12:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 20:12:35 +0000   Wed, 04 Dec 2024 20:12:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 20:12:35 +0000   Wed, 04 Dec 2024 20:12:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 20:12:35 +0000   Wed, 04 Dec 2024 20:12:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.230
	  Hostname:    ha-739930-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 caea6c34853a432f8606c2c81d5d7e80
	  System UUID:                caea6c34-853a-432f-8606-c2c81d5d7e80
	  Boot ID:                    64cbf16d-0924-4d4e-bb2e-e3fb57ad6cf8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2l856       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m12s
	  kube-system                 kube-proxy-2dnzj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m7s                   kube-proxy       
	  Normal  NodeAllocatableEnforced  3m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m12s (x2 over 3m13s)  kubelet          Node ha-739930-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m12s (x2 over 3m13s)  kubelet          Node ha-739930-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m12s (x2 over 3m13s)  kubelet          Node ha-739930-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m10s                  node-controller  Node ha-739930-m04 event: Registered Node ha-739930-m04 in Controller
	  Normal  RegisteredNode           3m9s                   node-controller  Node ha-739930-m04 event: Registered Node ha-739930-m04 in Controller
	  Normal  RegisteredNode           3m8s                   node-controller  Node ha-739930-m04 event: Registered Node ha-739930-m04 in Controller
	  Normal  NodeReady                2m52s (x2 over 2m52s)  kubelet          Node ha-739930-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec 4 20:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053379] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038376] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.818831] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.961468] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +4.569504] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000011] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.583210] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.060308] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060487] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.188680] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.114168] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.247975] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +3.760825] systemd-fstab-generator[750]: Ignoring "noauto" option for root device
	[  +4.102978] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.066053] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.507773] systemd-fstab-generator[1298]: Ignoring "noauto" option for root device
	[  +0.085425] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.435723] kauditd_printk_skb: 21 callbacks suppressed
	[Dec 4 20:09] kauditd_printk_skb: 38 callbacks suppressed
	[ +38.420810] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [52571ff875ebe7e2bae93811588ab15bcc178c9e1c0334570224e1b2bd359246] <==
	{"level":"warn","ts":"2024-12-04T20:15:17.508591Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:17.524871Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:17.548043Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:17.556744Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:17.560638Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:17.573940Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:17.580174Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:17.585816Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:17.589491Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:17.592184Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:17.597466Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:17.603614Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:17.609077Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:17.609153Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:17.612275Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:17.614903Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:17.620306Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:17.625973Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:17.631591Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:17.634594Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:17.637021Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:17.640084Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:17.645107Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:17.650854Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:17.709070Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 20:15:17 up 7 min,  0 users,  load average: 0.36, 0.27, 0.12
	Linux ha-739930 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [f38276fe657c7e64c36f5e7048dd53d1f38f2a70a523fca08ac6aba6639b37e7] <==
	I1204 20:14:42.876632       1 main.go:324] Node ha-739930-m04 has CIDR [10.244.3.0/24] 
	I1204 20:14:52.876836       1 main.go:297] Handling node with IPs: map[192.168.39.183:{}]
	I1204 20:14:52.876889       1 main.go:301] handling current node
	I1204 20:14:52.876924       1 main.go:297] Handling node with IPs: map[192.168.39.216:{}]
	I1204 20:14:52.876933       1 main.go:324] Node ha-739930-m02 has CIDR [10.244.1.0/24] 
	I1204 20:14:52.877263       1 main.go:297] Handling node with IPs: map[192.168.39.176:{}]
	I1204 20:14:52.877287       1 main.go:324] Node ha-739930-m03 has CIDR [10.244.2.0/24] 
	I1204 20:14:52.877494       1 main.go:297] Handling node with IPs: map[192.168.39.230:{}]
	I1204 20:14:52.877511       1 main.go:324] Node ha-739930-m04 has CIDR [10.244.3.0/24] 
	I1204 20:15:02.869044       1 main.go:297] Handling node with IPs: map[192.168.39.183:{}]
	I1204 20:15:02.869284       1 main.go:301] handling current node
	I1204 20:15:02.869336       1 main.go:297] Handling node with IPs: map[192.168.39.216:{}]
	I1204 20:15:02.869343       1 main.go:324] Node ha-739930-m02 has CIDR [10.244.1.0/24] 
	I1204 20:15:02.869633       1 main.go:297] Handling node with IPs: map[192.168.39.176:{}]
	I1204 20:15:02.869654       1 main.go:324] Node ha-739930-m03 has CIDR [10.244.2.0/24] 
	I1204 20:15:02.869898       1 main.go:297] Handling node with IPs: map[192.168.39.230:{}]
	I1204 20:15:02.869919       1 main.go:324] Node ha-739930-m04 has CIDR [10.244.3.0/24] 
	I1204 20:15:12.876661       1 main.go:297] Handling node with IPs: map[192.168.39.183:{}]
	I1204 20:15:12.876725       1 main.go:301] handling current node
	I1204 20:15:12.876787       1 main.go:297] Handling node with IPs: map[192.168.39.216:{}]
	I1204 20:15:12.876795       1 main.go:324] Node ha-739930-m02 has CIDR [10.244.1.0/24] 
	I1204 20:15:12.877118       1 main.go:297] Handling node with IPs: map[192.168.39.176:{}]
	I1204 20:15:12.877138       1 main.go:324] Node ha-739930-m03 has CIDR [10.244.2.0/24] 
	I1204 20:15:12.877303       1 main.go:297] Handling node with IPs: map[192.168.39.230:{}]
	I1204 20:15:12.877319       1 main.go:324] Node ha-739930-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [1fdab5e7f0c119181d690a0296a5d0d8ba1871661cadaa54b8d022c0a1b668e3] <==
	I1204 20:08:52.109573       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1204 20:08:52.115869       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.183]
	I1204 20:08:52.116893       1 controller.go:615] quota admission added evaluator for: endpoints
	I1204 20:08:52.120949       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1204 20:08:52.319935       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1204 20:08:53.401361       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1204 20:08:53.418287       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1204 20:08:53.427159       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1204 20:08:57.975080       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1204 20:08:58.071170       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1204 20:11:33.595040       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51898: use of closed network connection
	E1204 20:11:33.787246       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51926: use of closed network connection
	E1204 20:11:33.961220       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51944: use of closed network connection
	E1204 20:11:34.139353       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51958: use of closed network connection
	E1204 20:11:34.492487       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51978: use of closed network connection
	E1204 20:11:34.660669       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51994: use of closed network connection
	E1204 20:11:34.825641       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52014: use of closed network connection
	E1204 20:11:35.000850       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52034: use of closed network connection
	E1204 20:11:35.295050       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52074: use of closed network connection
	E1204 20:11:35.467188       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52090: use of closed network connection
	E1204 20:11:35.632176       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52096: use of closed network connection
	E1204 20:11:35.802340       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52124: use of closed network connection
	E1204 20:11:35.976054       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52130: use of closed network connection
	E1204 20:11:36.156331       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52148: use of closed network connection
	W1204 20:13:02.138009       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.176 192.168.39.183]
	
	
	==> kube-controller-manager [c2343748d9b3c27471f4dc81bc815b3b7cfa628a41f8708ffaeec870bf0c05f4] <==
	I1204 20:12:05.098063       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-739930-m04" podCIDRs=["10.244.3.0/24"]
	I1204 20:12:05.098353       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:05.099501       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:05.129202       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:05.212844       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:05.605704       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:07.219432       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-739930-m04"
	I1204 20:12:07.250173       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:08.816441       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:09.034862       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:09.114294       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:09.193601       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:15.131792       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:25.187809       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-739930-m04"
	I1204 20:12:25.187897       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:25.200602       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:27.234376       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:35.291257       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:13:22.261174       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-739930-m04"
	I1204 20:13:22.262013       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m02"
	I1204 20:13:22.294239       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m02"
	I1204 20:13:22.349815       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="26.422518ms"
	I1204 20:13:22.353121       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="53.184µs"
	I1204 20:13:23.918547       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m02"
	I1204 20:13:27.468391       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m02"
	
	
	==> kube-proxy [8643b775b5352f9000b818ffdccfc9b8d9ce8d3bebf02d3707ef0c598107b627] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1204 20:08:59.055359       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1204 20:08:59.074919       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.183"]
	E1204 20:08:59.075054       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1204 20:08:59.106971       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1204 20:08:59.107053       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1204 20:08:59.107091       1 server_linux.go:169] "Using iptables Proxier"
	I1204 20:08:59.110117       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1204 20:08:59.110853       1 server.go:483] "Version info" version="v1.31.2"
	I1204 20:08:59.110911       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 20:08:59.113929       1 config.go:328] "Starting node config controller"
	I1204 20:08:59.113988       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1204 20:08:59.114597       1 config.go:199] "Starting service config controller"
	I1204 20:08:59.114621       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1204 20:08:59.114931       1 config.go:105] "Starting endpoint slice config controller"
	I1204 20:08:59.114959       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1204 20:08:59.214563       1 shared_informer.go:320] Caches are synced for node config
	I1204 20:08:59.215004       1 shared_informer.go:320] Caches are synced for service config
	I1204 20:08:59.216196       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [325ac1400e34aa08998a037b7bad43b257bdf9daf9a87fbce57d6eef87a7bef7] <==
	E1204 20:08:51.687075       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 20:08:51.698835       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1204 20:08:51.698950       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1204 20:08:51.756911       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1204 20:08:51.757061       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 20:08:51.761020       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1204 20:08:51.761159       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1204 20:08:54.377656       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1204 20:11:28.468555       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="e79c51d4-80e5-490b-906e-e376195d820e" pod="default/busybox-7dff88458-4zmkp" assumedNode="ha-739930-m02" currentNode="ha-739930-m03"
	E1204 20:11:28.510519       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-4zmkp\": pod busybox-7dff88458-4zmkp is already assigned to node \"ha-739930-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-4zmkp" node="ha-739930-m03"
	E1204 20:11:28.510990       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e79c51d4-80e5-490b-906e-e376195d820e(default/busybox-7dff88458-4zmkp) was assumed on ha-739930-m03 but assigned to ha-739930-m02" pod="default/busybox-7dff88458-4zmkp"
	E1204 20:11:28.511176       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-4zmkp\": pod busybox-7dff88458-4zmkp is already assigned to node \"ha-739930-m02\"" pod="default/busybox-7dff88458-4zmkp"
	I1204 20:11:28.511316       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-4zmkp" node="ha-739930-m02"
	I1204 20:11:28.544933       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="5411c4b8-6cb8-493d-8ce1-adcf557c68bc" pod="default/busybox-7dff88458-b94b5" assumedNode="ha-739930" currentNode="ha-739930-m03"
	E1204 20:11:28.557489       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-b94b5\": pod busybox-7dff88458-b94b5 is already assigned to node \"ha-739930\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-b94b5" node="ha-739930-m03"
	E1204 20:11:28.557560       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 5411c4b8-6cb8-493d-8ce1-adcf557c68bc(default/busybox-7dff88458-b94b5) was assumed on ha-739930-m03 but assigned to ha-739930" pod="default/busybox-7dff88458-b94b5"
	E1204 20:11:28.557587       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-b94b5\": pod busybox-7dff88458-b94b5 is already assigned to node \"ha-739930\"" pod="default/busybox-7dff88458-b94b5"
	I1204 20:11:28.557614       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-b94b5" node="ha-739930"
	E1204 20:11:30.014314       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-gg7dr\": pod busybox-7dff88458-gg7dr is already assigned to node \"ha-739930\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-gg7dr" node="ha-739930"
	E1204 20:11:30.014481       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a1f1ba1f-1720-4b97-a4a1-ab2d0c4cfaa5(default/busybox-7dff88458-gg7dr) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-gg7dr"
	E1204 20:11:30.015337       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-gg7dr\": pod busybox-7dff88458-gg7dr is already assigned to node \"ha-739930\"" pod="default/busybox-7dff88458-gg7dr"
	I1204 20:11:30.015401       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-gg7dr" node="ha-739930"
	E1204 20:12:05.139969       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-kswc6\": pod kindnet-kswc6 is already assigned to node \"ha-739930-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-kswc6" node="ha-739930-m04"
	E1204 20:12:05.140096       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-kswc6\": pod kindnet-kswc6 is already assigned to node \"ha-739930-m04\"" pod="kube-system/kindnet-kswc6"
	I1204 20:12:05.140125       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-kswc6" node="ha-739930-m04"
	
	
	==> kubelet <==
	Dec 04 20:13:53 ha-739930 kubelet[1305]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 04 20:13:53 ha-739930 kubelet[1305]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 04 20:13:53 ha-739930 kubelet[1305]: E1204 20:13:53.462332    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343233462001754,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:13:53 ha-739930 kubelet[1305]: E1204 20:13:53.462375    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343233462001754,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:03 ha-739930 kubelet[1305]: E1204 20:14:03.465094    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343243464625528,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:03 ha-739930 kubelet[1305]: E1204 20:14:03.465133    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343243464625528,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:13 ha-739930 kubelet[1305]: E1204 20:14:13.466702    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343253466412207,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:13 ha-739930 kubelet[1305]: E1204 20:14:13.467091    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343253466412207,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:23 ha-739930 kubelet[1305]: E1204 20:14:23.469001    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343263468683209,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:23 ha-739930 kubelet[1305]: E1204 20:14:23.469280    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343263468683209,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:33 ha-739930 kubelet[1305]: E1204 20:14:33.471311    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343273470919351,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:33 ha-739930 kubelet[1305]: E1204 20:14:33.471582    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343273470919351,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:43 ha-739930 kubelet[1305]: E1204 20:14:43.473913    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343283473338293,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:43 ha-739930 kubelet[1305]: E1204 20:14:43.474005    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343283473338293,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:53 ha-739930 kubelet[1305]: E1204 20:14:53.358128    1305 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 04 20:14:53 ha-739930 kubelet[1305]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 04 20:14:53 ha-739930 kubelet[1305]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 04 20:14:53 ha-739930 kubelet[1305]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 04 20:14:53 ha-739930 kubelet[1305]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 04 20:14:53 ha-739930 kubelet[1305]: E1204 20:14:53.476132    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343293475734296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:53 ha-739930 kubelet[1305]: E1204 20:14:53.476169    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343293475734296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:15:03 ha-739930 kubelet[1305]: E1204 20:15:03.477995    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343303477421901,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:15:03 ha-739930 kubelet[1305]: E1204 20:15:03.478354    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343303477421901,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:15:13 ha-739930 kubelet[1305]: E1204 20:15:13.481441    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343313479636396,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:15:13 ha-739930 kubelet[1305]: E1204 20:15:13.481510    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343313479636396,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-739930 -n ha-739930
helpers_test.go:261: (dbg) Run:  kubectl --context ha-739930 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (6.24s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.165418455s)
ha_test.go:309: expected profile "ha-739930" in json of 'profile list' to have "HAppy" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-739930\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-739930\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-739930\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.183\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.216\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.176\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.230\",\"Port\":0,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-739930 -n ha-739930
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-739930 logs -n 25: (1.301010688s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-739930 cp ha-739930-m03:/home/docker/cp-test.txt                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930:/home/docker/cp-test_ha-739930-m03_ha-739930.txt                       |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n ha-739930 sudo cat                                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | /home/docker/cp-test_ha-739930-m03_ha-739930.txt                                 |           |         |         |                     |                     |
	| cp      | ha-739930 cp ha-739930-m03:/home/docker/cp-test.txt                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m02:/home/docker/cp-test_ha-739930-m03_ha-739930-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n ha-739930-m02 sudo cat                                          | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | /home/docker/cp-test_ha-739930-m03_ha-739930-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-739930 cp ha-739930-m03:/home/docker/cp-test.txt                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m04:/home/docker/cp-test_ha-739930-m03_ha-739930-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n ha-739930-m04 sudo cat                                          | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | /home/docker/cp-test_ha-739930-m03_ha-739930-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-739930 cp testdata/cp-test.txt                                                | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-739930 cp ha-739930-m04:/home/docker/cp-test.txt                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1344431772/001/cp-test_ha-739930-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-739930 cp ha-739930-m04:/home/docker/cp-test.txt                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930:/home/docker/cp-test_ha-739930-m04_ha-739930.txt                       |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n ha-739930 sudo cat                                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | /home/docker/cp-test_ha-739930-m04_ha-739930.txt                                 |           |         |         |                     |                     |
	| cp      | ha-739930 cp ha-739930-m04:/home/docker/cp-test.txt                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m02:/home/docker/cp-test_ha-739930-m04_ha-739930-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n ha-739930-m02 sudo cat                                          | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | /home/docker/cp-test_ha-739930-m04_ha-739930-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-739930 cp ha-739930-m04:/home/docker/cp-test.txt                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m03:/home/docker/cp-test_ha-739930-m04_ha-739930-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n ha-739930-m03 sudo cat                                          | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | /home/docker/cp-test_ha-739930-m04_ha-739930-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-739930 node stop m02 -v=7                                                     | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-739930 node start m02 -v=7                                                    | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:15 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 20:08:11
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 20:08:11.939431   27912 out.go:345] Setting OutFile to fd 1 ...
	I1204 20:08:11.939545   27912 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 20:08:11.939555   27912 out.go:358] Setting ErrFile to fd 2...
	I1204 20:08:11.939562   27912 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 20:08:11.939744   27912 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19985-10581/.minikube/bin
	I1204 20:08:11.940314   27912 out.go:352] Setting JSON to false
	I1204 20:08:11.941189   27912 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3042,"bootTime":1733339850,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 20:08:11.941293   27912 start.go:139] virtualization: kvm guest
	I1204 20:08:11.944336   27912 out.go:177] * [ha-739930] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1204 20:08:11.945852   27912 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 20:08:11.945847   27912 notify.go:220] Checking for updates...
	I1204 20:08:11.948662   27912 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 20:08:11.950105   27912 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 20:08:11.951395   27912 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 20:08:11.952616   27912 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1204 20:08:11.953838   27912 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 20:08:11.955060   27912 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 20:08:11.990494   27912 out.go:177] * Using the kvm2 driver based on user configuration
	I1204 20:08:11.991825   27912 start.go:297] selected driver: kvm2
	I1204 20:08:11.991844   27912 start.go:901] validating driver "kvm2" against <nil>
	I1204 20:08:11.991856   27912 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 20:08:11.992661   27912 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 20:08:11.992744   27912 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19985-10581/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1204 20:08:12.008005   27912 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1204 20:08:12.008178   27912 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 20:08:12.008532   27912 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 20:08:12.008571   27912 cni.go:84] Creating CNI manager for ""
	I1204 20:08:12.008627   27912 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1204 20:08:12.008639   27912 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1204 20:08:12.008710   27912 start.go:340] cluster config:
	{Name:ha-739930 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 20:08:12.008840   27912 iso.go:125] acquiring lock: {Name:mk5fb0f3f6da76e6cd812291a551e1592ef2c232 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 20:08:12.010621   27912 out.go:177] * Starting "ha-739930" primary control-plane node in "ha-739930" cluster
	I1204 20:08:12.011905   27912 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 20:08:12.011946   27912 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1204 20:08:12.011958   27912 cache.go:56] Caching tarball of preloaded images
	I1204 20:08:12.012045   27912 preload.go:172] Found /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 20:08:12.012061   27912 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 20:08:12.012439   27912 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/config.json ...
	I1204 20:08:12.012463   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/config.json: {Name:mk7402f769abcec1c18cda99e23fa60ffac7b3dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:08:12.012602   27912 start.go:360] acquireMachinesLock for ha-739930: {Name:mkf124e8b45170ae95981b24944344de6899c5b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 20:08:12.012630   27912 start.go:364] duration metric: took 16.073µs to acquireMachinesLock for "ha-739930"
	I1204 20:08:12.012648   27912 start.go:93] Provisioning new machine with config: &{Name:ha-739930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 20:08:12.012705   27912 start.go:125] createHost starting for "" (driver="kvm2")
	I1204 20:08:12.014265   27912 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 20:08:12.014396   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:08:12.014435   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:08:12.028697   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39229
	I1204 20:08:12.029103   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:08:12.029651   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:08:12.029671   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:08:12.029950   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:08:12.030110   27912 main.go:141] libmachine: (ha-739930) Calling .GetMachineName
	I1204 20:08:12.030242   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:08:12.030391   27912 start.go:159] libmachine.API.Create for "ha-739930" (driver="kvm2")
	I1204 20:08:12.030413   27912 client.go:168] LocalClient.Create starting
	I1204 20:08:12.030437   27912 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem
	I1204 20:08:12.030469   27912 main.go:141] libmachine: Decoding PEM data...
	I1204 20:08:12.030485   27912 main.go:141] libmachine: Parsing certificate...
	I1204 20:08:12.030532   27912 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem
	I1204 20:08:12.030550   27912 main.go:141] libmachine: Decoding PEM data...
	I1204 20:08:12.030563   27912 main.go:141] libmachine: Parsing certificate...
	I1204 20:08:12.030580   27912 main.go:141] libmachine: Running pre-create checks...
	I1204 20:08:12.030594   27912 main.go:141] libmachine: (ha-739930) Calling .PreCreateCheck
	I1204 20:08:12.030896   27912 main.go:141] libmachine: (ha-739930) Calling .GetConfigRaw
	I1204 20:08:12.031303   27912 main.go:141] libmachine: Creating machine...
	I1204 20:08:12.031315   27912 main.go:141] libmachine: (ha-739930) Calling .Create
	I1204 20:08:12.031447   27912 main.go:141] libmachine: (ha-739930) Creating KVM machine...
	I1204 20:08:12.032790   27912 main.go:141] libmachine: (ha-739930) DBG | found existing default KVM network
	I1204 20:08:12.033408   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:12.033271   27935 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b70}
	I1204 20:08:12.033431   27912 main.go:141] libmachine: (ha-739930) DBG | created network xml: 
	I1204 20:08:12.033442   27912 main.go:141] libmachine: (ha-739930) DBG | <network>
	I1204 20:08:12.033450   27912 main.go:141] libmachine: (ha-739930) DBG |   <name>mk-ha-739930</name>
	I1204 20:08:12.033465   27912 main.go:141] libmachine: (ha-739930) DBG |   <dns enable='no'/>
	I1204 20:08:12.033475   27912 main.go:141] libmachine: (ha-739930) DBG |   
	I1204 20:08:12.033484   27912 main.go:141] libmachine: (ha-739930) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1204 20:08:12.033497   27912 main.go:141] libmachine: (ha-739930) DBG |     <dhcp>
	I1204 20:08:12.033526   27912 main.go:141] libmachine: (ha-739930) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1204 20:08:12.033560   27912 main.go:141] libmachine: (ha-739930) DBG |     </dhcp>
	I1204 20:08:12.033571   27912 main.go:141] libmachine: (ha-739930) DBG |   </ip>
	I1204 20:08:12.033582   27912 main.go:141] libmachine: (ha-739930) DBG |   
	I1204 20:08:12.033602   27912 main.go:141] libmachine: (ha-739930) DBG | </network>
	I1204 20:08:12.033619   27912 main.go:141] libmachine: (ha-739930) DBG | 
	I1204 20:08:12.038715   27912 main.go:141] libmachine: (ha-739930) DBG | trying to create private KVM network mk-ha-739930 192.168.39.0/24...
	I1204 20:08:12.104228   27912 main.go:141] libmachine: (ha-739930) Setting up store path in /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930 ...
	I1204 20:08:12.104263   27912 main.go:141] libmachine: (ha-739930) Building disk image from file:///home/jenkins/minikube-integration/19985-10581/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1204 20:08:12.104273   27912 main.go:141] libmachine: (ha-739930) DBG | private KVM network mk-ha-739930 192.168.39.0/24 created
	I1204 20:08:12.104290   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:12.104148   27935 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 20:08:12.104318   27912 main.go:141] libmachine: (ha-739930) Downloading /home/jenkins/minikube-integration/19985-10581/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19985-10581/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1204 20:08:12.357869   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:12.357760   27935 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa...
	I1204 20:08:12.476934   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:12.476798   27935 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/ha-739930.rawdisk...
	I1204 20:08:12.476961   27912 main.go:141] libmachine: (ha-739930) DBG | Writing magic tar header
	I1204 20:08:12.476973   27912 main.go:141] libmachine: (ha-739930) DBG | Writing SSH key tar header
	I1204 20:08:12.476980   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:12.476911   27935 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930 ...
	I1204 20:08:12.476989   27912 main.go:141] libmachine: (ha-739930) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930
	I1204 20:08:12.477071   27912 main.go:141] libmachine: (ha-739930) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube/machines
	I1204 20:08:12.477126   27912 main.go:141] libmachine: (ha-739930) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930 (perms=drwx------)
	I1204 20:08:12.477140   27912 main.go:141] libmachine: (ha-739930) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 20:08:12.477159   27912 main.go:141] libmachine: (ha-739930) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581
	I1204 20:08:12.477173   27912 main.go:141] libmachine: (ha-739930) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1204 20:08:12.477183   27912 main.go:141] libmachine: (ha-739930) DBG | Checking permissions on dir: /home/jenkins
	I1204 20:08:12.477188   27912 main.go:141] libmachine: (ha-739930) DBG | Checking permissions on dir: /home
	I1204 20:08:12.477199   27912 main.go:141] libmachine: (ha-739930) DBG | Skipping /home - not owner
	I1204 20:08:12.477241   27912 main.go:141] libmachine: (ha-739930) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube/machines (perms=drwxr-xr-x)
	I1204 20:08:12.477265   27912 main.go:141] libmachine: (ha-739930) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube (perms=drwxr-xr-x)
	I1204 20:08:12.477280   27912 main.go:141] libmachine: (ha-739930) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581 (perms=drwxrwxr-x)
	I1204 20:08:12.477294   27912 main.go:141] libmachine: (ha-739930) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1204 20:08:12.477311   27912 main.go:141] libmachine: (ha-739930) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1204 20:08:12.477322   27912 main.go:141] libmachine: (ha-739930) Creating domain...
	I1204 20:08:12.478077   27912 main.go:141] libmachine: (ha-739930) define libvirt domain using xml: 
	I1204 20:08:12.478098   27912 main.go:141] libmachine: (ha-739930) <domain type='kvm'>
	I1204 20:08:12.478108   27912 main.go:141] libmachine: (ha-739930)   <name>ha-739930</name>
	I1204 20:08:12.478120   27912 main.go:141] libmachine: (ha-739930)   <memory unit='MiB'>2200</memory>
	I1204 20:08:12.478128   27912 main.go:141] libmachine: (ha-739930)   <vcpu>2</vcpu>
	I1204 20:08:12.478137   27912 main.go:141] libmachine: (ha-739930)   <features>
	I1204 20:08:12.478144   27912 main.go:141] libmachine: (ha-739930)     <acpi/>
	I1204 20:08:12.478153   27912 main.go:141] libmachine: (ha-739930)     <apic/>
	I1204 20:08:12.478159   27912 main.go:141] libmachine: (ha-739930)     <pae/>
	I1204 20:08:12.478166   27912 main.go:141] libmachine: (ha-739930)     
	I1204 20:08:12.478176   27912 main.go:141] libmachine: (ha-739930)   </features>
	I1204 20:08:12.478183   27912 main.go:141] libmachine: (ha-739930)   <cpu mode='host-passthrough'>
	I1204 20:08:12.478254   27912 main.go:141] libmachine: (ha-739930)   
	I1204 20:08:12.478278   27912 main.go:141] libmachine: (ha-739930)   </cpu>
	I1204 20:08:12.478290   27912 main.go:141] libmachine: (ha-739930)   <os>
	I1204 20:08:12.478313   27912 main.go:141] libmachine: (ha-739930)     <type>hvm</type>
	I1204 20:08:12.478326   27912 main.go:141] libmachine: (ha-739930)     <boot dev='cdrom'/>
	I1204 20:08:12.478335   27912 main.go:141] libmachine: (ha-739930)     <boot dev='hd'/>
	I1204 20:08:12.478344   27912 main.go:141] libmachine: (ha-739930)     <bootmenu enable='no'/>
	I1204 20:08:12.478354   27912 main.go:141] libmachine: (ha-739930)   </os>
	I1204 20:08:12.478361   27912 main.go:141] libmachine: (ha-739930)   <devices>
	I1204 20:08:12.478371   27912 main.go:141] libmachine: (ha-739930)     <disk type='file' device='cdrom'>
	I1204 20:08:12.478384   27912 main.go:141] libmachine: (ha-739930)       <source file='/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/boot2docker.iso'/>
	I1204 20:08:12.478394   27912 main.go:141] libmachine: (ha-739930)       <target dev='hdc' bus='scsi'/>
	I1204 20:08:12.478401   27912 main.go:141] libmachine: (ha-739930)       <readonly/>
	I1204 20:08:12.478416   27912 main.go:141] libmachine: (ha-739930)     </disk>
	I1204 20:08:12.478430   27912 main.go:141] libmachine: (ha-739930)     <disk type='file' device='disk'>
	I1204 20:08:12.478442   27912 main.go:141] libmachine: (ha-739930)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1204 20:08:12.478457   27912 main.go:141] libmachine: (ha-739930)       <source file='/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/ha-739930.rawdisk'/>
	I1204 20:08:12.478467   27912 main.go:141] libmachine: (ha-739930)       <target dev='hda' bus='virtio'/>
	I1204 20:08:12.478475   27912 main.go:141] libmachine: (ha-739930)     </disk>
	I1204 20:08:12.478490   27912 main.go:141] libmachine: (ha-739930)     <interface type='network'>
	I1204 20:08:12.478503   27912 main.go:141] libmachine: (ha-739930)       <source network='mk-ha-739930'/>
	I1204 20:08:12.478512   27912 main.go:141] libmachine: (ha-739930)       <model type='virtio'/>
	I1204 20:08:12.478520   27912 main.go:141] libmachine: (ha-739930)     </interface>
	I1204 20:08:12.478530   27912 main.go:141] libmachine: (ha-739930)     <interface type='network'>
	I1204 20:08:12.478542   27912 main.go:141] libmachine: (ha-739930)       <source network='default'/>
	I1204 20:08:12.478552   27912 main.go:141] libmachine: (ha-739930)       <model type='virtio'/>
	I1204 20:08:12.478599   27912 main.go:141] libmachine: (ha-739930)     </interface>
	I1204 20:08:12.478617   27912 main.go:141] libmachine: (ha-739930)     <serial type='pty'>
	I1204 20:08:12.478622   27912 main.go:141] libmachine: (ha-739930)       <target port='0'/>
	I1204 20:08:12.478628   27912 main.go:141] libmachine: (ha-739930)     </serial>
	I1204 20:08:12.478636   27912 main.go:141] libmachine: (ha-739930)     <console type='pty'>
	I1204 20:08:12.478641   27912 main.go:141] libmachine: (ha-739930)       <target type='serial' port='0'/>
	I1204 20:08:12.478650   27912 main.go:141] libmachine: (ha-739930)     </console>
	I1204 20:08:12.478654   27912 main.go:141] libmachine: (ha-739930)     <rng model='virtio'>
	I1204 20:08:12.478660   27912 main.go:141] libmachine: (ha-739930)       <backend model='random'>/dev/random</backend>
	I1204 20:08:12.478666   27912 main.go:141] libmachine: (ha-739930)     </rng>
	I1204 20:08:12.478671   27912 main.go:141] libmachine: (ha-739930)     
	I1204 20:08:12.478674   27912 main.go:141] libmachine: (ha-739930)     
	I1204 20:08:12.478679   27912 main.go:141] libmachine: (ha-739930)   </devices>
	I1204 20:08:12.478685   27912 main.go:141] libmachine: (ha-739930) </domain>
	I1204 20:08:12.478691   27912 main.go:141] libmachine: (ha-739930) 
	I1204 20:08:12.482962   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:1f:34:29 in network default
	I1204 20:08:12.483451   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:12.483468   27912 main.go:141] libmachine: (ha-739930) Ensuring networks are active...
	I1204 20:08:12.484073   27912 main.go:141] libmachine: (ha-739930) Ensuring network default is active
	I1204 20:08:12.484443   27912 main.go:141] libmachine: (ha-739930) Ensuring network mk-ha-739930 is active
	I1204 20:08:12.485051   27912 main.go:141] libmachine: (ha-739930) Getting domain xml...
	I1204 20:08:12.485709   27912 main.go:141] libmachine: (ha-739930) Creating domain...
	I1204 20:08:13.663232   27912 main.go:141] libmachine: (ha-739930) Waiting to get IP...
	I1204 20:08:13.663928   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:13.664244   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:13.664289   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:13.664239   27935 retry.go:31] will retry after 311.107761ms: waiting for machine to come up
	I1204 20:08:13.976518   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:13.976875   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:13.976897   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:13.976832   27935 retry.go:31] will retry after 302.848525ms: waiting for machine to come up
	I1204 20:08:14.281431   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:14.281818   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:14.281846   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:14.281773   27935 retry.go:31] will retry after 460.768304ms: waiting for machine to come up
	I1204 20:08:14.744364   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:14.744813   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:14.744835   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:14.744754   27935 retry.go:31] will retry after 399.590847ms: waiting for machine to come up
	I1204 20:08:15.146387   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:15.146887   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:15.146911   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:15.146850   27935 retry.go:31] will retry after 733.547268ms: waiting for machine to come up
	I1204 20:08:15.882052   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:15.882481   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:15.882509   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:15.882450   27935 retry.go:31] will retry after 598.816129ms: waiting for machine to come up
	I1204 20:08:16.483323   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:16.483724   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:16.483766   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:16.483669   27935 retry.go:31] will retry after 816.886511ms: waiting for machine to come up
	I1204 20:08:17.302385   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:17.302850   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:17.303157   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:17.303086   27935 retry.go:31] will retry after 1.092347228s: waiting for machine to come up
	I1204 20:08:18.397513   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:18.397955   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:18.397979   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:18.397908   27935 retry.go:31] will retry after 1.349280463s: waiting for machine to come up
	I1204 20:08:19.748591   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:19.749086   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:19.749107   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:19.749051   27935 retry.go:31] will retry after 1.929176971s: waiting for machine to come up
	I1204 20:08:21.681322   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:21.681787   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:21.681821   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:21.681719   27935 retry.go:31] will retry after 2.034104658s: waiting for machine to come up
	I1204 20:08:23.717496   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:23.717880   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:23.717910   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:23.717836   27935 retry.go:31] will retry after 2.982891394s: waiting for machine to come up
	I1204 20:08:26.703937   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:26.704406   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:26.704442   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:26.704358   27935 retry.go:31] will retry after 2.968408416s: waiting for machine to come up
	I1204 20:08:29.675768   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:29.676304   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find current IP address of domain ha-739930 in network mk-ha-739930
	I1204 20:08:29.676332   27912 main.go:141] libmachine: (ha-739930) DBG | I1204 20:08:29.676260   27935 retry.go:31] will retry after 5.520024319s: waiting for machine to come up
	I1204 20:08:35.199569   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.200041   27912 main.go:141] libmachine: (ha-739930) Found IP for machine: 192.168.39.183
	I1204 20:08:35.200065   27912 main.go:141] libmachine: (ha-739930) Reserving static IP address...
	I1204 20:08:35.200092   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has current primary IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.200437   27912 main.go:141] libmachine: (ha-739930) DBG | unable to find host DHCP lease matching {name: "ha-739930", mac: "52:54:00:b9:91:f7", ip: "192.168.39.183"} in network mk-ha-739930
	I1204 20:08:35.268817   27912 main.go:141] libmachine: (ha-739930) Reserved static IP address: 192.168.39.183
	I1204 20:08:35.268847   27912 main.go:141] libmachine: (ha-739930) Waiting for SSH to be available...
	I1204 20:08:35.268856   27912 main.go:141] libmachine: (ha-739930) DBG | Getting to WaitForSSH function...
	I1204 20:08:35.271480   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.271869   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:35.271895   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.271987   27912 main.go:141] libmachine: (ha-739930) DBG | Using SSH client type: external
	I1204 20:08:35.272004   27912 main.go:141] libmachine: (ha-739930) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa (-rw-------)
	I1204 20:08:35.272069   27912 main.go:141] libmachine: (ha-739930) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.183 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 20:08:35.272087   27912 main.go:141] libmachine: (ha-739930) DBG | About to run SSH command:
	I1204 20:08:35.272103   27912 main.go:141] libmachine: (ha-739930) DBG | exit 0
	I1204 20:08:35.395351   27912 main.go:141] libmachine: (ha-739930) DBG | SSH cmd err, output: <nil>: 
	I1204 20:08:35.395650   27912 main.go:141] libmachine: (ha-739930) KVM machine creation complete!
	I1204 20:08:35.395986   27912 main.go:141] libmachine: (ha-739930) Calling .GetConfigRaw
	I1204 20:08:35.396534   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:08:35.396731   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:08:35.396857   27912 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1204 20:08:35.396871   27912 main.go:141] libmachine: (ha-739930) Calling .GetState
	I1204 20:08:35.398039   27912 main.go:141] libmachine: Detecting operating system of created instance...
	I1204 20:08:35.398051   27912 main.go:141] libmachine: Waiting for SSH to be available...
	I1204 20:08:35.398055   27912 main.go:141] libmachine: Getting to WaitForSSH function...
	I1204 20:08:35.398060   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:35.400170   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.400525   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:35.400571   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.400650   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:35.400812   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:35.400979   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:35.401117   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:35.401289   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:08:35.401492   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1204 20:08:35.401507   27912 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1204 20:08:35.502303   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 20:08:35.502340   27912 main.go:141] libmachine: Detecting the provisioner...
	I1204 20:08:35.502352   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:35.504752   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.505142   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:35.505165   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.505360   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:35.505545   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:35.505676   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:35.505789   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:35.505915   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:08:35.506073   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1204 20:08:35.506082   27912 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1204 20:08:35.608173   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1204 20:08:35.608233   27912 main.go:141] libmachine: found compatible host: buildroot
	I1204 20:08:35.608240   27912 main.go:141] libmachine: Provisioning with buildroot...
	I1204 20:08:35.608247   27912 main.go:141] libmachine: (ha-739930) Calling .GetMachineName
	I1204 20:08:35.608464   27912 buildroot.go:166] provisioning hostname "ha-739930"
	I1204 20:08:35.608480   27912 main.go:141] libmachine: (ha-739930) Calling .GetMachineName
	I1204 20:08:35.608679   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:35.611354   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.611746   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:35.611772   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.611904   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:35.612062   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:35.612200   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:35.612312   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:35.612460   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:08:35.612630   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1204 20:08:35.612642   27912 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-739930 && echo "ha-739930" | sudo tee /etc/hostname
	I1204 20:08:35.730422   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-739930
	
	I1204 20:08:35.730456   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:35.732817   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.733139   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:35.733168   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.733310   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:35.733480   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:35.733651   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:35.733802   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:35.733983   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:08:35.734154   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1204 20:08:35.734171   27912 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-739930' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-739930/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-739930' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 20:08:35.843780   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 20:08:35.843821   27912 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 20:08:35.843865   27912 buildroot.go:174] setting up certificates
	I1204 20:08:35.843880   27912 provision.go:84] configureAuth start
	I1204 20:08:35.843894   27912 main.go:141] libmachine: (ha-739930) Calling .GetMachineName
	I1204 20:08:35.844232   27912 main.go:141] libmachine: (ha-739930) Calling .GetIP
	I1204 20:08:35.847046   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.847366   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:35.847411   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.847570   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:35.849830   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.850112   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:35.850131   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.850320   27912 provision.go:143] copyHostCerts
	I1204 20:08:35.850348   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 20:08:35.850382   27912 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 20:08:35.850391   27912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 20:08:35.850460   27912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 20:08:35.850567   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 20:08:35.850595   27912 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 20:08:35.850604   27912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 20:08:35.850645   27912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 20:08:35.850723   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 20:08:35.850741   27912 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 20:08:35.850748   27912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 20:08:35.850772   27912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 20:08:35.850823   27912 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.ha-739930 san=[127.0.0.1 192.168.39.183 ha-739930 localhost minikube]
	I1204 20:08:35.983720   27912 provision.go:177] copyRemoteCerts
	I1204 20:08:35.983786   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 20:08:35.983810   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:35.986241   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.986583   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:35.986614   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:35.986772   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:35.986960   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:35.987093   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:35.987240   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:08:36.068879   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1204 20:08:36.068950   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1204 20:08:36.091202   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1204 20:08:36.091259   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 20:08:36.112918   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1204 20:08:36.112998   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 20:08:36.134856   27912 provision.go:87] duration metric: took 290.963844ms to configureAuth
	I1204 20:08:36.134887   27912 buildroot.go:189] setting minikube options for container-runtime
	I1204 20:08:36.135063   27912 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:08:36.135153   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:36.137760   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.138113   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:36.138138   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.138342   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:36.138505   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:36.138658   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:36.138779   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:36.138924   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:08:36.139114   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1204 20:08:36.139131   27912 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 20:08:36.346218   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
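The remote command shown above is a single shell pipeline that minikube composes on the host and executes over SSH. A minimal sketch of how such a command string could be assembled in Go; the helper name is hypothetical, and only the insecure-registry value is taken from the log:

package main

import "fmt"

// crioOptionsCmd builds the remote shell pipeline that writes the crio
// sysconfig drop-in and restarts the service (sketch, not minikube's code).
func crioOptionsCmd(insecureRegistry string) string {
	opts := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '", insecureRegistry)
	return fmt.Sprintf(`sudo mkdir -p /etc/sysconfig && printf %%s "
%s
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`, opts)
}

func main() {
	fmt.Println(crioOptionsCmd("10.96.0.0/12"))
}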
	
	I1204 20:08:36.346255   27912 main.go:141] libmachine: Checking connection to Docker...
	I1204 20:08:36.346283   27912 main.go:141] libmachine: (ha-739930) Calling .GetURL
	I1204 20:08:36.347448   27912 main.go:141] libmachine: (ha-739930) DBG | Using libvirt version 6000000
	I1204 20:08:36.349418   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.349723   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:36.349742   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.349920   27912 main.go:141] libmachine: Docker is up and running!
	I1204 20:08:36.349936   27912 main.go:141] libmachine: Reticulating splines...
	I1204 20:08:36.349943   27912 client.go:171] duration metric: took 24.3195237s to LocalClient.Create
	I1204 20:08:36.349963   27912 start.go:167] duration metric: took 24.319574814s to libmachine.API.Create "ha-739930"
	I1204 20:08:36.349976   27912 start.go:293] postStartSetup for "ha-739930" (driver="kvm2")
	I1204 20:08:36.349991   27912 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 20:08:36.350013   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:08:36.350205   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 20:08:36.350228   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:36.351979   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.352286   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:36.352313   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.352437   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:36.352594   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:36.352706   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:36.352816   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:08:36.432460   27912 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 20:08:36.436012   27912 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 20:08:36.436028   27912 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 20:08:36.436089   27912 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 20:08:36.436188   27912 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 20:08:36.436201   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> /etc/ssl/certs/177432.pem
	I1204 20:08:36.436304   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 20:08:36.444678   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 20:08:36.467397   27912 start.go:296] duration metric: took 117.407014ms for postStartSetup
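The local-asset step above follows minikube's filesync convention: anything under .minikube/files/<path> on the host is mirrored to /<path> inside the guest, which is why files/etc/ssl/certs/177432.pem lands at /etc/ssl/certs/177432.pem. A rough sketch of that mapping, not minikube's actual implementation:

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
)

// guestTargets maps each file under the local "files" root to an absolute
// path inside the guest (sketch of the filesync convention seen in the log).
func guestTargets(filesRoot string) (map[string]string, error) {
	targets := map[string]string{}
	err := filepath.WalkDir(filesRoot, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, relErr := filepath.Rel(filesRoot, path)
		if relErr != nil {
			return relErr
		}
		targets[path] = "/" + filepath.ToSlash(rel) // local asset -> guest path
		return nil
	})
	return targets, err
}

func main() {
	m, err := guestTargets("/home/jenkins/minikube-integration/19985-10581/.minikube/files")
	fmt.Println(m, err)
}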
	I1204 20:08:36.467437   27912 main.go:141] libmachine: (ha-739930) Calling .GetConfigRaw
	I1204 20:08:36.467977   27912 main.go:141] libmachine: (ha-739930) Calling .GetIP
	I1204 20:08:36.470186   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.470558   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:36.470586   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.470798   27912 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/config.json ...
	I1204 20:08:36.470974   27912 start.go:128] duration metric: took 24.458260215s to createHost
	I1204 20:08:36.470996   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:36.472973   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.473263   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:36.473284   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.473418   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:36.473574   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:36.473716   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:36.473887   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:36.474035   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:08:36.474202   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1204 20:08:36.474217   27912 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 20:08:36.575008   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733342916.551867748
	
	I1204 20:08:36.575023   27912 fix.go:216] guest clock: 1733342916.551867748
	I1204 20:08:36.575030   27912 fix.go:229] Guest: 2024-12-04 20:08:36.551867748 +0000 UTC Remote: 2024-12-04 20:08:36.470986638 +0000 UTC m=+24.568358011 (delta=80.88111ms)
	I1204 20:08:36.575056   27912 fix.go:200] guest clock delta is within tolerance: 80.88111ms
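The guest-clock check above compares the VM's `date +%s.%N` output against the host wall clock and accepts the drift if it falls inside a small tolerance. A minimal sketch of that comparison; the 2-second tolerance is an assumption for illustration, while the timestamps are the ones from the log:

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses "seconds.nanoseconds" as printed by `date +%s.%N`
// (nanoseconds are always 9 digits) and returns guest-minus-host drift.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	nsec := int64(0)
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Unix(sec, nsec).Sub(host), nil
}

func main() {
	host := time.Unix(1733342916, 470986638) // "Remote" timestamp from the log
	delta, _ := clockDelta("1733342916.551867748", host)
	within := math.Abs(delta.Seconds()) < 2.0 // assumed tolerance
	fmt.Printf("delta=%v within tolerance=%v\n", delta, within) // delta=80.88111ms
}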
	I1204 20:08:36.575080   27912 start.go:83] releasing machines lock for "ha-739930", held for 24.56242194s
	I1204 20:08:36.575103   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:08:36.575310   27912 main.go:141] libmachine: (ha-739930) Calling .GetIP
	I1204 20:08:36.577787   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.578087   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:36.578125   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.578233   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:08:36.578645   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:08:36.578807   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:08:36.578883   27912 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 20:08:36.578924   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:36.579001   27912 ssh_runner.go:195] Run: cat /version.json
	I1204 20:08:36.579018   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:36.581456   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.581787   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:36.581809   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.581864   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.581930   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:36.582100   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:36.582239   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:36.582276   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:36.582299   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:36.582396   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:08:36.582566   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:36.582713   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:36.582863   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:36.582989   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:08:36.675618   27912 ssh_runner.go:195] Run: systemctl --version
	I1204 20:08:36.681185   27912 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 20:08:36.833908   27912 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 20:08:36.839964   27912 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 20:08:36.840024   27912 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 20:08:36.855758   27912 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 20:08:36.855780   27912 start.go:495] detecting cgroup driver to use...
	I1204 20:08:36.855848   27912 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 20:08:36.870692   27912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 20:08:36.883541   27912 docker.go:217] disabling cri-docker service (if available) ...
	I1204 20:08:36.883596   27912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 20:08:36.896118   27912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 20:08:36.908920   27912 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 20:08:37.025056   27912 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 20:08:37.187310   27912 docker.go:233] disabling docker service ...
	I1204 20:08:37.187365   27912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 20:08:37.200934   27912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 20:08:37.212871   27912 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 20:08:37.332646   27912 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 20:08:37.440309   27912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 20:08:37.453353   27912 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 20:08:37.470970   27912 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 20:08:37.471030   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:08:37.480927   27912 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 20:08:37.481009   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:08:37.491149   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:08:37.500802   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:08:37.510374   27912 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 20:08:37.520079   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:08:37.529955   27912 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:08:37.545993   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
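The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so that cri-o uses the pinned pause image and the cgroupfs cgroup manager, with conmon placed in the pod cgroup. The same edits expressed as Go regexp replacements, as a sketch rather than minikube's actual code:

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf applies the equivalent of the sed edits from the log to the
// contents of the 02-crio.conf drop-in.
func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	// drop any existing conmon_cgroup line, then pin it to "pod" right after
	// the cgroup_manager setting
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")
	return conf
}

func main() {
	in := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
	fmt.Println(rewriteCrioConf(in, "registry.k8s.io/pause:3.10", "cgroupfs"))
}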
	I1204 20:08:37.555622   27912 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 20:08:37.564180   27912 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 20:08:37.564228   27912 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 20:08:37.576296   27912 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 20:08:37.585144   27912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 20:08:37.693931   27912 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 20:08:37.777449   27912 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 20:08:37.777509   27912 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 20:08:37.781553   27912 start.go:563] Will wait 60s for crictl version
	I1204 20:08:37.781604   27912 ssh_runner.go:195] Run: which crictl
	I1204 20:08:37.784811   27912 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 20:08:37.822634   27912 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 20:08:37.822702   27912 ssh_runner.go:195] Run: crio --version
	I1204 20:08:37.848190   27912 ssh_runner.go:195] Run: crio --version
	I1204 20:08:37.873431   27912 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 20:08:37.874606   27912 main.go:141] libmachine: (ha-739930) Calling .GetIP
	I1204 20:08:37.877259   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:37.877590   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:37.877619   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:37.877786   27912 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1204 20:08:37.881175   27912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 20:08:37.892903   27912 kubeadm.go:883] updating cluster {Name:ha-739930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 20:08:37.892996   27912 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 20:08:37.893068   27912 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 20:08:37.926070   27912 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1204 20:08:37.926123   27912 ssh_runner.go:195] Run: which lz4
	I1204 20:08:37.929507   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1204 20:08:37.929636   27912 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1204 20:08:37.933391   27912 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1204 20:08:37.933415   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1204 20:08:39.139354   27912 crio.go:462] duration metric: took 1.209791733s to copy over tarball
	I1204 20:08:39.139460   27912 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1204 20:08:41.096167   27912 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.956678939s)
	I1204 20:08:41.096191   27912 crio.go:469] duration metric: took 1.956790325s to extract the tarball
	I1204 20:08:41.096199   27912 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1204 20:08:41.132019   27912 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 20:08:41.174932   27912 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 20:08:41.174955   27912 cache_images.go:84] Images are preloaded, skipping loading
	I1204 20:08:41.174962   27912 kubeadm.go:934] updating node { 192.168.39.183 8443 v1.31.2 crio true true} ...
	I1204 20:08:41.175056   27912 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-739930 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.183
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 20:08:41.175118   27912 ssh_runner.go:195] Run: crio config
	I1204 20:08:41.217894   27912 cni.go:84] Creating CNI manager for ""
	I1204 20:08:41.217917   27912 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1204 20:08:41.217927   27912 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 20:08:41.217952   27912 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.183 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-739930 NodeName:ha-739930 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.183"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.183 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 20:08:41.218081   27912 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.183
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-739930"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.183"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.183"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
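The generated kubeadm config above uses the v1beta4 API, where component extraArgs are a list of name/value pairs rather than a flat map. A minimal text/template sketch that renders such a list; the struct and template are illustrative, not minikube's:

package main

import (
	"os"
	"text/template"
)

// ExtraArg is one name/value pair in the v1beta4 extraArgs list (illustrative).
type ExtraArg struct{ Name, Value string }

var tmpl = template.Must(template.New("extraArgs").Parse(`  extraArgs:
{{- range . }}
    - name: "{{ .Name }}"
      value: "{{ .Value }}"
{{- end }}
`))

func main() {
	args := []ExtraArg{
		{"allocate-node-cidrs", "true"},
		{"leader-elect", "false"},
	}
	tmpl.Execute(os.Stdout, args) // prints the same shape as the controllerManager block above
}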
	
	I1204 20:08:41.218111   27912 kube-vip.go:115] generating kube-vip config ...
	I1204 20:08:41.218165   27912 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1204 20:08:41.233083   27912 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1204 20:08:41.233174   27912 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
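kube-vip is written out as a static pod manifest, and the lb_enable setting above was turned on because the IPVS kernel modules loaded successfully (the modprobe a few lines earlier). A sketch of that probe; runRemote stands in for minikube's SSH runner and is not a real API:

package main

import (
	"fmt"
	"os/exec"
)

// ipvsAvailable reports whether the IPVS modules kube-vip's load balancer
// needs can be loaded on the target machine.
func ipvsAvailable(runRemote func(name string, args ...string) error) bool {
	return runRemote("sudo", "sh", "-c",
		"modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack") == nil
}

func main() {
	local := func(name string, args ...string) error {
		return exec.Command(name, args...).Run()
	}
	fmt.Println("kube-vip lb_enable =", ipvsAvailable(local))
}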
	I1204 20:08:41.233229   27912 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 20:08:41.242410   27912 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 20:08:41.242479   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1204 20:08:41.251172   27912 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1204 20:08:41.266346   27912 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 20:08:41.281669   27912 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1204 20:08:41.296753   27912 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1204 20:08:41.311501   27912 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1204 20:08:41.314975   27912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 20:08:41.325862   27912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 20:08:41.458198   27912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 20:08:41.473798   27912 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930 for IP: 192.168.39.183
	I1204 20:08:41.473814   27912 certs.go:194] generating shared ca certs ...
	I1204 20:08:41.473829   27912 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:08:41.473951   27912 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 20:08:41.473998   27912 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 20:08:41.474012   27912 certs.go:256] generating profile certs ...
	I1204 20:08:41.474071   27912 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.key
	I1204 20:08:41.474104   27912 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.crt with IP's: []
	I1204 20:08:41.679553   27912 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.crt ...
	I1204 20:08:41.679577   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.crt: {Name:mk3cb32626a63b25e9bcb53dbf57982e8c59176a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:08:41.679756   27912 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.key ...
	I1204 20:08:41.679770   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.key: {Name:mk5952f9a719bbb3868bb675769b7b60346c6fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:08:41.679866   27912 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.84e45395
	I1204 20:08:41.679888   27912 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.84e45395 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.183 192.168.39.254]
	I1204 20:08:42.002083   27912 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.84e45395 ...
	I1204 20:08:42.002109   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.84e45395: {Name:mk5f9c87f1a9d17c216fb1ba76a871a4d200a2f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:08:42.002298   27912 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.84e45395 ...
	I1204 20:08:42.002314   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.84e45395: {Name:mkbc19c0135d212682268a777ef3380b2e19b0ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:08:42.002409   27912 certs.go:381] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.84e45395 -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt
	I1204 20:08:42.002519   27912 certs.go:385] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.84e45395 -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key
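The IP SANs on the apiserver certificate generated above include 10.96.0.1, the first host address of the ServiceCIDR 10.96.0.0/12 (the ClusterIP of the built-in kubernetes Service), alongside the node IP and the HA VIP 192.168.39.254. A small sketch of how that first address can be derived:

package main

import (
	"fmt"
	"net"
)

// firstServiceIP returns the network address of the service CIDR plus one,
// which is the ClusterIP the apiserver is reachable at inside the cluster.
func firstServiceIP(serviceCIDR string) (net.IP, error) {
	_, ipnet, err := net.ParseCIDR(serviceCIDR)
	if err != nil {
		return nil, err
	}
	ip := ipnet.IP.To4()
	if ip == nil {
		return nil, fmt.Errorf("IPv4 CIDR expected: %s", serviceCIDR)
	}
	first := make(net.IP, len(ip))
	copy(first, ip)
	first[len(first)-1]++ // network address + 1
	return first, nil
}

func main() {
	ip, err := firstServiceIP("10.96.0.0/12")
	fmt.Println(ip, err) // 10.96.0.1 <nil>
}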
	I1204 20:08:42.002573   27912 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key
	I1204 20:08:42.002587   27912 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.crt with IP's: []
	I1204 20:08:42.211018   27912 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.crt ...
	I1204 20:08:42.211049   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.crt: {Name:mkf1a9add2f9343bc4f70a7fa70f135cc4d00f4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:08:42.211250   27912 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key ...
	I1204 20:08:42.211265   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key: {Name:mkb8fc6229780db95a674383629b517d0cfa035d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:08:42.211361   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1204 20:08:42.211400   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1204 20:08:42.211422   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1204 20:08:42.211442   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1204 20:08:42.211459   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1204 20:08:42.211477   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1204 20:08:42.211491   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1204 20:08:42.211508   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1204 20:08:42.211575   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 20:08:42.211622   27912 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 20:08:42.211635   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 20:08:42.211671   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 20:08:42.211703   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 20:08:42.211734   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 20:08:42.211789   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 20:08:42.211826   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem -> /usr/share/ca-certificates/17743.pem
	I1204 20:08:42.211847   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> /usr/share/ca-certificates/177432.pem
	I1204 20:08:42.211866   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:08:42.212397   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 20:08:42.248354   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 20:08:42.283210   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 20:08:42.315759   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 20:08:42.337377   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1204 20:08:42.359236   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1204 20:08:42.380567   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 20:08:42.402068   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1204 20:08:42.423840   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 20:08:42.445088   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 20:08:42.466154   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 20:08:42.487261   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 20:08:42.502237   27912 ssh_runner.go:195] Run: openssl version
	I1204 20:08:42.507399   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 20:08:42.517386   27912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:08:42.521412   27912 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:08:42.521456   27912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:08:42.526682   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 20:08:42.536595   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 20:08:42.546422   27912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 20:08:42.550778   27912 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 20:08:42.550834   27912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 20:08:42.556366   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 20:08:42.567110   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 20:08:42.577648   27912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 20:08:42.581927   27912 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 20:08:42.581970   27912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 20:08:42.587418   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
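The openssl/ln steps above follow OpenSSL's c_rehash convention: a CA certificate is trusted system-wide once /etc/ssl/certs/<subject-hash>.0 symlinks to it. A sketch of the same steps in Go, meant to run inside the guest; the paths are taken from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of a CA certificate and
// creates the <hash>.0 symlink in the system certs directory.
func linkCACert(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	os.Remove(link) // ignore error; we only care that the final symlink exists
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	fmt.Println(link, err)
}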
	I1204 20:08:42.598017   27912 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 20:08:42.601905   27912 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1204 20:08:42.601960   27912 kubeadm.go:392] StartCluster: {Name:ha-739930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 20:08:42.602029   27912 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 20:08:42.602067   27912 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 20:08:42.638904   27912 cri.go:89] found id: ""
	I1204 20:08:42.638964   27912 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 20:08:42.648459   27912 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 20:08:42.657551   27912 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 20:08:42.666519   27912 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 20:08:42.666536   27912 kubeadm.go:157] found existing configuration files:
	
	I1204 20:08:42.666571   27912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 20:08:42.675036   27912 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 20:08:42.675086   27912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 20:08:42.683928   27912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 20:08:42.692253   27912 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 20:08:42.692304   27912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 20:08:42.701014   27912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 20:08:42.709166   27912 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 20:08:42.709204   27912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 20:08:42.718070   27912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 20:08:42.726526   27912 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 20:08:42.726584   27912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 20:08:42.735312   27912 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 20:08:42.947971   27912 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 20:08:54.006500   27912 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1204 20:08:54.006550   27912 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 20:08:54.006630   27912 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 20:08:54.006748   27912 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 20:08:54.006901   27912 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1204 20:08:54.006999   27912 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 20:08:54.008316   27912 out.go:235]   - Generating certificates and keys ...
	I1204 20:08:54.008397   27912 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 20:08:54.008459   27912 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 20:08:54.008548   27912 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1204 20:08:54.008635   27912 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1204 20:08:54.008695   27912 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1204 20:08:54.008737   27912 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1204 20:08:54.008784   27912 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1204 20:08:54.008879   27912 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-739930 localhost] and IPs [192.168.39.183 127.0.0.1 ::1]
	I1204 20:08:54.008924   27912 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1204 20:08:54.009023   27912 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-739930 localhost] and IPs [192.168.39.183 127.0.0.1 ::1]
	I1204 20:08:54.009133   27912 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1204 20:08:54.009245   27912 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1204 20:08:54.009321   27912 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1204 20:08:54.009403   27912 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 20:08:54.009487   27912 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 20:08:54.009570   27912 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1204 20:08:54.009644   27912 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 20:08:54.009733   27912 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 20:08:54.009810   27912 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 20:08:54.009903   27912 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 20:08:54.009962   27912 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 20:08:54.011358   27912 out.go:235]   - Booting up control plane ...
	I1204 20:08:54.011484   27912 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 20:08:54.011569   27912 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 20:08:54.011635   27912 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 20:08:54.011728   27912 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 20:08:54.011808   27912 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 20:08:54.011842   27912 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 20:08:54.011948   27912 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1204 20:08:54.012038   27912 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1204 20:08:54.012094   27912 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001462808s
	I1204 20:08:54.012172   27912 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1204 20:08:54.012262   27912 kubeadm.go:310] [api-check] The API server is healthy after 6.02019816s
	I1204 20:08:54.012392   27912 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1204 20:08:54.012536   27912 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1204 20:08:54.012619   27912 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1204 20:08:54.012799   27912 kubeadm.go:310] [mark-control-plane] Marking the node ha-739930 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1204 20:08:54.012886   27912 kubeadm.go:310] [bootstrap-token] Using token: borrl1.p9d68mzgpldkynyz
	I1204 20:08:54.013953   27912 out.go:235]   - Configuring RBAC rules ...
	I1204 20:08:54.014046   27912 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1204 20:08:54.014140   27912 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1204 20:08:54.014307   27912 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1204 20:08:54.014473   27912 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1204 20:08:54.014571   27912 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1204 20:08:54.014670   27912 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1204 20:08:54.014826   27912 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1204 20:08:54.014865   27912 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1204 20:08:54.014923   27912 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1204 20:08:54.014933   27912 kubeadm.go:310] 
	I1204 20:08:54.015010   27912 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1204 20:08:54.015019   27912 kubeadm.go:310] 
	I1204 20:08:54.015144   27912 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1204 20:08:54.015156   27912 kubeadm.go:310] 
	I1204 20:08:54.015195   27912 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1204 20:08:54.015270   27912 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1204 20:08:54.015320   27912 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1204 20:08:54.015326   27912 kubeadm.go:310] 
	I1204 20:08:54.015392   27912 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1204 20:08:54.015402   27912 kubeadm.go:310] 
	I1204 20:08:54.015442   27912 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1204 20:08:54.015451   27912 kubeadm.go:310] 
	I1204 20:08:54.015493   27912 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1204 20:08:54.015582   27912 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1204 20:08:54.015675   27912 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1204 20:08:54.015684   27912 kubeadm.go:310] 
	I1204 20:08:54.015786   27912 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1204 20:08:54.015895   27912 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1204 20:08:54.015905   27912 kubeadm.go:310] 
	I1204 20:08:54.016003   27912 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token borrl1.p9d68mzgpldkynyz \
	I1204 20:08:54.016093   27912 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 \
	I1204 20:08:54.016113   27912 kubeadm.go:310] 	--control-plane 
	I1204 20:08:54.016117   27912 kubeadm.go:310] 
	I1204 20:08:54.016205   27912 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1204 20:08:54.016217   27912 kubeadm.go:310] 
	I1204 20:08:54.016293   27912 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token borrl1.p9d68mzgpldkynyz \
	I1204 20:08:54.016397   27912 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 
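Note: the --discovery-token-ca-cert-hash in the join commands above is the SHA-256 digest of the cluster CA certificate's DER-encoded SubjectPublicKeyInfo. A minimal sketch of recomputing such a hash in Go, assuming the CA lives at the conventional /etc/kubernetes/pki/ca.crt path (that path is an assumption; it is not shown in this log):

// hash_ca.go: recompute a kubeadm discovery-token-ca-cert-hash (illustrative sketch).
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // assumed location
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA certificate.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}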
	I1204 20:08:54.016411   27912 cni.go:84] Creating CNI manager for ""
	I1204 20:08:54.016416   27912 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1204 20:08:54.017939   27912 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1204 20:08:54.019064   27912 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1204 20:08:54.023950   27912 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1204 20:08:54.023967   27912 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1204 20:08:54.041186   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1204 20:08:54.359013   27912 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 20:08:54.359083   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 20:08:54.359121   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-739930 minikube.k8s.io/updated_at=2024_12_04T20_08_54_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59 minikube.k8s.io/name=ha-739930 minikube.k8s.io/primary=true
	I1204 20:08:54.395990   27912 ops.go:34] apiserver oom_adj: -16
	I1204 20:08:54.548524   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 20:08:55.049558   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 20:08:55.548661   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 20:08:56.048619   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 20:08:56.549070   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 20:08:57.048848   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 20:08:57.549554   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 20:08:58.048830   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 20:08:58.161390   27912 kubeadm.go:1113] duration metric: took 3.80235484s to wait for elevateKubeSystemPrivileges
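Note: the repeated "kubectl get sa default" runs above are a simple poll; minikube re-issues the command roughly every 500ms until the default ServiceAccount exists, which is what elevateKubeSystemPrivileges waits for. A minimal sketch of that pattern, with the binary and kubeconfig paths taken from the log and the overall timeout assumed:

// wait_sa.go: poll for the default ServiceAccount via kubectl (illustrative sketch).
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.31.2/kubectl",
			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
		if cmd.Run() == nil {
			fmt.Println("default ServiceAccount is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default ServiceAccount")
}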
	I1204 20:08:58.161423   27912 kubeadm.go:394] duration metric: took 15.559467425s to StartCluster
	I1204 20:08:58.161444   27912 settings.go:142] acquiring lock: {Name:mk51df5708ef0b8fe125ead566b8d3e857234e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:08:58.161514   27912 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 20:08:58.162310   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/kubeconfig: {Name:mk338cb7deb77a607d0c199d94a556bdfd19bef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:08:58.162533   27912 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 20:08:58.162562   27912 start.go:241] waiting for startup goroutines ...
	I1204 20:08:58.162544   27912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1204 20:08:58.162557   27912 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 20:08:58.162652   27912 addons.go:69] Setting storage-provisioner=true in profile "ha-739930"
	I1204 20:08:58.162661   27912 addons.go:69] Setting default-storageclass=true in profile "ha-739930"
	I1204 20:08:58.162674   27912 addons.go:234] Setting addon storage-provisioner=true in "ha-739930"
	I1204 20:08:58.162693   27912 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-739930"
	I1204 20:08:58.162706   27912 host.go:66] Checking if "ha-739930" exists ...
	I1204 20:08:58.162718   27912 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:08:58.163133   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:08:58.163137   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:08:58.163158   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:08:58.163161   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:08:58.177830   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45307
	I1204 20:08:58.177986   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38189
	I1204 20:08:58.178299   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:08:58.178427   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:08:58.178779   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:08:58.178807   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:08:58.178981   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:08:58.179001   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:08:58.179143   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:08:58.179321   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:08:58.179506   27912 main.go:141] libmachine: (ha-739930) Calling .GetState
	I1204 20:08:58.179650   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:08:58.179676   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:08:58.181633   27912 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 20:08:58.181895   27912 kapi.go:59] client config for ha-739930: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.crt", KeyFile:"/home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.key", CAFile:"/home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1204 20:08:58.182308   27912 cert_rotation.go:140] Starting client certificate rotation controller
	I1204 20:08:58.182493   27912 addons.go:234] Setting addon default-storageclass=true in "ha-739930"
	I1204 20:08:58.182532   27912 host.go:66] Checking if "ha-739930" exists ...
	I1204 20:08:58.182790   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:08:58.182824   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:08:58.194517   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40647
	I1204 20:08:58.194972   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:08:58.195484   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:08:58.195512   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:08:58.195872   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:08:58.196070   27912 main.go:141] libmachine: (ha-739930) Calling .GetState
	I1204 20:08:58.197298   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45747
	I1204 20:08:58.197610   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:08:58.197777   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:08:58.198114   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:08:58.198138   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:08:58.198429   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:08:58.198834   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:08:58.198862   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:08:58.199309   27912 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 20:08:58.200430   27912 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 20:08:58.200452   27912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 20:08:58.200469   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:58.203367   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:58.203781   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:58.203808   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:58.203943   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:58.204099   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:58.204233   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:58.204358   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:08:58.213101   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33355
	I1204 20:08:58.213504   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:08:58.214031   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:08:58.214059   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:08:58.214380   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:08:58.214549   27912 main.go:141] libmachine: (ha-739930) Calling .GetState
	I1204 20:08:58.216016   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:08:58.216199   27912 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 20:08:58.216211   27912 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 20:08:58.216223   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:08:58.218960   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:58.219280   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:08:58.219317   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:08:58.219479   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:08:58.219661   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:08:58.219835   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:08:58.219997   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:08:58.277316   27912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1204 20:08:58.357820   27912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 20:08:58.374108   27912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 20:08:58.721001   27912 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
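Note: the sed pipeline a few lines up rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway. Reconstructed from that sed expression (not read back from the cluster), the relevant Corefile fragment ends up roughly like:

        log
        errors
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf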
	I1204 20:08:59.051895   27912 main.go:141] libmachine: Making call to close driver server
	I1204 20:08:59.051921   27912 main.go:141] libmachine: (ha-739930) Calling .Close
	I1204 20:08:59.051951   27912 main.go:141] libmachine: Making call to close driver server
	I1204 20:08:59.051972   27912 main.go:141] libmachine: (ha-739930) Calling .Close
	I1204 20:08:59.052204   27912 main.go:141] libmachine: Successfully made call to close driver server
	I1204 20:08:59.052222   27912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 20:08:59.052231   27912 main.go:141] libmachine: Making call to close driver server
	I1204 20:08:59.052241   27912 main.go:141] libmachine: (ha-739930) Calling .Close
	I1204 20:08:59.052293   27912 main.go:141] libmachine: Successfully made call to close driver server
	I1204 20:08:59.052317   27912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 20:08:59.052325   27912 main.go:141] libmachine: Making call to close driver server
	I1204 20:08:59.052322   27912 main.go:141] libmachine: (ha-739930) DBG | Closing plugin on server side
	I1204 20:08:59.052332   27912 main.go:141] libmachine: (ha-739930) Calling .Close
	I1204 20:08:59.052462   27912 main.go:141] libmachine: Successfully made call to close driver server
	I1204 20:08:59.052473   27912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 20:08:59.053776   27912 main.go:141] libmachine: (ha-739930) DBG | Closing plugin on server side
	I1204 20:08:59.053794   27912 main.go:141] libmachine: Successfully made call to close driver server
	I1204 20:08:59.053805   27912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 20:08:59.053870   27912 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1204 20:08:59.053894   27912 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1204 20:08:59.053992   27912 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1204 20:08:59.054003   27912 round_trippers.go:469] Request Headers:
	I1204 20:08:59.054010   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:08:59.054014   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:08:59.064602   27912 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1204 20:08:59.065317   27912 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1204 20:08:59.065335   27912 round_trippers.go:469] Request Headers:
	I1204 20:08:59.065347   27912 round_trippers.go:473]     Content-Type: application/json
	I1204 20:08:59.065354   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:08:59.065359   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:08:59.068638   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:08:59.068754   27912 main.go:141] libmachine: Making call to close driver server
	I1204 20:08:59.068772   27912 main.go:141] libmachine: (ha-739930) Calling .Close
	I1204 20:08:59.068971   27912 main.go:141] libmachine: Successfully made call to close driver server
	I1204 20:08:59.068989   27912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 20:08:59.069005   27912 main.go:141] libmachine: (ha-739930) DBG | Closing plugin on server side
	I1204 20:08:59.071139   27912 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1204 20:08:59.072109   27912 addons.go:510] duration metric: took 909.550558ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1204 20:08:59.072142   27912 start.go:246] waiting for cluster config update ...
	I1204 20:08:59.072151   27912 start.go:255] writing updated cluster config ...
	I1204 20:08:59.073463   27912 out.go:201] 
	I1204 20:08:59.074725   27912 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:08:59.074813   27912 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/config.json ...
	I1204 20:08:59.076300   27912 out.go:177] * Starting "ha-739930-m02" control-plane node in "ha-739930" cluster
	I1204 20:08:59.077339   27912 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 20:08:59.077359   27912 cache.go:56] Caching tarball of preloaded images
	I1204 20:08:59.077447   27912 preload.go:172] Found /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 20:08:59.077461   27912 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 20:08:59.077541   27912 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/config.json ...
	I1204 20:08:59.077723   27912 start.go:360] acquireMachinesLock for ha-739930-m02: {Name:mkf124e8b45170ae95981b24944344de6899c5b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 20:08:59.077776   27912 start.go:364] duration metric: took 30.982µs to acquireMachinesLock for "ha-739930-m02"
	I1204 20:08:59.077798   27912 start.go:93] Provisioning new machine with config: &{Name:ha-739930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 20:08:59.077880   27912 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1204 20:08:59.079261   27912 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 20:08:59.079340   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:08:59.079368   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:08:59.093684   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44915
	I1204 20:08:59.094078   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:08:59.094558   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:08:59.094579   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:08:59.094913   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:08:59.095089   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetMachineName
	I1204 20:08:59.095236   27912 main.go:141] libmachine: (ha-739930-m02) Calling .DriverName
	I1204 20:08:59.095406   27912 start.go:159] libmachine.API.Create for "ha-739930" (driver="kvm2")
	I1204 20:08:59.095437   27912 client.go:168] LocalClient.Create starting
	I1204 20:08:59.095465   27912 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem
	I1204 20:08:59.095493   27912 main.go:141] libmachine: Decoding PEM data...
	I1204 20:08:59.095505   27912 main.go:141] libmachine: Parsing certificate...
	I1204 20:08:59.095551   27912 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem
	I1204 20:08:59.095568   27912 main.go:141] libmachine: Decoding PEM data...
	I1204 20:08:59.095579   27912 main.go:141] libmachine: Parsing certificate...
	I1204 20:08:59.095595   27912 main.go:141] libmachine: Running pre-create checks...
	I1204 20:08:59.095602   27912 main.go:141] libmachine: (ha-739930-m02) Calling .PreCreateCheck
	I1204 20:08:59.095756   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetConfigRaw
	I1204 20:08:59.096074   27912 main.go:141] libmachine: Creating machine...
	I1204 20:08:59.096086   27912 main.go:141] libmachine: (ha-739930-m02) Calling .Create
	I1204 20:08:59.096214   27912 main.go:141] libmachine: (ha-739930-m02) Creating KVM machine...
	I1204 20:08:59.097249   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found existing default KVM network
	I1204 20:08:59.097426   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found existing private KVM network mk-ha-739930
	I1204 20:08:59.097515   27912 main.go:141] libmachine: (ha-739930-m02) Setting up store path in /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02 ...
	I1204 20:08:59.097549   27912 main.go:141] libmachine: (ha-739930-m02) Building disk image from file:///home/jenkins/minikube-integration/19985-10581/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1204 20:08:59.097603   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:08:59.097507   28291 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 20:08:59.097713   27912 main.go:141] libmachine: (ha-739930-m02) Downloading /home/jenkins/minikube-integration/19985-10581/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19985-10581/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1204 20:08:59.334730   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:08:59.334621   28291 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02/id_rsa...
	I1204 20:08:59.653553   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:08:59.653411   28291 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02/ha-739930-m02.rawdisk...
	I1204 20:08:59.653587   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Writing magic tar header
	I1204 20:08:59.653647   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Writing SSH key tar header
	I1204 20:08:59.653678   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:08:59.653561   28291 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02 ...
	I1204 20:08:59.653704   27912 main.go:141] libmachine: (ha-739930-m02) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02 (perms=drwx------)
	I1204 20:08:59.653726   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02
	I1204 20:08:59.653737   27912 main.go:141] libmachine: (ha-739930-m02) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube/machines (perms=drwxr-xr-x)
	I1204 20:08:59.653758   27912 main.go:141] libmachine: (ha-739930-m02) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube (perms=drwxr-xr-x)
	I1204 20:08:59.653773   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube/machines
	I1204 20:08:59.653785   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 20:08:59.653796   27912 main.go:141] libmachine: (ha-739930-m02) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581 (perms=drwxrwxr-x)
	I1204 20:08:59.653813   27912 main.go:141] libmachine: (ha-739930-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1204 20:08:59.653825   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581
	I1204 20:08:59.653838   27912 main.go:141] libmachine: (ha-739930-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1204 20:08:59.653850   27912 main.go:141] libmachine: (ha-739930-m02) Creating domain...
	I1204 20:08:59.653865   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1204 20:08:59.653875   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Checking permissions on dir: /home/jenkins
	I1204 20:08:59.653889   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Checking permissions on dir: /home
	I1204 20:08:59.653903   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Skipping /home - not owner
	I1204 20:08:59.654725   27912 main.go:141] libmachine: (ha-739930-m02) define libvirt domain using xml: 
	I1204 20:08:59.654740   27912 main.go:141] libmachine: (ha-739930-m02) <domain type='kvm'>
	I1204 20:08:59.654751   27912 main.go:141] libmachine: (ha-739930-m02)   <name>ha-739930-m02</name>
	I1204 20:08:59.654763   27912 main.go:141] libmachine: (ha-739930-m02)   <memory unit='MiB'>2200</memory>
	I1204 20:08:59.654775   27912 main.go:141] libmachine: (ha-739930-m02)   <vcpu>2</vcpu>
	I1204 20:08:59.654788   27912 main.go:141] libmachine: (ha-739930-m02)   <features>
	I1204 20:08:59.654796   27912 main.go:141] libmachine: (ha-739930-m02)     <acpi/>
	I1204 20:08:59.654806   27912 main.go:141] libmachine: (ha-739930-m02)     <apic/>
	I1204 20:08:59.654818   27912 main.go:141] libmachine: (ha-739930-m02)     <pae/>
	I1204 20:08:59.654837   27912 main.go:141] libmachine: (ha-739930-m02)     
	I1204 20:08:59.654847   27912 main.go:141] libmachine: (ha-739930-m02)   </features>
	I1204 20:08:59.654851   27912 main.go:141] libmachine: (ha-739930-m02)   <cpu mode='host-passthrough'>
	I1204 20:08:59.654858   27912 main.go:141] libmachine: (ha-739930-m02)   
	I1204 20:08:59.654862   27912 main.go:141] libmachine: (ha-739930-m02)   </cpu>
	I1204 20:08:59.654870   27912 main.go:141] libmachine: (ha-739930-m02)   <os>
	I1204 20:08:59.654874   27912 main.go:141] libmachine: (ha-739930-m02)     <type>hvm</type>
	I1204 20:08:59.654882   27912 main.go:141] libmachine: (ha-739930-m02)     <boot dev='cdrom'/>
	I1204 20:08:59.654892   27912 main.go:141] libmachine: (ha-739930-m02)     <boot dev='hd'/>
	I1204 20:08:59.654905   27912 main.go:141] libmachine: (ha-739930-m02)     <bootmenu enable='no'/>
	I1204 20:08:59.654916   27912 main.go:141] libmachine: (ha-739930-m02)   </os>
	I1204 20:08:59.654941   27912 main.go:141] libmachine: (ha-739930-m02)   <devices>
	I1204 20:08:59.654966   27912 main.go:141] libmachine: (ha-739930-m02)     <disk type='file' device='cdrom'>
	I1204 20:08:59.654982   27912 main.go:141] libmachine: (ha-739930-m02)       <source file='/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02/boot2docker.iso'/>
	I1204 20:08:59.654997   27912 main.go:141] libmachine: (ha-739930-m02)       <target dev='hdc' bus='scsi'/>
	I1204 20:08:59.655013   27912 main.go:141] libmachine: (ha-739930-m02)       <readonly/>
	I1204 20:08:59.655023   27912 main.go:141] libmachine: (ha-739930-m02)     </disk>
	I1204 20:08:59.655035   27912 main.go:141] libmachine: (ha-739930-m02)     <disk type='file' device='disk'>
	I1204 20:08:59.655049   27912 main.go:141] libmachine: (ha-739930-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1204 20:08:59.655067   27912 main.go:141] libmachine: (ha-739930-m02)       <source file='/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02/ha-739930-m02.rawdisk'/>
	I1204 20:08:59.655083   27912 main.go:141] libmachine: (ha-739930-m02)       <target dev='hda' bus='virtio'/>
	I1204 20:08:59.655095   27912 main.go:141] libmachine: (ha-739930-m02)     </disk>
	I1204 20:08:59.655104   27912 main.go:141] libmachine: (ha-739930-m02)     <interface type='network'>
	I1204 20:08:59.655117   27912 main.go:141] libmachine: (ha-739930-m02)       <source network='mk-ha-739930'/>
	I1204 20:08:59.655129   27912 main.go:141] libmachine: (ha-739930-m02)       <model type='virtio'/>
	I1204 20:08:59.655141   27912 main.go:141] libmachine: (ha-739930-m02)     </interface>
	I1204 20:08:59.655157   27912 main.go:141] libmachine: (ha-739930-m02)     <interface type='network'>
	I1204 20:08:59.655176   27912 main.go:141] libmachine: (ha-739930-m02)       <source network='default'/>
	I1204 20:08:59.655187   27912 main.go:141] libmachine: (ha-739930-m02)       <model type='virtio'/>
	I1204 20:08:59.655199   27912 main.go:141] libmachine: (ha-739930-m02)     </interface>
	I1204 20:08:59.655208   27912 main.go:141] libmachine: (ha-739930-m02)     <serial type='pty'>
	I1204 20:08:59.655231   27912 main.go:141] libmachine: (ha-739930-m02)       <target port='0'/>
	I1204 20:08:59.655250   27912 main.go:141] libmachine: (ha-739930-m02)     </serial>
	I1204 20:08:59.655268   27912 main.go:141] libmachine: (ha-739930-m02)     <console type='pty'>
	I1204 20:08:59.655284   27912 main.go:141] libmachine: (ha-739930-m02)       <target type='serial' port='0'/>
	I1204 20:08:59.655295   27912 main.go:141] libmachine: (ha-739930-m02)     </console>
	I1204 20:08:59.655302   27912 main.go:141] libmachine: (ha-739930-m02)     <rng model='virtio'>
	I1204 20:08:59.655315   27912 main.go:141] libmachine: (ha-739930-m02)       <backend model='random'>/dev/random</backend>
	I1204 20:08:59.655321   27912 main.go:141] libmachine: (ha-739930-m02)     </rng>
	I1204 20:08:59.655329   27912 main.go:141] libmachine: (ha-739930-m02)     
	I1204 20:08:59.655333   27912 main.go:141] libmachine: (ha-739930-m02)     
	I1204 20:08:59.655340   27912 main.go:141] libmachine: (ha-739930-m02)   </devices>
	I1204 20:08:59.655345   27912 main.go:141] libmachine: (ha-739930-m02) </domain>
	I1204 20:08:59.655362   27912 main.go:141] libmachine: (ha-739930-m02) 
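Note: the XML above is what minikube's kvm2 driver hands to libvirt to define the guest. A minimal sketch of that step, assuming the github.com/libvirt/libvirt-go bindings and the qemu:///system URI from the config dump earlier (the driver's real code is not reproduced here):

// define_domain.go: define and start a libvirt domain from XML (illustrative sketch).
package kvm

import (
	libvirt "github.com/libvirt/libvirt-go"
)

func defineAndStart(domainXML string) error {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return err
	}
	defer conn.Close()

	// Define a persistent domain from the generated XML...
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return err
	}
	defer dom.Free()

	// ...then boot it, which corresponds to the "Creating domain..." step below.
	return dom.Create()
}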
	I1204 20:08:59.661230   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:69:55:bb in network default
	I1204 20:08:59.661784   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:08:59.661806   27912 main.go:141] libmachine: (ha-739930-m02) Ensuring networks are active...
	I1204 20:08:59.662333   27912 main.go:141] libmachine: (ha-739930-m02) Ensuring network default is active
	I1204 20:08:59.662568   27912 main.go:141] libmachine: (ha-739930-m02) Ensuring network mk-ha-739930 is active
	I1204 20:08:59.662825   27912 main.go:141] libmachine: (ha-739930-m02) Getting domain xml...
	I1204 20:08:59.663438   27912 main.go:141] libmachine: (ha-739930-m02) Creating domain...
	I1204 20:09:00.864454   27912 main.go:141] libmachine: (ha-739930-m02) Waiting to get IP...
	I1204 20:09:00.865262   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:00.865678   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:00.865706   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:00.865644   28291 retry.go:31] will retry after 202.440812ms: waiting for machine to come up
	I1204 20:09:01.070038   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:01.070521   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:01.070539   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:01.070483   28291 retry.go:31] will retry after 379.96661ms: waiting for machine to come up
	I1204 20:09:01.452279   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:01.452670   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:01.452703   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:01.452620   28291 retry.go:31] will retry after 448.23669ms: waiting for machine to come up
	I1204 20:09:01.902848   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:01.903274   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:01.903301   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:01.903230   28291 retry.go:31] will retry after 590.399252ms: waiting for machine to come up
	I1204 20:09:02.495129   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:02.495572   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:02.495602   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:02.495522   28291 retry.go:31] will retry after 535.882434ms: waiting for machine to come up
	I1204 20:09:03.033125   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:03.033552   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:03.033572   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:03.033531   28291 retry.go:31] will retry after 698.598885ms: waiting for machine to come up
	I1204 20:09:03.733894   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:03.734321   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:03.734351   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:03.734276   28291 retry.go:31] will retry after 1.177854854s: waiting for machine to come up
	I1204 20:09:04.914541   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:04.914975   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:04.915005   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:04.914934   28291 retry.go:31] will retry after 1.093246259s: waiting for machine to come up
	I1204 20:09:06.010091   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:06.010517   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:06.010543   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:06.010478   28291 retry.go:31] will retry after 1.613080477s: waiting for machine to come up
	I1204 20:09:07.624874   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:07.625335   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:07.625364   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:07.625313   28291 retry.go:31] will retry after 2.249296346s: waiting for machine to come up
	I1204 20:09:09.875662   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:09.876187   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:09.876218   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:09.876124   28291 retry.go:31] will retry after 2.42642151s: waiting for machine to come up
	I1204 20:09:12.305633   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:12.306060   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:12.306085   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:12.306030   28291 retry.go:31] will retry after 2.221078432s: waiting for machine to come up
	I1204 20:09:14.529048   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:14.529558   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:14.529585   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:14.529522   28291 retry.go:31] will retry after 2.966790247s: waiting for machine to come up
	I1204 20:09:17.499601   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:17.500108   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find current IP address of domain ha-739930-m02 in network mk-ha-739930
	I1204 20:09:17.500137   27912 main.go:141] libmachine: (ha-739930-m02) DBG | I1204 20:09:17.500054   28291 retry.go:31] will retry after 4.394406199s: waiting for machine to come up
	I1204 20:09:21.898072   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:21.898515   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has current primary IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:21.898531   27912 main.go:141] libmachine: (ha-739930-m02) Found IP for machine: 192.168.39.216
	I1204 20:09:21.898543   27912 main.go:141] libmachine: (ha-739930-m02) Reserving static IP address...
	I1204 20:09:21.899016   27912 main.go:141] libmachine: (ha-739930-m02) DBG | unable to find host DHCP lease matching {name: "ha-739930-m02", mac: "52:54:00:91:b2:c1", ip: "192.168.39.216"} in network mk-ha-739930
	I1204 20:09:21.970499   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Getting to WaitForSSH function...
	I1204 20:09:21.970531   27912 main.go:141] libmachine: (ha-739930-m02) Reserved static IP address: 192.168.39.216
	I1204 20:09:21.970544   27912 main.go:141] libmachine: (ha-739930-m02) Waiting for SSH to be available...
	I1204 20:09:21.972885   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:21.973270   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:minikube Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:21.973299   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:21.973444   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Using SSH client type: external
	I1204 20:09:21.973472   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02/id_rsa (-rw-------)
	I1204 20:09:21.973507   27912 main.go:141] libmachine: (ha-739930-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.216 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 20:09:21.973526   27912 main.go:141] libmachine: (ha-739930-m02) DBG | About to run SSH command:
	I1204 20:09:21.973534   27912 main.go:141] libmachine: (ha-739930-m02) DBG | exit 0
	I1204 20:09:22.099805   27912 main.go:141] libmachine: (ha-739930-m02) DBG | SSH cmd err, output: <nil>: 
	I1204 20:09:22.100058   27912 main.go:141] libmachine: (ha-739930-m02) KVM machine creation complete!
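Note: the wait above is a retry loop; each failed IP lookup logs "will retry after ..." with a delay that grows (with some jitter) until the machine reports a DHCP lease. A minimal sketch of that backoff pattern; the concrete delays, cap, and lookup function are assumptions, not minikube's actual retry helper:

// wait_for_ip.go: retry with growing, jittered backoff (illustrative sketch).
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func waitForIP(lookupIP func() (string, error)) (string, error) {
	delay := 200 * time.Millisecond
	deadline := time.Now().Add(4 * time.Minute) // assumed overall timeout
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Grow the delay with a little jitter, capping it at a few seconds.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)+1))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
	return "", errors.New("timed out waiting for a machine IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("unable to find current IP address")
		}
		return "192.168.39.216", nil
	})
	fmt.Println(ip, err)
}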
	I1204 20:09:22.100415   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetConfigRaw
	I1204 20:09:22.101293   27912 main.go:141] libmachine: (ha-739930-m02) Calling .DriverName
	I1204 20:09:22.101487   27912 main.go:141] libmachine: (ha-739930-m02) Calling .DriverName
	I1204 20:09:22.101644   27912 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1204 20:09:22.101669   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetState
	I1204 20:09:22.102974   27912 main.go:141] libmachine: Detecting operating system of created instance...
	I1204 20:09:22.102992   27912 main.go:141] libmachine: Waiting for SSH to be available...
	I1204 20:09:22.103000   27912 main.go:141] libmachine: Getting to WaitForSSH function...
	I1204 20:09:22.103008   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHHostname
	I1204 20:09:22.105264   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.105562   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:22.105595   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.105759   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHPort
	I1204 20:09:22.105924   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:22.106031   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:22.106146   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHUsername
	I1204 20:09:22.106307   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:09:22.106556   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1204 20:09:22.106582   27912 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1204 20:09:22.210652   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
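Note: the probe above simply runs "exit 0" over the new connection to confirm sshd is up and the injected key works. A minimal sketch of that check using golang.org/x/crypto/ssh, with the host, user, and key path taken from the log and the timeout assumed:

// ssh_probe.go: run "exit 0" over SSH as a reachability check (illustrative sketch).
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // host key checking is disabled in the log, too
		Timeout:         10 * time.Second,            // assumed
	}
	client, err := ssh.Dial("tcp", "192.168.39.216:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// A zero exit status means the guest's sshd accepted our key and ran the command.
	if err := session.Run("exit 0"); err != nil {
		panic(err)
	}
	fmt.Println("SSH is available")
}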
	I1204 20:09:22.210674   27912 main.go:141] libmachine: Detecting the provisioner...
	I1204 20:09:22.210689   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHHostname
	I1204 20:09:22.213316   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.213633   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:22.213662   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.213775   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHPort
	I1204 20:09:22.213923   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:22.214102   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:22.214252   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHUsername
	I1204 20:09:22.214405   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:09:22.214561   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1204 20:09:22.214571   27912 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1204 20:09:22.320078   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1204 20:09:22.320145   27912 main.go:141] libmachine: found compatible host: buildroot
	I1204 20:09:22.320155   27912 main.go:141] libmachine: Provisioning with buildroot...
	I1204 20:09:22.320176   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetMachineName
	I1204 20:09:22.320420   27912 buildroot.go:166] provisioning hostname "ha-739930-m02"
	I1204 20:09:22.320451   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetMachineName
	I1204 20:09:22.320599   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHHostname
	I1204 20:09:22.322962   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.323306   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:22.323331   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.323525   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHPort
	I1204 20:09:22.323704   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:22.323837   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:22.323937   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHUsername
	I1204 20:09:22.324095   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:09:22.324248   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1204 20:09:22.324260   27912 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-739930-m02 && echo "ha-739930-m02" | sudo tee /etc/hostname
	I1204 20:09:22.442684   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-739930-m02
	
	I1204 20:09:22.442712   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHHostname
	I1204 20:09:22.445503   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.445841   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:22.445866   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.446028   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHPort
	I1204 20:09:22.446227   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:22.446390   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:22.446547   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHUsername
	I1204 20:09:22.446707   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:09:22.446886   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1204 20:09:22.446908   27912 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-739930-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-739930-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-739930-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 20:09:22.560132   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 20:09:22.560177   27912 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 20:09:22.560210   27912 buildroot.go:174] setting up certificates
	I1204 20:09:22.560227   27912 provision.go:84] configureAuth start
	I1204 20:09:22.560246   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetMachineName
	I1204 20:09:22.560519   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetIP
	I1204 20:09:22.563054   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.563443   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:22.563470   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.563600   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHHostname
	I1204 20:09:22.565613   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.565936   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:22.565961   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.566074   27912 provision.go:143] copyHostCerts
	I1204 20:09:22.566103   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 20:09:22.566138   27912 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 20:09:22.566151   27912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 20:09:22.566226   27912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 20:09:22.566301   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 20:09:22.566318   27912 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 20:09:22.566325   27912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 20:09:22.566349   27912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 20:09:22.566391   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 20:09:22.566409   27912 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 20:09:22.566415   27912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 20:09:22.566442   27912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 20:09:22.566488   27912 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.ha-739930-m02 san=[127.0.0.1 192.168.39.216 ha-739930-m02 localhost minikube]
	I1204 20:09:22.637792   27912 provision.go:177] copyRemoteCerts
	I1204 20:09:22.637844   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 20:09:22.637865   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHHostname
	I1204 20:09:22.640451   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.640844   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:22.640870   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.641017   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHPort
	I1204 20:09:22.641198   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:22.641358   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHUsername
	I1204 20:09:22.641490   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02/id_rsa Username:docker}
	I1204 20:09:22.721358   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1204 20:09:22.721454   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 20:09:22.745038   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1204 20:09:22.745117   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1204 20:09:22.767198   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1204 20:09:22.767272   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 20:09:22.788710   27912 provision.go:87] duration metric: took 228.465669ms to configureAuth
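Note: the three scp operations above place the docker-machine style CA, server cert, and server key on the new node before configureAuth completes. A minimal spot-check of those files, assuming the same guest paths shown in the scp lines (hypothetical commands, run inside the ha-739930-m02 VM):

	# Files copied by copyRemoteCerts above (paths from the scp lines)
	ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem
	# Confirm the server cert carries the SANs requested during generation
	# (127.0.0.1 192.168.39.216 ha-739930-m02 localhost minikube)
	openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'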
	I1204 20:09:22.788740   27912 buildroot.go:189] setting minikube options for container-runtime
	I1204 20:09:22.788919   27912 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:09:22.788987   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHHostname
	I1204 20:09:22.791733   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.792076   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:22.792099   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:22.792317   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHPort
	I1204 20:09:22.792506   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:22.792661   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:22.792775   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHUsername
	I1204 20:09:22.792909   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:09:22.793086   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1204 20:09:22.793106   27912 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 20:09:23.010014   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
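Note: the command above writes /etc/sysconfig/crio.minikube with --insecure-registry 10.96.0.0/12 (so registries exposed on the in-cluster service CIDR can be used without TLS verification) and restarts CRI-O in the same shell. A quick check that the drop-in landed and the daemon came back, assuming the same guest (hypothetical):

	cat /etc/sysconfig/crio.minikube
	systemctl is-active crio && echo "crio restarted ok"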
	I1204 20:09:23.010040   27912 main.go:141] libmachine: Checking connection to Docker...
	I1204 20:09:23.010051   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetURL
	I1204 20:09:23.011214   27912 main.go:141] libmachine: (ha-739930-m02) DBG | Using libvirt version 6000000
	I1204 20:09:23.013200   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.013524   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:23.013554   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.013737   27912 main.go:141] libmachine: Docker is up and running!
	I1204 20:09:23.013756   27912 main.go:141] libmachine: Reticulating splines...
	I1204 20:09:23.013764   27912 client.go:171] duration metric: took 23.918317311s to LocalClient.Create
	I1204 20:09:23.013791   27912 start.go:167] duration metric: took 23.918385611s to libmachine.API.Create "ha-739930"
	I1204 20:09:23.013802   27912 start.go:293] postStartSetup for "ha-739930-m02" (driver="kvm2")
	I1204 20:09:23.013810   27912 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 20:09:23.013826   27912 main.go:141] libmachine: (ha-739930-m02) Calling .DriverName
	I1204 20:09:23.014037   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 20:09:23.014061   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHHostname
	I1204 20:09:23.016336   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.016674   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:23.016696   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.016826   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHPort
	I1204 20:09:23.017001   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:23.017147   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHUsername
	I1204 20:09:23.017302   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02/id_rsa Username:docker}
	I1204 20:09:23.098690   27912 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 20:09:23.102672   27912 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 20:09:23.102692   27912 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 20:09:23.102751   27912 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 20:09:23.102837   27912 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 20:09:23.102850   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> /etc/ssl/certs/177432.pem
	I1204 20:09:23.102957   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 20:09:23.113316   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 20:09:23.137226   27912 start.go:296] duration metric: took 123.412538ms for postStartSetup
	I1204 20:09:23.137272   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetConfigRaw
	I1204 20:09:23.137827   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetIP
	I1204 20:09:23.140225   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.140510   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:23.140539   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.140708   27912 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/config.json ...
	I1204 20:09:23.140912   27912 start.go:128] duration metric: took 24.063021139s to createHost
	I1204 20:09:23.140935   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHHostname
	I1204 20:09:23.143463   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.143769   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:23.143788   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.143935   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHPort
	I1204 20:09:23.144107   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:23.144264   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:23.144405   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHUsername
	I1204 20:09:23.144585   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:09:23.144731   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1204 20:09:23.144740   27912 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 20:09:23.251984   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733342963.229753214
	
	I1204 20:09:23.252009   27912 fix.go:216] guest clock: 1733342963.229753214
	I1204 20:09:23.252019   27912 fix.go:229] Guest: 2024-12-04 20:09:23.229753214 +0000 UTC Remote: 2024-12-04 20:09:23.140925676 +0000 UTC m=+71.238297049 (delta=88.827538ms)
	I1204 20:09:23.252039   27912 fix.go:200] guest clock delta is within tolerance: 88.827538ms
	I1204 20:09:23.252046   27912 start.go:83] releasing machines lock for "ha-739930-m02", held for 24.174259167s
	I1204 20:09:23.252070   27912 main.go:141] libmachine: (ha-739930-m02) Calling .DriverName
	I1204 20:09:23.252303   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetIP
	I1204 20:09:23.254849   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.255234   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:23.255263   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.257539   27912 out.go:177] * Found network options:
	I1204 20:09:23.258745   27912 out.go:177]   - NO_PROXY=192.168.39.183
	W1204 20:09:23.259924   27912 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 20:09:23.259962   27912 main.go:141] libmachine: (ha-739930-m02) Calling .DriverName
	I1204 20:09:23.260454   27912 main.go:141] libmachine: (ha-739930-m02) Calling .DriverName
	I1204 20:09:23.260610   27912 main.go:141] libmachine: (ha-739930-m02) Calling .DriverName
	I1204 20:09:23.260694   27912 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 20:09:23.260738   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHHostname
	W1204 20:09:23.260771   27912 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 20:09:23.260841   27912 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 20:09:23.260863   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHHostname
	I1204 20:09:23.263151   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.263477   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:23.263505   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.263524   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.263671   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHPort
	I1204 20:09:23.263841   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:23.263988   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHUsername
	I1204 20:09:23.263998   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:23.264025   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:23.264114   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02/id_rsa Username:docker}
	I1204 20:09:23.264181   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHPort
	I1204 20:09:23.264329   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHKeyPath
	I1204 20:09:23.264459   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetSSHUsername
	I1204 20:09:23.264614   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m02/id_rsa Username:docker}
	I1204 20:09:23.488607   27912 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 20:09:23.493980   27912 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 20:09:23.494034   27912 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 20:09:23.509548   27912 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 20:09:23.509575   27912 start.go:495] detecting cgroup driver to use...
	I1204 20:09:23.509645   27912 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 20:09:23.525800   27912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 20:09:23.539440   27912 docker.go:217] disabling cri-docker service (if available) ...
	I1204 20:09:23.539502   27912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 20:09:23.552521   27912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 20:09:23.565606   27912 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 20:09:23.684851   27912 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 20:09:23.845149   27912 docker.go:233] disabling docker service ...
	I1204 20:09:23.845231   27912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 20:09:23.859120   27912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 20:09:23.871561   27912 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 20:09:23.987397   27912 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 20:09:24.126711   27912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 20:09:24.141506   27912 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 20:09:24.159151   27912 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 20:09:24.159228   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:09:24.170226   27912 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 20:09:24.170291   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:09:24.182530   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:09:24.192731   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:09:24.202617   27912 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 20:09:24.213736   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:09:24.224231   27912 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:09:24.240767   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
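Note: the sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. A sketch of a post-edit check, assuming the same drop-in path as in the log (hypothetical):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
		/etc/crio/crio.conf.d/02-crio.conf
	# Expected values written by the sed commands above:
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",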
	I1204 20:09:24.251003   27912 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 20:09:24.260142   27912 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 20:09:24.260204   27912 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 20:09:24.272434   27912 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
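Note: the sysctl probe above fails because the br_netfilter module is not loaded yet (the /proc/sys/net/bridge/ tree only exists once it is), which is why the run falls back to modprobe and then enables IPv4 forwarding. The equivalent manual recovery on a similar Buildroot guest would be (hypothetical):

	sudo modprobe br_netfilter
	sudo sysctl net.bridge.bridge-nf-call-iptables   # should now resolve
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'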
	I1204 20:09:24.282354   27912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 20:09:24.398398   27912 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 20:09:24.487789   27912 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 20:09:24.487861   27912 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 20:09:24.492488   27912 start.go:563] Will wait 60s for crictl version
	I1204 20:09:24.492560   27912 ssh_runner.go:195] Run: which crictl
	I1204 20:09:24.496257   27912 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 20:09:24.535274   27912 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 20:09:24.535361   27912 ssh_runner.go:195] Run: crio --version
	I1204 20:09:24.562604   27912 ssh_runner.go:195] Run: crio --version
	I1204 20:09:24.590689   27912 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 20:09:24.591986   27912 out.go:177]   - env NO_PROXY=192.168.39.183
	I1204 20:09:24.593151   27912 main.go:141] libmachine: (ha-739930-m02) Calling .GetIP
	I1204 20:09:24.595599   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:24.595887   27912 main.go:141] libmachine: (ha-739930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b2:c1", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:09:13 +0000 UTC Type:0 Mac:52:54:00:91:b2:c1 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-739930-m02 Clientid:01:52:54:00:91:b2:c1}
	I1204 20:09:24.595916   27912 main.go:141] libmachine: (ha-739930-m02) DBG | domain ha-739930-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:91:b2:c1 in network mk-ha-739930
	I1204 20:09:24.596077   27912 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1204 20:09:24.600001   27912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 20:09:24.611463   27912 mustload.go:65] Loading cluster: ha-739930
	I1204 20:09:24.611643   27912 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:09:24.611877   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:09:24.611903   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:09:24.627049   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34019
	I1204 20:09:24.627459   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:09:24.627903   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:09:24.627928   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:09:24.628257   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:09:24.628473   27912 main.go:141] libmachine: (ha-739930) Calling .GetState
	I1204 20:09:24.629895   27912 host.go:66] Checking if "ha-739930" exists ...
	I1204 20:09:24.630233   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:09:24.630265   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:09:24.644758   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46383
	I1204 20:09:24.645209   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:09:24.645667   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:09:24.645685   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:09:24.645969   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:09:24.646125   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:09:24.646291   27912 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930 for IP: 192.168.39.216
	I1204 20:09:24.646303   27912 certs.go:194] generating shared ca certs ...
	I1204 20:09:24.646316   27912 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:09:24.646428   27912 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 20:09:24.646465   27912 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 20:09:24.646474   27912 certs.go:256] generating profile certs ...
	I1204 20:09:24.646544   27912 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.key
	I1204 20:09:24.646568   27912 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.5b3a3f8e
	I1204 20:09:24.646583   27912 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.5b3a3f8e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.183 192.168.39.216 192.168.39.254]
	I1204 20:09:24.766401   27912 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.5b3a3f8e ...
	I1204 20:09:24.766431   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.5b3a3f8e: {Name:mkc714ddc3cd4c136e7a763dd7561d567af3f099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:09:24.766597   27912 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.5b3a3f8e ...
	I1204 20:09:24.766610   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.5b3a3f8e: {Name:mk0a2c7e9c0190313579e96374b5ec6b927ba043 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:09:24.766678   27912 certs.go:381] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.5b3a3f8e -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt
	I1204 20:09:24.766802   27912 certs.go:385] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.5b3a3f8e -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key
	I1204 20:09:24.766921   27912 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key
	I1204 20:09:24.766936   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1204 20:09:24.766949   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1204 20:09:24.766968   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1204 20:09:24.766979   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1204 20:09:24.766989   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1204 20:09:24.767002   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1204 20:09:24.767010   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1204 20:09:24.767022   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1204 20:09:24.767067   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 20:09:24.767093   27912 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 20:09:24.767102   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 20:09:24.767122   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 20:09:24.767144   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 20:09:24.767164   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 20:09:24.767200   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 20:09:24.767225   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:09:24.767238   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem -> /usr/share/ca-certificates/17743.pem
	I1204 20:09:24.767250   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> /usr/share/ca-certificates/177432.pem
	I1204 20:09:24.767278   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:09:24.770180   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:09:24.770542   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:09:24.770570   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:09:24.770712   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:09:24.770891   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:09:24.771044   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:09:24.771172   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:09:24.847687   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1204 20:09:24.853685   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1204 20:09:24.865057   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1204 20:09:24.869198   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1204 20:09:24.885878   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1204 20:09:24.889805   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1204 20:09:24.902654   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1204 20:09:24.906786   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1204 20:09:24.918187   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1204 20:09:24.922192   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1204 20:09:24.934730   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1204 20:09:24.938712   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1204 20:09:24.950279   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 20:09:24.974079   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 20:09:24.996598   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 20:09:25.018605   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 20:09:25.040436   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1204 20:09:25.062496   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1204 20:09:25.083915   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 20:09:25.105243   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1204 20:09:25.126515   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 20:09:25.148104   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 20:09:25.169580   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 20:09:25.190929   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1204 20:09:25.206338   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1204 20:09:25.221317   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1204 20:09:25.236210   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1204 20:09:25.251125   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1204 20:09:25.266383   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1204 20:09:25.281338   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1204 20:09:25.296542   27912 ssh_runner.go:195] Run: openssl version
	I1204 20:09:25.302513   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 20:09:25.313596   27912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:09:25.317903   27912 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:09:25.317952   27912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:09:25.323324   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 20:09:25.334576   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 20:09:25.344350   27912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 20:09:25.348476   27912 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 20:09:25.348531   27912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 20:09:25.353851   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 20:09:25.364310   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 20:09:25.375701   27912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 20:09:25.379775   27912 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 20:09:25.379825   27912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 20:09:25.385241   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 20:09:25.395365   27912 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 20:09:25.399560   27912 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1204 20:09:25.399615   27912 kubeadm.go:934] updating node {m02 192.168.39.216 8443 v1.31.2 crio true true} ...
	I1204 20:09:25.399711   27912 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-739930-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.216
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
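Note: the drop-in above pins the kubelet to the node's own IP and hostname override. Once the daemon-reload and start further down have run, the effective flags can be confirmed like this (hypothetical; drop-in path taken from the scp lines below):

	systemctl cat kubelet | grep -- '--node-ip=192.168.39.216'
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf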
	I1204 20:09:25.399742   27912 kube-vip.go:115] generating kube-vip config ...
	I1204 20:09:25.399777   27912 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1204 20:09:25.415868   27912 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1204 20:09:25.415924   27912 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
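Note: the static pod manifest above runs kube-vip with ARP advertisement (vip_arp=true) of the floating control-plane address 192.168.39.254 on eth0, load-balances API traffic on port 8443 (lb_enable/lb_port), and elects a leader via the plndr-cp-lock lease. A sketch of how the VIP can be verified once the node is up (hypothetical commands):

	# On whichever control-plane VM currently holds the kube-vip lease:
	ip addr show eth0 | grep 192.168.39.254
	# From any node: the apiserver should answer through the VIP
	curl -k https://192.168.39.254:8443/version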
	I1204 20:09:25.415967   27912 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 20:09:25.424465   27912 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1204 20:09:25.424517   27912 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1204 20:09:25.433122   27912 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1204 20:09:25.433145   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1204 20:09:25.433195   27912 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1204 20:09:25.433218   27912 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1204 20:09:25.433242   27912 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1204 20:09:25.437081   27912 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1204 20:09:25.437107   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1204 20:09:26.186226   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1204 20:09:26.186313   27912 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1204 20:09:26.190746   27912 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1204 20:09:26.190822   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1204 20:09:26.419618   27912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 20:09:26.443488   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1204 20:09:26.443611   27912 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1204 20:09:26.450947   27912 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1204 20:09:26.450982   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
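Note: the three binaries streamed above (kubectl, kubeadm, kubelet) can be sanity-checked in place before kubelet is started (hypothetical; versioned path from the log):

	/var/lib/minikube/binaries/v1.31.2/kubectl version --client
	/var/lib/minikube/binaries/v1.31.2/kubeadm version -o short
	/var/lib/minikube/binaries/v1.31.2/kubelet --version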
	I1204 20:09:26.739349   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1204 20:09:26.748265   27912 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1204 20:09:26.764007   27912 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 20:09:26.780904   27912 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1204 20:09:26.797527   27912 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1204 20:09:26.801091   27912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
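Note: together with the host.minikube.internal entry added earlier, the guest's /etc/hosts should now resolve both minikube-internal names. A quick resolution check (hypothetical; IPs taken from the log):

	getent hosts host.minikube.internal control-plane.minikube.internal
	# Expected:
	#   192.168.39.1     host.minikube.internal
	#   192.168.39.254   control-plane.minikube.internal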
	I1204 20:09:26.811509   27912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 20:09:26.923723   27912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 20:09:26.939490   27912 host.go:66] Checking if "ha-739930" exists ...
	I1204 20:09:26.939813   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:09:26.939861   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:09:26.954842   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37991
	I1204 20:09:26.955355   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:09:26.955871   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:09:26.955897   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:09:26.956236   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:09:26.956453   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:09:26.956610   27912 start.go:317] joinCluster: &{Name:ha-739930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I1204 20:09:26.956705   27912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1204 20:09:26.956726   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:09:26.959547   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:09:26.959914   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:09:26.959939   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:09:26.960071   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:09:26.960221   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:09:26.960358   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:09:26.960492   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:09:27.110244   27912 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 20:09:27.110295   27912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pq1xgw.4e78amhhenl1jnyw --discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-739930-m02 --control-plane --apiserver-advertise-address=192.168.39.216 --apiserver-bind-port=8443"
	I1204 20:09:48.018604   27912 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pq1xgw.4e78amhhenl1jnyw --discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-739930-m02 --control-plane --apiserver-advertise-address=192.168.39.216 --apiserver-bind-port=8443": (20.908287309s)
	I1204 20:09:48.018634   27912 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1204 20:09:48.626365   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-739930-m02 minikube.k8s.io/updated_at=2024_12_04T20_09_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59 minikube.k8s.io/name=ha-739930 minikube.k8s.io/primary=false
	I1204 20:09:48.747614   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-739930-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1204 20:09:48.847766   27912 start.go:319] duration metric: took 21.891152638s to joinCluster
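
Note: the join sequence logged above is: generate a fresh join command on the existing control plane (kubeadm token create --print-join-command), run kubeadm join ... --control-plane on the new machine over SSH, re-enable the kubelet, then label the node and drop its NoSchedule taint. Below is a minimal Go sketch of assembling that join command; it is illustrative only (not minikube's code), and the endpoint, token and hash are placeholders echoing the log.

    // buildjoin.go - sketch only; endpoint, token and hash are placeholders.
    package main

    import (
        "fmt"
        "strings"
    )

    // buildControlPlaneJoin assembles a kubeadm join invocation with the same
    // flags that appear in the log above.
    func buildControlPlaneJoin(endpoint, token, caHash, nodeName, advertiseIP string) string {
        return strings.Join([]string{
            "kubeadm", "join", endpoint,
            "--token", token,
            "--discovery-token-ca-cert-hash", caHash,
            "--ignore-preflight-errors=all",
            "--cri-socket", "unix:///var/run/crio/crio.sock",
            "--node-name=" + nodeName,
            "--control-plane",
            "--apiserver-advertise-address=" + advertiseIP,
            "--apiserver-bind-port=8443",
        }, " ")
    }

    func main() {
        fmt.Println(buildControlPlaneJoin(
            "control-plane.minikube.internal:8443",
            "<token>", "sha256:<hash>", "ha-739930-m02", "192.168.39.216"))
    }
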
	I1204 20:09:48.847828   27912 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 20:09:48.848176   27912 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:09:48.849095   27912 out.go:177] * Verifying Kubernetes components...
	I1204 20:09:48.850328   27912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 20:09:49.112006   27912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 20:09:49.157177   27912 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 20:09:49.157538   27912 kapi.go:59] client config for ha-739930: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.crt", KeyFile:"/home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.key", CAFile:"/home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1204 20:09:49.157630   27912 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.183:8443
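
Note: kapi.go above builds an API client from the profile's client certificate, key and CA, and kubeadm.go then replaces the stale HA VIP host (192.168.39.254) with the reachable control-plane address (192.168.39.183). A rough client-go sketch of that configuration follows; file paths are placeholders, not the real profile layout.

    // clientcfg.go - illustrative only; paths are placeholders.
    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg := &rest.Config{
            // Overrides the stale VIP host with the first control plane's
            // address, as kubeadm.go does in the log above.
            Host: "https://192.168.39.183:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: "/path/to/profiles/ha-739930/client.crt",
                KeyFile:  "/path/to/profiles/ha-739930/client.key",
                CAFile:   "/path/to/ca.crt",
            },
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Printf("clientset created: %T\n", cs)
    }
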
	I1204 20:09:49.157883   27912 node_ready.go:35] waiting up to 6m0s for node "ha-739930-m02" to be "Ready" ...
	I1204 20:09:49.158009   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:49.158021   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:49.158035   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:49.158045   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:49.168058   27912 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1204 20:09:49.658898   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:49.658922   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:49.658932   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:49.658943   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:49.667464   27912 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1204 20:09:50.158380   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:50.158399   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:50.158413   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:50.158419   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:50.171364   27912 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1204 20:09:50.658199   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:50.658226   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:50.658233   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:50.658237   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:50.663401   27912 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 20:09:51.159112   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:51.159137   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:51.159148   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:51.159156   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:51.162480   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:51.163075   27912 node_ready.go:53] node "ha-739930-m02" has status "Ready":"False"
	I1204 20:09:51.658265   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:51.658294   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:51.658304   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:51.658310   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:51.661298   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:09:52.158591   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:52.158614   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:52.158623   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:52.158627   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:52.161933   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:52.658479   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:52.658500   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:52.658508   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:52.658513   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:52.661537   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:53.158361   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:53.158384   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:53.158394   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:53.158402   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:53.161578   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:53.658404   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:53.658425   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:53.658433   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:53.658437   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:53.661364   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:09:53.662003   27912 node_ready.go:53] node "ha-739930-m02" has status "Ready":"False"
	I1204 20:09:54.158610   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:54.158635   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:54.158645   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:54.158651   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:54.162217   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:54.658074   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:54.658094   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:54.658102   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:54.658106   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:54.661918   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:55.158589   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:55.158611   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:55.158619   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:55.158624   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:55.161786   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:55.658906   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:55.658929   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:55.658937   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:55.658941   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:55.662357   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:55.663184   27912 node_ready.go:53] node "ha-739930-m02" has status "Ready":"False"
	I1204 20:09:56.158490   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:56.158517   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:56.158528   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:56.158533   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:56.258326   27912 round_trippers.go:574] Response Status: 200 OK in 99 milliseconds
	I1204 20:09:56.658232   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:56.658254   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:56.658264   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:56.658270   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:56.661245   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:09:57.158358   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:57.158380   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:57.158388   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:57.158392   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:57.162043   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:57.658188   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:57.658212   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:57.658223   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:57.658232   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:57.661717   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:58.158679   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:58.158701   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:58.158708   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:58.158713   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:58.162634   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:58.163161   27912 node_ready.go:53] node "ha-739930-m02" has status "Ready":"False"
	I1204 20:09:58.658856   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:58.658882   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:58.658900   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:58.658907   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:58.662596   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:59.158835   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:59.158862   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:59.158873   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:59.158880   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:59.162669   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:09:59.658183   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:09:59.658215   27912 round_trippers.go:469] Request Headers:
	I1204 20:09:59.658226   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:09:59.658231   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:09:59.661879   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:00.158851   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:00.158875   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:00.158883   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:00.158888   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:00.162790   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:00.163321   27912 node_ready.go:53] node "ha-739930-m02" has status "Ready":"False"
	I1204 20:10:00.658562   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:00.658590   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:00.658601   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:00.658607   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:00.676721   27912 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I1204 20:10:01.159007   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:01.159027   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:01.159035   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:01.159038   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:01.162909   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:01.658124   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:01.658161   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:01.658184   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:01.658188   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:01.662301   27912 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 20:10:02.158692   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:02.158716   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:02.158727   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:02.158732   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:02.162067   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:02.659042   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:02.659064   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:02.659071   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:02.659075   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:02.661911   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:10:02.662581   27912 node_ready.go:53] node "ha-739930-m02" has status "Ready":"False"
	I1204 20:10:03.159115   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:03.159145   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:03.159158   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:03.159165   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:03.162607   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:03.658246   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:03.658270   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:03.658278   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:03.658282   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:03.661511   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:04.158942   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:04.158970   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:04.158979   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:04.158983   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:04.161958   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:10:04.658955   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:04.658979   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:04.658987   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:04.658991   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:04.662295   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:04.662958   27912 node_ready.go:53] node "ha-739930-m02" has status "Ready":"False"
	I1204 20:10:05.158173   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:05.158194   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:05.158203   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:05.158207   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:05.161194   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:10:05.658134   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:05.658157   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:05.658165   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:05.658168   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:05.661616   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:06.158855   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:06.158879   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:06.158887   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:06.158891   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:06.164708   27912 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 20:10:06.658461   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:06.658483   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:06.658491   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:06.658496   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:06.661810   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:07.158647   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:07.158674   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:07.158686   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:07.158690   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:07.161793   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:07.162345   27912 node_ready.go:53] node "ha-739930-m02" has status "Ready":"False"
	I1204 20:10:07.658727   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:07.658752   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:07.658760   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:07.658764   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:07.661982   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:08.158999   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:08.159025   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.159037   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.159043   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.162388   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:08.162849   27912 node_ready.go:49] node "ha-739930-m02" has status "Ready":"True"
	I1204 20:10:08.162868   27912 node_ready.go:38] duration metric: took 19.004941155s for node "ha-739930-m02" to be "Ready" ...
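
Note: the preceding block is a roughly 500 ms poll of GET /api/v1/nodes/ha-739930-m02 until the node reports Ready. Below is a minimal client-go sketch of the same loop; the kubeconfig path, node name and timeout are placeholders, not minikube's implementation.

    // nodeready.go - sketch of the Ready poll seen in the log above.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the named node's Ready condition is True.
    func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) bool {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        for ctx.Err() == nil {
            if nodeReady(ctx, cs, "ha-739930-m02") {
                fmt.Println("node is Ready")
                return
            }
            time.Sleep(500 * time.Millisecond) // matches the cadence in the log
        }
        fmt.Println("timed out waiting for node to become Ready")
    }
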
	I1204 20:10:08.162878   27912 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 20:10:08.162968   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1204 20:10:08.162977   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.162984   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.162987   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.167331   27912 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 20:10:08.173856   27912 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7kbgr" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.173935   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-7kbgr
	I1204 20:10:08.173944   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.173953   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.173958   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.176715   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:10:08.177374   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:10:08.177387   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.177395   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.177400   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.179818   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:10:08.180446   27912 pod_ready.go:93] pod "coredns-7c65d6cfc9-7kbgr" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:08.180466   27912 pod_ready.go:82] duration metric: took 6.589083ms for pod "coredns-7c65d6cfc9-7kbgr" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.180478   27912 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8kztf" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.180546   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-8kztf
	I1204 20:10:08.180556   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.180569   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.180577   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.183177   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:10:08.183821   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:10:08.183836   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.183842   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.183847   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.186093   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:10:08.186600   27912 pod_ready.go:93] pod "coredns-7c65d6cfc9-8kztf" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:08.186617   27912 pod_ready.go:82] duration metric: took 6.131706ms for pod "coredns-7c65d6cfc9-8kztf" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.186628   27912 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.186691   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-739930
	I1204 20:10:08.186703   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.186713   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.186721   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.188940   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:10:08.189382   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:10:08.189398   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.189414   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.189420   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.191367   27912 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 20:10:08.191803   27912 pod_ready.go:93] pod "etcd-ha-739930" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:08.191818   27912 pod_ready.go:82] duration metric: took 5.18298ms for pod "etcd-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.191825   27912 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.191870   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-739930-m02
	I1204 20:10:08.191877   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.191884   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.191887   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.193844   27912 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 20:10:08.194287   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:08.194299   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.194306   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.194310   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.196400   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:10:08.196781   27912 pod_ready.go:93] pod "etcd-ha-739930-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:08.196797   27912 pod_ready.go:82] duration metric: took 4.966669ms for pod "etcd-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.196810   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.359125   27912 request.go:632] Waited for 162.263796ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-739930
	I1204 20:10:08.359211   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-739930
	I1204 20:10:08.359219   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.359230   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.359237   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.362569   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:08.559438   27912 request.go:632] Waited for 196.306856ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:10:08.559514   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:10:08.559519   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.559526   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.559534   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.562128   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:10:08.562664   27912 pod_ready.go:93] pod "kube-apiserver-ha-739930" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:08.562679   27912 pod_ready.go:82] duration metric: took 365.86397ms for pod "kube-apiserver-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.562689   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.759755   27912 request.go:632] Waited for 197.00165ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-739930-m02
	I1204 20:10:08.759821   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-739930-m02
	I1204 20:10:08.759826   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.759834   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.759837   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.763106   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:08.959132   27912 request.go:632] Waited for 195.283542ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:08.959199   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:08.959204   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:08.959212   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:08.959216   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:08.962369   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:08.962948   27912 pod_ready.go:93] pod "kube-apiserver-ha-739930-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:08.962965   27912 pod_ready.go:82] duration metric: took 400.270135ms for pod "kube-apiserver-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:08.962974   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:09.159437   27912 request.go:632] Waited for 196.391636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-739930
	I1204 20:10:09.159487   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-739930
	I1204 20:10:09.159492   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:09.159502   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:09.159507   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:09.162708   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:09.359960   27912 request.go:632] Waited for 196.36752ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:10:09.360010   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:10:09.360014   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:09.360022   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:09.360026   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:09.362729   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:10:09.363473   27912 pod_ready.go:93] pod "kube-controller-manager-ha-739930" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:09.363492   27912 pod_ready.go:82] duration metric: took 400.512945ms for pod "kube-controller-manager-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:09.363502   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:09.559607   27912 request.go:632] Waited for 196.045629ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-739930-m02
	I1204 20:10:09.559663   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-739930-m02
	I1204 20:10:09.559668   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:09.559676   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:09.559683   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:09.563302   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:09.759860   27912 request.go:632] Waited for 195.862174ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:09.759930   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:09.759935   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:09.759943   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:09.759949   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:09.762988   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:09.763689   27912 pod_ready.go:93] pod "kube-controller-manager-ha-739930-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:09.763715   27912 pod_ready.go:82] duration metric: took 400.20496ms for pod "kube-controller-manager-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:09.763729   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gtw7d" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:09.959738   27912 request.go:632] Waited for 195.93307ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gtw7d
	I1204 20:10:09.959807   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gtw7d
	I1204 20:10:09.959812   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:09.959819   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:09.959824   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:09.963156   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:10.159198   27912 request.go:632] Waited for 195.305905ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:10.159270   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:10.159275   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:10.159283   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:10.159286   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:10.162529   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:10.163056   27912 pod_ready.go:93] pod "kube-proxy-gtw7d" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:10.163074   27912 pod_ready.go:82] duration metric: took 399.337655ms for pod "kube-proxy-gtw7d" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:10.163084   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tlhfv" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:10.359093   27912 request.go:632] Waited for 195.949947ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tlhfv
	I1204 20:10:10.359150   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tlhfv
	I1204 20:10:10.359172   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:10.359182   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:10.359192   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:10.362392   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:10.559558   27912 request.go:632] Waited for 196.399776ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:10:10.559639   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:10:10.559653   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:10.559664   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:10.559670   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:10.564370   27912 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 20:10:10.564877   27912 pod_ready.go:93] pod "kube-proxy-tlhfv" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:10.564896   27912 pod_ready.go:82] duration metric: took 401.805669ms for pod "kube-proxy-tlhfv" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:10.564906   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:10.759943   27912 request.go:632] Waited for 194.973279ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-739930
	I1204 20:10:10.760006   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-739930
	I1204 20:10:10.760013   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:10.760021   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:10.760027   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:10.763726   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:10.959656   27912 request.go:632] Waited for 195.375986ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:10:10.959714   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:10:10.959719   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:10.959726   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:10.959731   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:10.963524   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:10.964360   27912 pod_ready.go:93] pod "kube-scheduler-ha-739930" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:10.964375   27912 pod_ready.go:82] duration metric: took 399.464088ms for pod "kube-scheduler-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:10.964389   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:11.159456   27912 request.go:632] Waited for 194.987845ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-739930-m02
	I1204 20:10:11.159527   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-739930-m02
	I1204 20:10:11.159532   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:11.159539   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:11.159543   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:11.163395   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:11.359362   27912 request.go:632] Waited for 195.347282ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:11.359439   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:10:11.359446   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:11.359458   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:11.359467   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:11.362635   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:11.363122   27912 pod_ready.go:93] pod "kube-scheduler-ha-739930-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 20:10:11.363138   27912 pod_ready.go:82] duration metric: took 398.74121ms for pod "kube-scheduler-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:10:11.363148   27912 pod_ready.go:39] duration metric: took 3.200239096s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
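
Note: several requests above report "Waited for ... due to client-side throttling, not priority and fairness". That delay comes from client-go's client-side rate limiter, which falls back to roughly 5 QPS with a burst of 10 when rest.Config leaves QPS and Burst unset, so bursts of per-pod readiness GETs get spaced out. A sketch of where those knobs live follows; the values shown are illustrative, not what minikube uses.

    // throttle.go - illustrative QPS/Burst settings on a client-go rest.Config.
    package main

    import (
        "fmt"

        "k8s.io/client-go/rest"
    )

    func main() {
        cfg := &rest.Config{Host: "https://192.168.39.183:8443"}
        // Left at zero, client-go falls back to its defaults (about 5 QPS,
        // burst 10), which is what produces the throttling waits in the log.
        cfg.QPS = 50
        cfg.Burst = 100
        fmt.Printf("QPS=%v Burst=%v\n", cfg.QPS, cfg.Burst)
    }
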
	I1204 20:10:11.363164   27912 api_server.go:52] waiting for apiserver process to appear ...
	I1204 20:10:11.363207   27912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 20:10:11.377015   27912 api_server.go:72] duration metric: took 22.529160197s to wait for apiserver process to appear ...
	I1204 20:10:11.377034   27912 api_server.go:88] waiting for apiserver healthz status ...
	I1204 20:10:11.377052   27912 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I1204 20:10:11.380929   27912 api_server.go:279] https://192.168.39.183:8443/healthz returned 200:
	ok
	I1204 20:10:11.380976   27912 round_trippers.go:463] GET https://192.168.39.183:8443/version
	I1204 20:10:11.380983   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:11.380999   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:11.381003   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:11.381838   27912 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1204 20:10:11.381917   27912 api_server.go:141] control plane version: v1.31.2
	I1204 20:10:11.381931   27912 api_server.go:131] duration metric: took 4.890825ms to wait for apiserver health ...
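
Note: the apiserver health wait above issues GET /healthz and expects the literal body "ok", then reads /version for the control-plane version. A compact sketch of the same probe through a client-go clientset; the insecure TLS setting here is a placeholder for the certificate configuration shown earlier.

    // healthz.go - sketch of the /healthz and /version probes logged above.
    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg := &rest.Config{
            Host:            "https://192.168.39.183:8443",
            TLSClientConfig: rest.TLSClientConfig{Insecure: true}, // placeholder; use real certs in practice
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        raw, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
        if err != nil {
            panic(err)
        }
        fmt.Printf("healthz: %s\n", raw) // expect "ok"

        ver, err := cs.Discovery().ServerVersion()
        if err != nil {
            panic(err)
        }
        fmt.Println("control plane version:", ver.GitVersion) // e.g. v1.31.2
    }
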
	I1204 20:10:11.381937   27912 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 20:10:11.559327   27912 request.go:632] Waited for 177.330525ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1204 20:10:11.559453   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1204 20:10:11.559495   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:11.559519   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:11.559528   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:11.566679   27912 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1204 20:10:11.572558   27912 system_pods.go:59] 17 kube-system pods found
	I1204 20:10:11.572586   27912 system_pods.go:61] "coredns-7c65d6cfc9-7kbgr" [662019c2-29e8-4437-8b14-f9fbf1268d03] Running
	I1204 20:10:11.572592   27912 system_pods.go:61] "coredns-7c65d6cfc9-8kztf" [40363110-9dbd-47ae-8aec-70630543d005] Running
	I1204 20:10:11.572597   27912 system_pods.go:61] "etcd-ha-739930" [35305e9d-e464-498a-b2a7-6008dcaaf04c] Running
	I1204 20:10:11.572600   27912 system_pods.go:61] "etcd-ha-739930-m02" [b870f77d-f65a-4d00-b8da-27bf2f696d35] Running
	I1204 20:10:11.572604   27912 system_pods.go:61] "kindnet-8wsgw" [d8bc54cd-d100-43fa-bda8-28ee9b58b947] Running
	I1204 20:10:11.572607   27912 system_pods.go:61] "kindnet-z6v65" [233b2af5-60f4-4f70-a63f-f7238cfbc55c] Running
	I1204 20:10:11.572612   27912 system_pods.go:61] "kube-apiserver-ha-739930" [d1943e08-b292-4551-bcc7-a14adc4ec336] Running
	I1204 20:10:11.572617   27912 system_pods.go:61] "kube-apiserver-ha-739930-m02" [b05a68fa-e419-43b6-ae14-08dd1635b446] Running
	I1204 20:10:11.572623   27912 system_pods.go:61] "kube-controller-manager-ha-739930" [3db9ec12-4c55-4a78-bef1-4f4cf8f38ae0] Running
	I1204 20:10:11.572628   27912 system_pods.go:61] "kube-controller-manager-ha-739930-m02" [01426d54-9156-4288-b9ae-c639167795b4] Running
	I1204 20:10:11.572635   27912 system_pods.go:61] "kube-proxy-gtw7d" [4481a753-5064-41a6-8f2c-d4710b8ad7bb] Running
	I1204 20:10:11.572641   27912 system_pods.go:61] "kube-proxy-tlhfv" [2f01e7f6-5af2-490b-8a2c-266e1701c102] Running
	I1204 20:10:11.572646   27912 system_pods.go:61] "kube-scheduler-ha-739930" [cc1e6978-7082-494a-afce-e754a35e9b76] Running
	I1204 20:10:11.572651   27912 system_pods.go:61] "kube-scheduler-ha-739930-m02" [cd7d0a65-99e9-4377-9088-f2d7d7165982] Running
	I1204 20:10:11.572655   27912 system_pods.go:61] "kube-vip-ha-739930" [524e54ee-5407-44c3-a2e4-d029f7e6a003] Running
	I1204 20:10:11.572658   27912 system_pods.go:61] "kube-vip-ha-739930-m02" [77595bf0-7e49-4ead-98b0-e1cc5b8533d7] Running
	I1204 20:10:11.572661   27912 system_pods.go:61] "storage-provisioner" [84dfb457-b91f-4070-aa2a-9fbe4c6dd7c8] Running
	I1204 20:10:11.572670   27912 system_pods.go:74] duration metric: took 190.727819ms to wait for pod list to return data ...
	I1204 20:10:11.572678   27912 default_sa.go:34] waiting for default service account to be created ...
	I1204 20:10:11.759027   27912 request.go:632] Waited for 186.27116ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/default/serviceaccounts
	I1204 20:10:11.759095   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/default/serviceaccounts
	I1204 20:10:11.759100   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:11.759108   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:11.759113   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:11.763664   27912 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 20:10:11.763867   27912 default_sa.go:45] found service account: "default"
	I1204 20:10:11.763882   27912 default_sa.go:55] duration metric: took 191.195892ms for default service account to be created ...
	I1204 20:10:11.763890   27912 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 20:10:11.959431   27912 request.go:632] Waited for 195.47766ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1204 20:10:11.959540   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1204 20:10:11.959553   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:11.959560   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:11.959566   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:11.965051   27912 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 20:10:11.970022   27912 system_pods.go:86] 17 kube-system pods found
	I1204 20:10:11.970046   27912 system_pods.go:89] "coredns-7c65d6cfc9-7kbgr" [662019c2-29e8-4437-8b14-f9fbf1268d03] Running
	I1204 20:10:11.970051   27912 system_pods.go:89] "coredns-7c65d6cfc9-8kztf" [40363110-9dbd-47ae-8aec-70630543d005] Running
	I1204 20:10:11.970055   27912 system_pods.go:89] "etcd-ha-739930" [35305e9d-e464-498a-b2a7-6008dcaaf04c] Running
	I1204 20:10:11.970059   27912 system_pods.go:89] "etcd-ha-739930-m02" [b870f77d-f65a-4d00-b8da-27bf2f696d35] Running
	I1204 20:10:11.970067   27912 system_pods.go:89] "kindnet-8wsgw" [d8bc54cd-d100-43fa-bda8-28ee9b58b947] Running
	I1204 20:10:11.970071   27912 system_pods.go:89] "kindnet-z6v65" [233b2af5-60f4-4f70-a63f-f7238cfbc55c] Running
	I1204 20:10:11.970074   27912 system_pods.go:89] "kube-apiserver-ha-739930" [d1943e08-b292-4551-bcc7-a14adc4ec336] Running
	I1204 20:10:11.970078   27912 system_pods.go:89] "kube-apiserver-ha-739930-m02" [b05a68fa-e419-43b6-ae14-08dd1635b446] Running
	I1204 20:10:11.970082   27912 system_pods.go:89] "kube-controller-manager-ha-739930" [3db9ec12-4c55-4a78-bef1-4f4cf8f38ae0] Running
	I1204 20:10:11.970088   27912 system_pods.go:89] "kube-controller-manager-ha-739930-m02" [01426d54-9156-4288-b9ae-c639167795b4] Running
	I1204 20:10:11.970091   27912 system_pods.go:89] "kube-proxy-gtw7d" [4481a753-5064-41a6-8f2c-d4710b8ad7bb] Running
	I1204 20:10:11.970095   27912 system_pods.go:89] "kube-proxy-tlhfv" [2f01e7f6-5af2-490b-8a2c-266e1701c102] Running
	I1204 20:10:11.970098   27912 system_pods.go:89] "kube-scheduler-ha-739930" [cc1e6978-7082-494a-afce-e754a35e9b76] Running
	I1204 20:10:11.970100   27912 system_pods.go:89] "kube-scheduler-ha-739930-m02" [cd7d0a65-99e9-4377-9088-f2d7d7165982] Running
	I1204 20:10:11.970103   27912 system_pods.go:89] "kube-vip-ha-739930" [524e54ee-5407-44c3-a2e4-d029f7e6a003] Running
	I1204 20:10:11.970106   27912 system_pods.go:89] "kube-vip-ha-739930-m02" [77595bf0-7e49-4ead-98b0-e1cc5b8533d7] Running
	I1204 20:10:11.970114   27912 system_pods.go:89] "storage-provisioner" [84dfb457-b91f-4070-aa2a-9fbe4c6dd7c8] Running
	I1204 20:10:11.970124   27912 system_pods.go:126] duration metric: took 206.228874ms to wait for k8s-apps to be running ...
	I1204 20:10:11.970130   27912 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 20:10:11.970170   27912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 20:10:11.984252   27912 system_svc.go:56] duration metric: took 14.113655ms WaitForService to wait for kubelet
	I1204 20:10:11.984285   27912 kubeadm.go:582] duration metric: took 23.13642897s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 20:10:11.984305   27912 node_conditions.go:102] verifying NodePressure condition ...
	I1204 20:10:12.159992   27912 request.go:632] Waited for 175.622844ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes
	I1204 20:10:12.160074   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes
	I1204 20:10:12.160081   27912 round_trippers.go:469] Request Headers:
	I1204 20:10:12.160088   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:10:12.160092   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:10:12.163352   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:10:12.164036   27912 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 20:10:12.164057   27912 node_conditions.go:123] node cpu capacity is 2
	I1204 20:10:12.164070   27912 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 20:10:12.164075   27912 node_conditions.go:123] node cpu capacity is 2
	I1204 20:10:12.164081   27912 node_conditions.go:105] duration metric: took 179.770433ms to run NodePressure ...
	I1204 20:10:12.164096   27912 start.go:241] waiting for startup goroutines ...
	I1204 20:10:12.164129   27912 start.go:255] writing updated cluster config ...
	I1204 20:10:12.166221   27912 out.go:201] 
	I1204 20:10:12.167682   27912 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:10:12.167793   27912 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/config.json ...
	I1204 20:10:12.169433   27912 out.go:177] * Starting "ha-739930-m03" control-plane node in "ha-739930" cluster
	I1204 20:10:12.170619   27912 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 20:10:12.170641   27912 cache.go:56] Caching tarball of preloaded images
	I1204 20:10:12.170743   27912 preload.go:172] Found /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 20:10:12.170758   27912 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 20:10:12.170867   27912 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/config.json ...
	I1204 20:10:12.171047   27912 start.go:360] acquireMachinesLock for ha-739930-m03: {Name:mkf124e8b45170ae95981b24944344de6899c5b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 20:10:12.171095   27912 start.go:364] duration metric: took 28.989µs to acquireMachinesLock for "ha-739930-m03"
	I1204 20:10:12.171119   27912 start.go:93] Provisioning new machine with config: &{Name:ha-739930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 20:10:12.171232   27912 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1204 20:10:12.172689   27912 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 20:10:12.172776   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:10:12.172819   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:10:12.188562   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34093
	I1204 20:10:12.189008   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:10:12.189520   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:10:12.189541   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:10:12.189894   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:10:12.190074   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetMachineName
	I1204 20:10:12.190188   27912 main.go:141] libmachine: (ha-739930-m03) Calling .DriverName
	I1204 20:10:12.190394   27912 start.go:159] libmachine.API.Create for "ha-739930" (driver="kvm2")
	I1204 20:10:12.190426   27912 client.go:168] LocalClient.Create starting
	I1204 20:10:12.190471   27912 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem
	I1204 20:10:12.190508   27912 main.go:141] libmachine: Decoding PEM data...
	I1204 20:10:12.190530   27912 main.go:141] libmachine: Parsing certificate...
	I1204 20:10:12.190598   27912 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem
	I1204 20:10:12.190629   27912 main.go:141] libmachine: Decoding PEM data...
	I1204 20:10:12.190652   27912 main.go:141] libmachine: Parsing certificate...
	I1204 20:10:12.190679   27912 main.go:141] libmachine: Running pre-create checks...
	I1204 20:10:12.190691   27912 main.go:141] libmachine: (ha-739930-m03) Calling .PreCreateCheck
	I1204 20:10:12.190909   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetConfigRaw
	I1204 20:10:12.191309   27912 main.go:141] libmachine: Creating machine...
	I1204 20:10:12.191322   27912 main.go:141] libmachine: (ha-739930-m03) Calling .Create
	I1204 20:10:12.191476   27912 main.go:141] libmachine: (ha-739930-m03) Creating KVM machine...
	I1204 20:10:12.192652   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found existing default KVM network
	I1204 20:10:12.192779   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found existing private KVM network mk-ha-739930
	I1204 20:10:12.192908   27912 main.go:141] libmachine: (ha-739930-m03) Setting up store path in /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03 ...
	I1204 20:10:12.192934   27912 main.go:141] libmachine: (ha-739930-m03) Building disk image from file:///home/jenkins/minikube-integration/19985-10581/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1204 20:10:12.192988   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:12.192887   28697 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 20:10:12.193089   27912 main.go:141] libmachine: (ha-739930-m03) Downloading /home/jenkins/minikube-integration/19985-10581/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19985-10581/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1204 20:10:12.422847   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:12.422708   28697 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03/id_rsa...
	I1204 20:10:12.571024   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:12.570898   28697 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03/ha-739930-m03.rawdisk...
	I1204 20:10:12.571065   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Writing magic tar header
	I1204 20:10:12.571083   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Writing SSH key tar header
	I1204 20:10:12.571096   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:12.571045   28697 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03 ...
	I1204 20:10:12.571246   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03
	I1204 20:10:12.571291   27912 main.go:141] libmachine: (ha-739930-m03) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03 (perms=drwx------)
	I1204 20:10:12.571302   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube/machines
	I1204 20:10:12.571314   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 20:10:12.571323   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581
	I1204 20:10:12.571331   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1204 20:10:12.571339   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Checking permissions on dir: /home/jenkins
	I1204 20:10:12.571346   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Checking permissions on dir: /home
	I1204 20:10:12.571354   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Skipping /home - not owner
	I1204 20:10:12.571391   27912 main.go:141] libmachine: (ha-739930-m03) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube/machines (perms=drwxr-xr-x)
	I1204 20:10:12.571415   27912 main.go:141] libmachine: (ha-739930-m03) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube (perms=drwxr-xr-x)
	I1204 20:10:12.571432   27912 main.go:141] libmachine: (ha-739930-m03) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581 (perms=drwxrwxr-x)
	I1204 20:10:12.571447   27912 main.go:141] libmachine: (ha-739930-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1204 20:10:12.571458   27912 main.go:141] libmachine: (ha-739930-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1204 20:10:12.571477   27912 main.go:141] libmachine: (ha-739930-m03) Creating domain...
	I1204 20:10:12.572409   27912 main.go:141] libmachine: (ha-739930-m03) define libvirt domain using xml: 
	I1204 20:10:12.572438   27912 main.go:141] libmachine: (ha-739930-m03) <domain type='kvm'>
	I1204 20:10:12.572449   27912 main.go:141] libmachine: (ha-739930-m03)   <name>ha-739930-m03</name>
	I1204 20:10:12.572461   27912 main.go:141] libmachine: (ha-739930-m03)   <memory unit='MiB'>2200</memory>
	I1204 20:10:12.572474   27912 main.go:141] libmachine: (ha-739930-m03)   <vcpu>2</vcpu>
	I1204 20:10:12.572480   27912 main.go:141] libmachine: (ha-739930-m03)   <features>
	I1204 20:10:12.572490   27912 main.go:141] libmachine: (ha-739930-m03)     <acpi/>
	I1204 20:10:12.572496   27912 main.go:141] libmachine: (ha-739930-m03)     <apic/>
	I1204 20:10:12.572505   27912 main.go:141] libmachine: (ha-739930-m03)     <pae/>
	I1204 20:10:12.572511   27912 main.go:141] libmachine: (ha-739930-m03)     
	I1204 20:10:12.572522   27912 main.go:141] libmachine: (ha-739930-m03)   </features>
	I1204 20:10:12.572529   27912 main.go:141] libmachine: (ha-739930-m03)   <cpu mode='host-passthrough'>
	I1204 20:10:12.572539   27912 main.go:141] libmachine: (ha-739930-m03)   
	I1204 20:10:12.572549   27912 main.go:141] libmachine: (ha-739930-m03)   </cpu>
	I1204 20:10:12.572577   27912 main.go:141] libmachine: (ha-739930-m03)   <os>
	I1204 20:10:12.572599   27912 main.go:141] libmachine: (ha-739930-m03)     <type>hvm</type>
	I1204 20:10:12.572612   27912 main.go:141] libmachine: (ha-739930-m03)     <boot dev='cdrom'/>
	I1204 20:10:12.572622   27912 main.go:141] libmachine: (ha-739930-m03)     <boot dev='hd'/>
	I1204 20:10:12.572630   27912 main.go:141] libmachine: (ha-739930-m03)     <bootmenu enable='no'/>
	I1204 20:10:12.572640   27912 main.go:141] libmachine: (ha-739930-m03)   </os>
	I1204 20:10:12.572648   27912 main.go:141] libmachine: (ha-739930-m03)   <devices>
	I1204 20:10:12.572659   27912 main.go:141] libmachine: (ha-739930-m03)     <disk type='file' device='cdrom'>
	I1204 20:10:12.572673   27912 main.go:141] libmachine: (ha-739930-m03)       <source file='/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03/boot2docker.iso'/>
	I1204 20:10:12.572688   27912 main.go:141] libmachine: (ha-739930-m03)       <target dev='hdc' bus='scsi'/>
	I1204 20:10:12.572708   27912 main.go:141] libmachine: (ha-739930-m03)       <readonly/>
	I1204 20:10:12.572721   27912 main.go:141] libmachine: (ha-739930-m03)     </disk>
	I1204 20:10:12.572747   27912 main.go:141] libmachine: (ha-739930-m03)     <disk type='file' device='disk'>
	I1204 20:10:12.572758   27912 main.go:141] libmachine: (ha-739930-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1204 20:10:12.572766   27912 main.go:141] libmachine: (ha-739930-m03)       <source file='/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03/ha-739930-m03.rawdisk'/>
	I1204 20:10:12.572780   27912 main.go:141] libmachine: (ha-739930-m03)       <target dev='hda' bus='virtio'/>
	I1204 20:10:12.572788   27912 main.go:141] libmachine: (ha-739930-m03)     </disk>
	I1204 20:10:12.572792   27912 main.go:141] libmachine: (ha-739930-m03)     <interface type='network'>
	I1204 20:10:12.572798   27912 main.go:141] libmachine: (ha-739930-m03)       <source network='mk-ha-739930'/>
	I1204 20:10:12.572802   27912 main.go:141] libmachine: (ha-739930-m03)       <model type='virtio'/>
	I1204 20:10:12.572807   27912 main.go:141] libmachine: (ha-739930-m03)     </interface>
	I1204 20:10:12.572814   27912 main.go:141] libmachine: (ha-739930-m03)     <interface type='network'>
	I1204 20:10:12.572819   27912 main.go:141] libmachine: (ha-739930-m03)       <source network='default'/>
	I1204 20:10:12.572825   27912 main.go:141] libmachine: (ha-739930-m03)       <model type='virtio'/>
	I1204 20:10:12.572842   27912 main.go:141] libmachine: (ha-739930-m03)     </interface>
	I1204 20:10:12.572860   27912 main.go:141] libmachine: (ha-739930-m03)     <serial type='pty'>
	I1204 20:10:12.572872   27912 main.go:141] libmachine: (ha-739930-m03)       <target port='0'/>
	I1204 20:10:12.572883   27912 main.go:141] libmachine: (ha-739930-m03)     </serial>
	I1204 20:10:12.572904   27912 main.go:141] libmachine: (ha-739930-m03)     <console type='pty'>
	I1204 20:10:12.572914   27912 main.go:141] libmachine: (ha-739930-m03)       <target type='serial' port='0'/>
	I1204 20:10:12.572922   27912 main.go:141] libmachine: (ha-739930-m03)     </console>
	I1204 20:10:12.572932   27912 main.go:141] libmachine: (ha-739930-m03)     <rng model='virtio'>
	I1204 20:10:12.572945   27912 main.go:141] libmachine: (ha-739930-m03)       <backend model='random'>/dev/random</backend>
	I1204 20:10:12.572957   27912 main.go:141] libmachine: (ha-739930-m03)     </rng>
	I1204 20:10:12.572965   27912 main.go:141] libmachine: (ha-739930-m03)     
	I1204 20:10:12.572973   27912 main.go:141] libmachine: (ha-739930-m03)     
	I1204 20:10:12.572983   27912 main.go:141] libmachine: (ha-739930-m03)   </devices>
	I1204 20:10:12.572991   27912 main.go:141] libmachine: (ha-739930-m03) </domain>
	I1204 20:10:12.572996   27912 main.go:141] libmachine: (ha-739930-m03) 
	I1204 20:10:12.580033   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:71:b7:c8 in network default
	I1204 20:10:12.580713   27912 main.go:141] libmachine: (ha-739930-m03) Ensuring networks are active...
	I1204 20:10:12.580737   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:12.581680   27912 main.go:141] libmachine: (ha-739930-m03) Ensuring network default is active
	I1204 20:10:12.582031   27912 main.go:141] libmachine: (ha-739930-m03) Ensuring network mk-ha-739930 is active
	I1204 20:10:12.582464   27912 main.go:141] libmachine: (ha-739930-m03) Getting domain xml...
	I1204 20:10:12.583287   27912 main.go:141] libmachine: (ha-739930-m03) Creating domain...
	I1204 20:10:13.809969   27912 main.go:141] libmachine: (ha-739930-m03) Waiting to get IP...
	I1204 20:10:13.810804   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:13.811158   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:13.811215   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:13.811149   28697 retry.go:31] will retry after 211.474142ms: waiting for machine to come up
	I1204 20:10:14.024550   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:14.024996   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:14.025024   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:14.024958   28697 retry.go:31] will retry after 355.071975ms: waiting for machine to come up
	I1204 20:10:14.381391   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:14.381825   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:14.381857   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:14.381781   28697 retry.go:31] will retry after 319.974042ms: waiting for machine to come up
	I1204 20:10:14.703466   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:14.703910   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:14.703951   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:14.703877   28697 retry.go:31] will retry after 609.562735ms: waiting for machine to come up
	I1204 20:10:15.314561   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:15.315069   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:15.315101   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:15.315013   28697 retry.go:31] will retry after 486.973077ms: waiting for machine to come up
	I1204 20:10:15.803653   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:15.804185   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:15.804213   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:15.804126   28697 retry.go:31] will retry after 675.766149ms: waiting for machine to come up
	I1204 20:10:16.481967   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:16.482459   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:16.482489   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:16.482406   28697 retry.go:31] will retry after 1.174103834s: waiting for machine to come up
	I1204 20:10:17.658189   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:17.658580   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:17.658608   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:17.658533   28697 retry.go:31] will retry after 1.454065165s: waiting for machine to come up
	I1204 20:10:19.114276   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:19.114810   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:19.114839   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:19.114726   28697 retry.go:31] will retry after 1.181631433s: waiting for machine to come up
	I1204 20:10:20.297423   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:20.297826   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:20.297856   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:20.297775   28697 retry.go:31] will retry after 1.797113318s: waiting for machine to come up
	I1204 20:10:22.096493   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:22.096936   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:22.096963   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:22.096891   28697 retry.go:31] will retry after 2.640330643s: waiting for machine to come up
	I1204 20:10:24.740014   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:24.740549   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:24.740589   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:24.740509   28697 retry.go:31] will retry after 3.427854139s: waiting for machine to come up
	I1204 20:10:28.170039   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:28.170450   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:28.170480   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:28.170413   28697 retry.go:31] will retry after 3.100818386s: waiting for machine to come up
	I1204 20:10:31.273778   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:31.274339   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find current IP address of domain ha-739930-m03 in network mk-ha-739930
	I1204 20:10:31.274370   27912 main.go:141] libmachine: (ha-739930-m03) DBG | I1204 20:10:31.274261   28697 retry.go:31] will retry after 5.17411421s: waiting for machine to come up
	I1204 20:10:36.453055   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:36.453514   27912 main.go:141] libmachine: (ha-739930-m03) Found IP for machine: 192.168.39.176
	I1204 20:10:36.453546   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has current primary IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:36.453554   27912 main.go:141] libmachine: (ha-739930-m03) Reserving static IP address...
	I1204 20:10:36.453982   27912 main.go:141] libmachine: (ha-739930-m03) DBG | unable to find host DHCP lease matching {name: "ha-739930-m03", mac: "52:54:00:8f:55:42", ip: "192.168.39.176"} in network mk-ha-739930
	I1204 20:10:36.527779   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Getting to WaitForSSH function...
	I1204 20:10:36.527812   27912 main.go:141] libmachine: (ha-739930-m03) Reserved static IP address: 192.168.39.176
	I1204 20:10:36.527825   27912 main.go:141] libmachine: (ha-739930-m03) Waiting for SSH to be available...
	I1204 20:10:36.530460   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:36.530890   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:36.530918   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:36.531105   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Using SSH client type: external
	I1204 20:10:36.531134   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03/id_rsa (-rw-------)
	I1204 20:10:36.531171   27912 main.go:141] libmachine: (ha-739930-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.176 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 20:10:36.531193   27912 main.go:141] libmachine: (ha-739930-m03) DBG | About to run SSH command:
	I1204 20:10:36.531210   27912 main.go:141] libmachine: (ha-739930-m03) DBG | exit 0
	I1204 20:10:36.659229   27912 main.go:141] libmachine: (ha-739930-m03) DBG | SSH cmd err, output: <nil>: 
	I1204 20:10:36.659536   27912 main.go:141] libmachine: (ha-739930-m03) KVM machine creation complete!
	I1204 20:10:36.659863   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetConfigRaw
	I1204 20:10:36.660403   27912 main.go:141] libmachine: (ha-739930-m03) Calling .DriverName
	I1204 20:10:36.660622   27912 main.go:141] libmachine: (ha-739930-m03) Calling .DriverName
	I1204 20:10:36.660802   27912 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1204 20:10:36.660816   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetState
	I1204 20:10:36.662148   27912 main.go:141] libmachine: Detecting operating system of created instance...
	I1204 20:10:36.662160   27912 main.go:141] libmachine: Waiting for SSH to be available...
	I1204 20:10:36.662181   27912 main.go:141] libmachine: Getting to WaitForSSH function...
	I1204 20:10:36.662187   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHHostname
	I1204 20:10:36.664336   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:36.664681   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:36.664694   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:36.664829   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHPort
	I1204 20:10:36.664988   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:36.665140   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:36.665284   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHUsername
	I1204 20:10:36.665446   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:10:36.665639   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1204 20:10:36.665651   27912 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1204 20:10:36.774558   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 20:10:36.774575   27912 main.go:141] libmachine: Detecting the provisioner...
	I1204 20:10:36.774582   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHHostname
	I1204 20:10:36.777253   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:36.777655   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:36.777682   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:36.777862   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHPort
	I1204 20:10:36.778048   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:36.778224   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:36.778333   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHUsername
	I1204 20:10:36.778478   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:10:36.778662   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1204 20:10:36.778673   27912 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1204 20:10:36.891601   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1204 20:10:36.891668   27912 main.go:141] libmachine: found compatible host: buildroot
	I1204 20:10:36.891681   27912 main.go:141] libmachine: Provisioning with buildroot...
	I1204 20:10:36.891691   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetMachineName
	I1204 20:10:36.891891   27912 buildroot.go:166] provisioning hostname "ha-739930-m03"
	I1204 20:10:36.891918   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetMachineName
	I1204 20:10:36.892100   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHHostname
	I1204 20:10:36.894477   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:36.894866   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:36.894903   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:36.895026   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHPort
	I1204 20:10:36.895181   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:36.895327   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:36.895457   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHUsername
	I1204 20:10:36.895582   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:10:36.895780   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1204 20:10:36.895798   27912 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-739930-m03 && echo "ha-739930-m03" | sudo tee /etc/hostname
	I1204 20:10:37.022149   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-739930-m03
	
	I1204 20:10:37.022188   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHHostname
	I1204 20:10:37.024859   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.025302   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:37.025324   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.025555   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHPort
	I1204 20:10:37.025739   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:37.025923   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:37.026044   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHUsername
	I1204 20:10:37.026196   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:10:37.026355   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1204 20:10:37.026371   27912 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-739930-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-739930-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-739930-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 20:10:37.143730   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 20:10:37.143754   27912 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 20:10:37.143777   27912 buildroot.go:174] setting up certificates
	I1204 20:10:37.143788   27912 provision.go:84] configureAuth start
	I1204 20:10:37.143795   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetMachineName
	I1204 20:10:37.144053   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetIP
	I1204 20:10:37.146742   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.147064   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:37.147095   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.147234   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHHostname
	I1204 20:10:37.149352   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.149692   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:37.149719   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.149832   27912 provision.go:143] copyHostCerts
	I1204 20:10:37.149875   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 20:10:37.149914   27912 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 20:10:37.149926   27912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 20:10:37.150010   27912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 20:10:37.150120   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 20:10:37.150164   27912 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 20:10:37.150175   27912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 20:10:37.150216   27912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 20:10:37.150301   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 20:10:37.150325   27912 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 20:10:37.150331   27912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 20:10:37.150367   27912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 20:10:37.150468   27912 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.ha-739930-m03 san=[127.0.0.1 192.168.39.176 ha-739930-m03 localhost minikube]
	I1204 20:10:37.504595   27912 provision.go:177] copyRemoteCerts
	I1204 20:10:37.504652   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 20:10:37.504676   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHHostname
	I1204 20:10:37.507572   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.507995   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:37.508023   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.508251   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHPort
	I1204 20:10:37.508469   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:37.508628   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHUsername
	I1204 20:10:37.508752   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03/id_rsa Username:docker}
	I1204 20:10:37.592737   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1204 20:10:37.592815   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 20:10:37.614702   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1204 20:10:37.614759   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1204 20:10:37.636793   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1204 20:10:37.636856   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1204 20:10:37.657514   27912 provision.go:87] duration metric: took 513.715697ms to configureAuth
	I1204 20:10:37.657537   27912 buildroot.go:189] setting minikube options for container-runtime
	I1204 20:10:37.657776   27912 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:10:37.657846   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHHostname
	I1204 20:10:37.660375   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.660716   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:37.660743   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.660915   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHPort
	I1204 20:10:37.661101   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:37.661283   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:37.661394   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHUsername
	I1204 20:10:37.661530   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:10:37.661715   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1204 20:10:37.661731   27912 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 20:10:37.909620   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 20:10:37.909653   27912 main.go:141] libmachine: Checking connection to Docker...
	I1204 20:10:37.909661   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetURL
	I1204 20:10:37.911012   27912 main.go:141] libmachine: (ha-739930-m03) DBG | Using libvirt version 6000000
	I1204 20:10:37.913430   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.913836   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:37.913865   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.913996   27912 main.go:141] libmachine: Docker is up and running!
	I1204 20:10:37.914009   27912 main.go:141] libmachine: Reticulating splines...
	I1204 20:10:37.914014   27912 client.go:171] duration metric: took 25.723578899s to LocalClient.Create
	I1204 20:10:37.914034   27912 start.go:167] duration metric: took 25.723643031s to libmachine.API.Create "ha-739930"
	I1204 20:10:37.914045   27912 start.go:293] postStartSetup for "ha-739930-m03" (driver="kvm2")
	I1204 20:10:37.914058   27912 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 20:10:37.914082   27912 main.go:141] libmachine: (ha-739930-m03) Calling .DriverName
	I1204 20:10:37.914308   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 20:10:37.914329   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHHostname
	I1204 20:10:37.916698   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.917013   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:37.917037   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:37.917163   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHPort
	I1204 20:10:37.917355   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:37.917507   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHUsername
	I1204 20:10:37.917647   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03/id_rsa Username:docker}
	I1204 20:10:38.000720   27912 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 20:10:38.004659   27912 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 20:10:38.004677   27912 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 20:10:38.004732   27912 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 20:10:38.004797   27912 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 20:10:38.004805   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> /etc/ssl/certs/177432.pem
	I1204 20:10:38.004881   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 20:10:38.014138   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 20:10:38.035007   27912 start.go:296] duration metric: took 120.952939ms for postStartSetup
	I1204 20:10:38.035043   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetConfigRaw
	I1204 20:10:38.035625   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetIP
	I1204 20:10:38.038045   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:38.038404   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:38.038431   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:38.038707   27912 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/config.json ...
	I1204 20:10:38.038928   27912 start.go:128] duration metric: took 25.86768393s to createHost
	I1204 20:10:38.038955   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHHostname
	I1204 20:10:38.040921   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:38.041241   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:38.041260   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:38.041384   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHPort
	I1204 20:10:38.041567   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:38.041725   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:38.041870   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHUsername
	I1204 20:10:38.042033   27912 main.go:141] libmachine: Using SSH client type: native
	I1204 20:10:38.042234   27912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1204 20:10:38.042247   27912 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 20:10:38.147467   27912 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733343038.125898138
	
	I1204 20:10:38.147487   27912 fix.go:216] guest clock: 1733343038.125898138
	I1204 20:10:38.147494   27912 fix.go:229] Guest: 2024-12-04 20:10:38.125898138 +0000 UTC Remote: 2024-12-04 20:10:38.038942767 +0000 UTC m=+146.136314147 (delta=86.955371ms)
	I1204 20:10:38.147507   27912 fix.go:200] guest clock delta is within tolerance: 86.955371ms
	I1204 20:10:38.147511   27912 start.go:83] releasing machines lock for "ha-739930-m03", held for 25.976405222s
	I1204 20:10:38.147527   27912 main.go:141] libmachine: (ha-739930-m03) Calling .DriverName
	I1204 20:10:38.147758   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetIP
	I1204 20:10:38.150388   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:38.150780   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:38.150809   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:38.153038   27912 out.go:177] * Found network options:
	I1204 20:10:38.154623   27912 out.go:177]   - NO_PROXY=192.168.39.183,192.168.39.216
	W1204 20:10:38.155949   27912 proxy.go:119] fail to check proxy env: Error ip not in block
	W1204 20:10:38.155970   27912 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 20:10:38.155981   27912 main.go:141] libmachine: (ha-739930-m03) Calling .DriverName
	I1204 20:10:38.156494   27912 main.go:141] libmachine: (ha-739930-m03) Calling .DriverName
	I1204 20:10:38.156668   27912 main.go:141] libmachine: (ha-739930-m03) Calling .DriverName
	I1204 20:10:38.156762   27912 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 20:10:38.156817   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHHostname
	W1204 20:10:38.156874   27912 proxy.go:119] fail to check proxy env: Error ip not in block
	W1204 20:10:38.156896   27912 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 20:10:38.156981   27912 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 20:10:38.157003   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHHostname
	I1204 20:10:38.159414   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:38.159669   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:38.159823   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:38.159847   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:38.159966   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHPort
	I1204 20:10:38.160094   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:38.160122   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:38.160127   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:38.160279   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHPort
	I1204 20:10:38.160293   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHUsername
	I1204 20:10:38.160410   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03/id_rsa Username:docker}
	I1204 20:10:38.160424   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:10:38.160525   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHUsername
	I1204 20:10:38.160650   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03/id_rsa Username:docker}
	I1204 20:10:38.394150   27912 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 20:10:38.401145   27912 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 20:10:38.401209   27912 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 20:10:38.417195   27912 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 20:10:38.417223   27912 start.go:495] detecting cgroup driver to use...
	I1204 20:10:38.417296   27912 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 20:10:38.435131   27912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 20:10:38.448563   27912 docker.go:217] disabling cri-docker service (if available) ...
	I1204 20:10:38.448618   27912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 20:10:38.461725   27912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 20:10:38.474727   27912 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 20:10:38.588798   27912 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 20:10:38.745587   27912 docker.go:233] disabling docker service ...
	I1204 20:10:38.745653   27912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 20:10:38.759235   27912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 20:10:38.771608   27912 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 20:10:38.877832   27912 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 20:10:38.982502   27912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 20:10:38.995491   27912 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 20:10:39.012043   27912 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 20:10:39.012100   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:10:39.021299   27912 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 20:10:39.021358   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:10:39.030541   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:10:39.039631   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:10:39.048551   27912 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 20:10:39.058773   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:10:39.068061   27912 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:10:39.083733   27912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:10:39.092600   27912 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 20:10:39.101297   27912 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 20:10:39.101340   27912 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 20:10:39.113156   27912 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 20:10:39.122303   27912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 20:10:39.227598   27912 ssh_runner.go:195] Run: sudo systemctl restart crio
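	The run above switches the node's runtime to CRI-O: cri-dockerd and docker are stopped and masked, crictl is pointed at the CRI-O socket, and /etc/crio/crio.conf.d/02-crio.conf is patched for the pause image and the cgroupfs driver before crio is restarted. A minimal hand-run sketch of the same reconfiguration, with the socket path, pause image, and config file taken from the commands logged above (applicability outside this VM image is an assumption):

	    # point crictl at the CRI-O socket, as written to /etc/crictl.yaml above
	    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	    # pause image and cgroup driver, as patched in 02-crio.conf above
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo systemctl daemon-reload && sudo systemctl restart crio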
	I1204 20:10:39.312250   27912 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 20:10:39.312323   27912 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 20:10:39.316600   27912 start.go:563] Will wait 60s for crictl version
	I1204 20:10:39.316650   27912 ssh_runner.go:195] Run: which crictl
	I1204 20:10:39.320258   27912 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 20:10:39.357732   27912 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 20:10:39.357795   27912 ssh_runner.go:195] Run: crio --version
	I1204 20:10:39.390225   27912 ssh_runner.go:195] Run: crio --version
	I1204 20:10:39.419008   27912 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 20:10:39.420400   27912 out.go:177]   - env NO_PROXY=192.168.39.183
	I1204 20:10:39.421790   27912 out.go:177]   - env NO_PROXY=192.168.39.183,192.168.39.216
	I1204 20:10:39.423169   27912 main.go:141] libmachine: (ha-739930-m03) Calling .GetIP
	I1204 20:10:39.425979   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:39.426437   27912 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:10:39.426466   27912 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:10:39.426672   27912 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1204 20:10:39.431086   27912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 20:10:39.443488   27912 mustload.go:65] Loading cluster: ha-739930
	I1204 20:10:39.443719   27912 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:10:39.443987   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:10:39.444059   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:10:39.459062   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36859
	I1204 20:10:39.459454   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:10:39.459962   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:10:39.459982   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:10:39.460287   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:10:39.460468   27912 main.go:141] libmachine: (ha-739930) Calling .GetState
	I1204 20:10:39.462100   27912 host.go:66] Checking if "ha-739930" exists ...
	I1204 20:10:39.462434   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:10:39.462472   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:10:39.476580   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34581
	I1204 20:10:39.476947   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:10:39.477280   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:10:39.477302   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:10:39.477596   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:10:39.477759   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:10:39.477901   27912 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930 for IP: 192.168.39.176
	I1204 20:10:39.477913   27912 certs.go:194] generating shared ca certs ...
	I1204 20:10:39.477926   27912 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:10:39.478032   27912 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 20:10:39.478067   27912 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 20:10:39.478076   27912 certs.go:256] generating profile certs ...
	I1204 20:10:39.478140   27912 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.key
	I1204 20:10:39.478162   27912 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.58072db8
	I1204 20:10:39.478183   27912 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.58072db8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.183 192.168.39.216 192.168.39.176 192.168.39.254]
	I1204 20:10:39.647686   27912 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.58072db8 ...
	I1204 20:10:39.647712   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.58072db8: {Name:mka45902bb26beb0e72f217dc87741ab3309d928 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:10:39.647887   27912 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.58072db8 ...
	I1204 20:10:39.647910   27912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.58072db8: {Name:mk0280d80935ba52cb98acc5d6236d25a3a3095d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:10:39.648008   27912 certs.go:381] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.58072db8 -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt
	I1204 20:10:39.648187   27912 certs.go:385] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.58072db8 -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key
	I1204 20:10:39.648361   27912 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key
	I1204 20:10:39.648383   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1204 20:10:39.648403   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1204 20:10:39.648422   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1204 20:10:39.648440   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1204 20:10:39.648458   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1204 20:10:39.648475   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1204 20:10:39.648493   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1204 20:10:39.663476   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1204 20:10:39.663545   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 20:10:39.663584   27912 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 20:10:39.663595   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 20:10:39.663616   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 20:10:39.663649   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 20:10:39.663681   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 20:10:39.663737   27912 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 20:10:39.663769   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:10:39.663786   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem -> /usr/share/ca-certificates/17743.pem
	I1204 20:10:39.663805   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> /usr/share/ca-certificates/177432.pem
	I1204 20:10:39.663843   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:10:39.666431   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:10:39.666764   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:10:39.666781   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:10:39.666946   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:10:39.667122   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:10:39.667283   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:10:39.667442   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:10:39.739814   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1204 20:10:39.744522   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1204 20:10:39.755922   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1204 20:10:39.759927   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1204 20:10:39.770702   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1204 20:10:39.775183   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1204 20:10:39.787784   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1204 20:10:39.792674   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1204 20:10:39.805368   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1204 20:10:39.809503   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1204 20:10:39.828088   27912 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1204 20:10:39.832824   27912 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1204 20:10:39.844859   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 20:10:39.869334   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 20:10:39.893785   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 20:10:39.916818   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 20:10:39.939176   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1204 20:10:39.961163   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 20:10:39.983006   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 20:10:40.005681   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1204 20:10:40.028546   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 20:10:40.051809   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 20:10:40.074413   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 20:10:40.097808   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1204 20:10:40.113924   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1204 20:10:40.131147   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1204 20:10:40.149216   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1204 20:10:40.166655   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1204 20:10:40.182489   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1204 20:10:40.200001   27912 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1204 20:10:40.221223   27912 ssh_runner.go:195] Run: openssl version
	I1204 20:10:40.226405   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 20:10:40.235863   27912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:10:40.239603   27912 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:10:40.239672   27912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:10:40.245186   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 20:10:40.256188   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 20:10:40.266724   27912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 20:10:40.271086   27912 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 20:10:40.271119   27912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 20:10:40.276304   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 20:10:40.286222   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 20:10:40.297060   27912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 20:10:40.301192   27912 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 20:10:40.301236   27912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 20:10:40.307282   27912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
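	The openssl/ln steps above install each CA under /etc/ssl/certs using its OpenSSL subject-hash name (e.g. b5213941.0 for minikubeCA.pem), which is how OpenSSL-based clients locate trust anchors. A small sketch of that pattern for the minikubeCA bundle seen in the log (names copied from the commands above; this is illustrative, not the exact runner invocation):

	    # compute the subject hash OpenSSL uses for lookup, then link the cert under that name
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"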
	I1204 20:10:40.317487   27912 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 20:10:40.320982   27912 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1204 20:10:40.321045   27912 kubeadm.go:934] updating node {m03 192.168.39.176 8443 v1.31.2 crio true true} ...
	I1204 20:10:40.321144   27912 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-739930-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.176
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 20:10:40.321175   27912 kube-vip.go:115] generating kube-vip config ...
	I1204 20:10:40.321208   27912 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1204 20:10:40.335360   27912 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1204 20:10:40.335431   27912 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1204 20:10:40.335468   27912 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 20:10:40.344356   27912 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1204 20:10:40.344387   27912 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1204 20:10:40.352481   27912 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1204 20:10:40.352490   27912 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1204 20:10:40.352500   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1204 20:10:40.352520   27912 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1204 20:10:40.352529   27912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 20:10:40.352538   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1204 20:10:40.352555   27912 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1204 20:10:40.352614   27912 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1204 20:10:40.357211   27912 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1204 20:10:40.357232   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1204 20:10:40.373861   27912 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1204 20:10:40.373888   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1204 20:10:40.393917   27912 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1204 20:10:40.394019   27912 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1204 20:10:40.435438   27912 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1204 20:10:40.435480   27912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1204 20:10:41.204864   27912 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1204 20:10:41.214084   27912 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1204 20:10:41.230130   27912 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 20:10:41.245590   27912 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1204 20:10:41.261184   27912 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1204 20:10:41.264917   27912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 20:10:41.276834   27912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 20:10:41.407860   27912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 20:10:41.425834   27912 host.go:66] Checking if "ha-739930" exists ...
	I1204 20:10:41.426358   27912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:10:41.426432   27912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:10:41.444259   27912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39271
	I1204 20:10:41.444841   27912 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:10:41.445793   27912 main.go:141] libmachine: Using API Version  1
	I1204 20:10:41.445819   27912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:10:41.446152   27912 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:10:41.446372   27912 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:10:41.446554   27912 start.go:317] joinCluster: &{Name:ha-739930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I1204 20:10:41.446705   27912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1204 20:10:41.446730   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:10:41.449938   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:10:41.450354   27912 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:10:41.450382   27912 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:10:41.450525   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:10:41.450704   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:10:41.450893   27912 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:10:41.451051   27912 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:10:41.603198   27912 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 20:10:41.603245   27912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rsc6s7.pvvve9xxbfoucm3c --discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-739930-m03 --control-plane --apiserver-advertise-address=192.168.39.176 --apiserver-bind-port=8443"
	I1204 20:11:02.285051   27912 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rsc6s7.pvvve9xxbfoucm3c --discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-739930-m03 --control-plane --apiserver-advertise-address=192.168.39.176 --apiserver-bind-port=8443": (20.681780468s)
	I1204 20:11:02.285099   27912 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
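	The two commands above are the core of adding m03 as an additional control-plane member: a join command is minted on the primary with a non-expiring token, executed on the new node against the VIP endpoint, and kubelet is then enabled and started. A condensed sketch of that flow, with the token and discovery hash deliberately elided and all flags copied from the logged invocation:

	    # on the primary node: print a join command with a non-expiring token
	    kubeadm token create --print-join-command --ttl=0
	    # on the joining node: join as an additional control-plane member via the shared VIP
	    sudo kubeadm join control-plane.minikube.internal:8443 \
	      --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
	      --control-plane --apiserver-advertise-address=192.168.39.176 --apiserver-bind-port=8443 \
	      --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-739930-m03 --ignore-preflight-errors=all
	    sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet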
	I1204 20:11:02.929343   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-739930-m03 minikube.k8s.io/updated_at=2024_12_04T20_11_02_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59 minikube.k8s.io/name=ha-739930 minikube.k8s.io/primary=false
	I1204 20:11:03.053541   27912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-739930-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1204 20:11:03.177213   27912 start.go:319] duration metric: took 21.7306554s to joinCluster
	I1204 20:11:03.177299   27912 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 20:11:03.177647   27912 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:11:03.178583   27912 out.go:177] * Verifying Kubernetes components...
	I1204 20:11:03.179869   27912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 20:11:03.436285   27912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 20:11:03.491544   27912 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 20:11:03.491892   27912 kapi.go:59] client config for ha-739930: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.crt", KeyFile:"/home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.key", CAFile:"/home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1204 20:11:03.491978   27912 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.183:8443
	I1204 20:11:03.492270   27912 node_ready.go:35] waiting up to 6m0s for node "ha-739930-m03" to be "Ready" ...
	I1204 20:11:03.492369   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:03.492380   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:03.492391   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:03.492400   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:03.496740   27912 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 20:11:03.992695   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:03.992717   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:03.992725   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:03.992729   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:03.996010   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:04.493230   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:04.493255   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:04.493265   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:04.493272   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:04.496716   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:04.992539   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:04.992561   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:04.992571   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:04.992577   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:04.995936   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:05.493273   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:05.493300   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:05.493311   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:05.493317   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:05.497413   27912 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 20:11:05.497897   27912 node_ready.go:53] node "ha-739930-m03" has status "Ready":"False"
	I1204 20:11:05.993362   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:05.993385   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:05.993392   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:05.993397   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:05.996675   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:06.492587   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:06.492610   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:06.492620   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:06.492627   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:06.495773   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:06.993310   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:06.993331   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:06.993339   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:06.993343   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:06.996864   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:07.492704   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:07.492741   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:07.492750   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:07.492754   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:07.496418   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:07.993375   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:07.993397   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:07.993404   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:07.993414   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:07.996601   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:07.997248   27912 node_ready.go:53] node "ha-739930-m03" has status "Ready":"False"
	I1204 20:11:08.492707   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:08.492739   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:08.492752   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:08.492757   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:08.498736   27912 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 20:11:08.992522   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:08.992546   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:08.992554   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:08.992559   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:08.996681   27912 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 20:11:09.492442   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:09.492462   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:09.492470   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:09.492475   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:09.496143   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:09.992900   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:09.992932   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:09.992939   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:09.992944   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:09.996453   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:10.492481   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:10.492499   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:10.492507   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:10.492513   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:10.496234   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:10.497174   27912 node_ready.go:53] node "ha-739930-m03" has status "Ready":"False"
	I1204 20:11:10.992502   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:10.992525   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:10.992532   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:10.992553   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:10.995639   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:11.493014   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:11.493034   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:11.493042   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:11.493045   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:11.496066   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:11.992460   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:11.992481   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:11.992488   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:11.992492   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:11.995782   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:12.492536   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:12.492559   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:12.492567   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:12.492575   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:12.496512   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:12.993486   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:12.993507   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:12.993515   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:12.993521   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:12.996929   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:12.997503   27912 node_ready.go:53] node "ha-739930-m03" has status "Ready":"False"
	I1204 20:11:13.492705   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:13.492728   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:13.492735   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:13.492739   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:13.495958   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:13.993195   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:13.993235   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:13.993243   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:13.993248   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:13.996458   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:14.492667   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:14.492687   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:14.492695   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:14.492700   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:14.496760   27912 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 20:11:14.992634   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:14.992657   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:14.992665   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:14.992668   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:14.996174   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:15.492623   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:15.492645   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:15.492651   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:15.492656   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:15.496189   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:15.496993   27912 node_ready.go:53] node "ha-739930-m03" has status "Ready":"False"
	I1204 20:11:15.993412   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:15.993432   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:15.993438   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:15.993442   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:15.996343   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:16.492477   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:16.492500   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:16.492508   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:16.492512   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:16.495796   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:16.993504   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:16.993533   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:16.993545   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:16.993552   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:16.996589   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:17.492614   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:17.492637   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:17.492649   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:17.492654   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:17.496032   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:17.992928   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:17.992951   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:17.992958   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:17.992961   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:17.996749   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:17.997385   27912 node_ready.go:53] node "ha-739930-m03" has status "Ready":"False"
	I1204 20:11:18.492596   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:18.492617   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:18.492625   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:18.492629   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:18.495562   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:18.992579   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:18.992604   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:18.992612   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:18.992616   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:18.996070   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:19.493093   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:19.493113   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:19.493121   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:19.493126   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:19.496694   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:19.992762   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:19.992788   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:19.992796   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:19.992802   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:19.996757   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:19.997645   27912 node_ready.go:53] node "ha-739930-m03" has status "Ready":"False"
	I1204 20:11:20.493018   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:20.493038   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:20.493045   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:20.493049   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:20.496165   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:20.993181   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:20.993203   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:20.993211   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:20.993214   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:20.996266   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:21.493006   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:21.493035   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.493044   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.493050   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.496694   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:21.497703   27912 node_ready.go:49] node "ha-739930-m03" has status "Ready":"True"
	I1204 20:11:21.497723   27912 node_ready.go:38] duration metric: took 18.005431822s for node "ha-739930-m03" to be "Ready" ...
	I1204 20:11:21.497731   27912 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 20:11:21.497795   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1204 20:11:21.497804   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.497811   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.497815   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.504465   27912 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1204 20:11:21.510955   27912 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7kbgr" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:21.511029   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-7kbgr
	I1204 20:11:21.511038   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.511050   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.511058   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.514034   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:21.514600   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:11:21.514614   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.514622   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.514627   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.517241   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:21.517672   27912 pod_ready.go:93] pod "coredns-7c65d6cfc9-7kbgr" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:21.517688   27912 pod_ready.go:82] duration metric: took 6.709809ms for pod "coredns-7c65d6cfc9-7kbgr" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:21.517707   27912 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8kztf" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:21.517765   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-8kztf
	I1204 20:11:21.517772   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.517781   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.517791   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.520563   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:21.521278   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:11:21.521296   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.521307   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.521313   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.523869   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:21.524405   27912 pod_ready.go:93] pod "coredns-7c65d6cfc9-8kztf" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:21.524426   27912 pod_ready.go:82] duration metric: took 6.708809ms for pod "coredns-7c65d6cfc9-8kztf" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:21.524435   27912 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:21.524489   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-739930
	I1204 20:11:21.524498   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.524504   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.524510   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.526682   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:21.527365   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:11:21.527393   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.527401   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.527410   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.530023   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:21.530721   27912 pod_ready.go:93] pod "etcd-ha-739930" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:21.530744   27912 pod_ready.go:82] duration metric: took 6.30261ms for pod "etcd-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:21.530758   27912 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:21.530832   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-739930-m02
	I1204 20:11:21.530844   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.530856   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.530866   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.533485   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:21.534074   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:11:21.534089   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.534098   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.534104   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.536315   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:21.536771   27912 pod_ready.go:93] pod "etcd-ha-739930-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:21.536789   27912 pod_ready.go:82] duration metric: took 6.023339ms for pod "etcd-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:21.536798   27912 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-739930-m03" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:21.693086   27912 request.go:632] Waited for 156.229013ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-739930-m03
	I1204 20:11:21.693178   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-739930-m03
	I1204 20:11:21.693187   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.693199   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.693211   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.696805   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:21.893066   27912 request.go:632] Waited for 195.292666ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:21.893122   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:21.893140   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:21.893148   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:21.893151   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:21.896289   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:21.896776   27912 pod_ready.go:93] pod "etcd-ha-739930-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:21.896798   27912 pod_ready.go:82] duration metric: took 359.993172ms for pod "etcd-ha-739930-m03" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:21.896822   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:22.094080   27912 request.go:632] Waited for 197.155628ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-739930
	I1204 20:11:22.094159   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-739930
	I1204 20:11:22.094178   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:22.094195   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:22.094201   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:22.097388   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:22.293809   27912 request.go:632] Waited for 194.988533ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:11:22.293864   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:11:22.293871   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:22.293881   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:22.293886   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:22.297036   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:22.297688   27912 pod_ready.go:93] pod "kube-apiserver-ha-739930" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:22.297708   27912 pod_ready.go:82] duration metric: took 400.873563ms for pod "kube-apiserver-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:22.297721   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:22.493772   27912 request.go:632] Waited for 195.970884ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-739930-m02
	I1204 20:11:22.493834   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-739930-m02
	I1204 20:11:22.493840   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:22.493847   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:22.493850   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:22.497525   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:22.693745   27912 request.go:632] Waited for 195.318737ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:11:22.693830   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:11:22.693837   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:22.693844   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:22.693849   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:22.697438   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:22.697941   27912 pod_ready.go:93] pod "kube-apiserver-ha-739930-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:22.697959   27912 pod_ready.go:82] duration metric: took 400.231011ms for pod "kube-apiserver-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:22.697969   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-739930-m03" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:22.894031   27912 request.go:632] Waited for 195.997225ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-739930-m03
	I1204 20:11:22.894100   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-739930-m03
	I1204 20:11:22.894105   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:22.894113   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:22.894119   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:22.896928   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:23.093056   27912 request.go:632] Waited for 195.290507ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:23.093109   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:23.093116   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:23.093125   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:23.093131   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:23.096071   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:23.096675   27912 pod_ready.go:93] pod "kube-apiserver-ha-739930-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:23.096695   27912 pod_ready.go:82] duration metric: took 398.72057ms for pod "kube-apiserver-ha-739930-m03" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:23.096706   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:23.293761   27912 request.go:632] Waited for 196.979038ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-739930
	I1204 20:11:23.293857   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-739930
	I1204 20:11:23.293863   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:23.293870   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:23.293877   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:23.297313   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:23.493595   27912 request.go:632] Waited for 195.358893ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:11:23.493645   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:11:23.493652   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:23.493662   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:23.493668   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:23.496860   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:23.497431   27912 pod_ready.go:93] pod "kube-controller-manager-ha-739930" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:23.497447   27912 pod_ready.go:82] duration metric: took 400.733171ms for pod "kube-controller-manager-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:23.497457   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:23.693609   27912 request.go:632] Waited for 196.087422ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-739930-m02
	I1204 20:11:23.693665   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-739930-m02
	I1204 20:11:23.693670   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:23.693677   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:23.693681   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:23.697816   27912 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 20:11:23.893073   27912 request.go:632] Waited for 194.284611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:11:23.893134   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:11:23.893157   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:23.893173   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:23.893179   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:23.896273   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:23.896905   27912 pod_ready.go:93] pod "kube-controller-manager-ha-739930-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:23.896921   27912 pod_ready.go:82] duration metric: took 399.455915ms for pod "kube-controller-manager-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:23.896931   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-739930-m03" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:24.094047   27912 request.go:632] Waited for 197.05537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-739930-m03
	I1204 20:11:24.094114   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-739930-m03
	I1204 20:11:24.094120   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:24.094128   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:24.094138   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:24.097347   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:24.293333   27912 request.go:632] Waited for 195.221509ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:24.293408   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:24.293418   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:24.293429   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:24.293439   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:24.296348   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:24.296803   27912 pod_ready.go:93] pod "kube-controller-manager-ha-739930-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:24.296819   27912 pod_ready.go:82] duration metric: took 399.882093ms for pod "kube-controller-manager-ha-739930-m03" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:24.296828   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gtw7d" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:24.493904   27912 request.go:632] Waited for 197.016726ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gtw7d
	I1204 20:11:24.493955   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gtw7d
	I1204 20:11:24.493960   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:24.493967   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:24.493971   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:24.497694   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:24.693075   27912 request.go:632] Waited for 194.571912ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:11:24.693130   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:11:24.693135   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:24.693142   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:24.693146   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:24.696302   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:24.696899   27912 pod_ready.go:93] pod "kube-proxy-gtw7d" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:24.696919   27912 pod_ready.go:82] duration metric: took 400.084608ms for pod "kube-proxy-gtw7d" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:24.696928   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-r4895" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:24.893931   27912 request.go:632] Waited for 196.931451ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r4895
	I1204 20:11:24.894022   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r4895
	I1204 20:11:24.894035   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:24.894043   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:24.894046   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:24.897046   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:25.093243   27912 request.go:632] Waited for 195.305694ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:25.093305   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:25.093310   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:25.093318   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:25.093321   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:25.096337   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:25.096835   27912 pod_ready.go:93] pod "kube-proxy-r4895" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:25.096854   27912 pod_ready.go:82] duration metric: took 399.920087ms for pod "kube-proxy-r4895" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:25.096864   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tlhfv" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:25.294085   27912 request.go:632] Waited for 197.134763ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tlhfv
	I1204 20:11:25.294155   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tlhfv
	I1204 20:11:25.294164   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:25.294174   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:25.294181   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:25.297688   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:25.493811   27912 request.go:632] Waited for 195.37479ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:11:25.493896   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:11:25.493902   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:25.493910   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:25.493914   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:25.497035   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:25.497776   27912 pod_ready.go:93] pod "kube-proxy-tlhfv" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:25.497796   27912 pod_ready.go:82] duration metric: took 400.925065ms for pod "kube-proxy-tlhfv" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:25.497810   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:25.693786   27912 request.go:632] Waited for 195.910848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-739930
	I1204 20:11:25.693855   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-739930
	I1204 20:11:25.693860   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:25.693866   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:25.693870   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:25.697283   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:25.893336   27912 request.go:632] Waited for 195.363737ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:11:25.893392   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930
	I1204 20:11:25.893398   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:25.893407   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:25.893417   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:25.896883   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:25.897527   27912 pod_ready.go:93] pod "kube-scheduler-ha-739930" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:25.897547   27912 pod_ready.go:82] duration metric: took 399.728095ms for pod "kube-scheduler-ha-739930" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:25.897560   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:26.093716   27912 request.go:632] Waited for 196.07568ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-739930-m02
	I1204 20:11:26.093770   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-739930-m02
	I1204 20:11:26.093775   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:26.093783   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:26.093787   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:26.097490   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:26.293677   27912 request.go:632] Waited for 195.380903ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:11:26.293724   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m02
	I1204 20:11:26.293729   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:26.293736   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:26.293740   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:26.296374   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:26.297059   27912 pod_ready.go:93] pod "kube-scheduler-ha-739930-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:26.297083   27912 pod_ready.go:82] duration metric: took 399.512498ms for pod "kube-scheduler-ha-739930-m02" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:26.297096   27912 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-739930-m03" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:26.493619   27912 request.go:632] Waited for 196.449368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-739930-m03
	I1204 20:11:26.493679   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-739930-m03
	I1204 20:11:26.493687   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:26.493698   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:26.493708   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:26.496613   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:26.693570   27912 request.go:632] Waited for 196.314375ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:26.693652   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-739930-m03
	I1204 20:11:26.693664   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:26.693674   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:26.693683   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:26.696474   27912 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 20:11:26.697001   27912 pod_ready.go:93] pod "kube-scheduler-ha-739930-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 20:11:26.697020   27912 pod_ready.go:82] duration metric: took 399.916866ms for pod "kube-scheduler-ha-739930-m03" in "kube-system" namespace to be "Ready" ...
	I1204 20:11:26.697032   27912 pod_ready.go:39] duration metric: took 5.199290508s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 20:11:26.697048   27912 api_server.go:52] waiting for apiserver process to appear ...
	I1204 20:11:26.697102   27912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 20:11:26.712884   27912 api_server.go:72] duration metric: took 23.535549754s to wait for apiserver process to appear ...
	I1204 20:11:26.712900   27912 api_server.go:88] waiting for apiserver healthz status ...
	I1204 20:11:26.712916   27912 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I1204 20:11:26.717076   27912 api_server.go:279] https://192.168.39.183:8443/healthz returned 200:
	ok
	I1204 20:11:26.717125   27912 round_trippers.go:463] GET https://192.168.39.183:8443/version
	I1204 20:11:26.717134   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:26.717141   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:26.717145   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:26.718054   27912 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1204 20:11:26.718141   27912 api_server.go:141] control plane version: v1.31.2
	I1204 20:11:26.718158   27912 api_server.go:131] duration metric: took 5.25178ms to wait for apiserver health ...
	I1204 20:11:26.718165   27912 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 20:11:26.893379   27912 request.go:632] Waited for 175.13636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1204 20:11:26.893453   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1204 20:11:26.893459   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:26.893466   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:26.893472   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:26.899023   27912 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 20:11:26.905500   27912 system_pods.go:59] 24 kube-system pods found
	I1204 20:11:26.905525   27912 system_pods.go:61] "coredns-7c65d6cfc9-7kbgr" [662019c2-29e8-4437-8b14-f9fbf1268d03] Running
	I1204 20:11:26.905530   27912 system_pods.go:61] "coredns-7c65d6cfc9-8kztf" [40363110-9dbd-47ae-8aec-70630543d005] Running
	I1204 20:11:26.905534   27912 system_pods.go:61] "etcd-ha-739930" [35305e9d-e464-498a-b2a7-6008dcaaf04c] Running
	I1204 20:11:26.905538   27912 system_pods.go:61] "etcd-ha-739930-m02" [b870f77d-f65a-4d00-b8da-27bf2f696d35] Running
	I1204 20:11:26.905541   27912 system_pods.go:61] "etcd-ha-739930-m03" [343495fb-dbd2-4eab-a236-40e2be521a17] Running
	I1204 20:11:26.905545   27912 system_pods.go:61] "kindnet-8wsgw" [d8bc54cd-d100-43fa-bda8-28ee9b58b947] Running
	I1204 20:11:26.905548   27912 system_pods.go:61] "kindnet-d2rvr" [7ab1c96e-13c6-40c3-affc-4a306e695a9b] Running
	I1204 20:11:26.905550   27912 system_pods.go:61] "kindnet-z6v65" [233b2af5-60f4-4f70-a63f-f7238cfbc55c] Running
	I1204 20:11:26.905554   27912 system_pods.go:61] "kube-apiserver-ha-739930" [d1943e08-b292-4551-bcc7-a14adc4ec336] Running
	I1204 20:11:26.905558   27912 system_pods.go:61] "kube-apiserver-ha-739930-m02" [b05a68fa-e419-43b6-ae14-08dd1635b446] Running
	I1204 20:11:26.905564   27912 system_pods.go:61] "kube-apiserver-ha-739930-m03" [eb40f9aa-f4a4-4222-b470-615e8f746fd2] Running
	I1204 20:11:26.905569   27912 system_pods.go:61] "kube-controller-manager-ha-739930" [3db9ec12-4c55-4a78-bef1-4f4cf8f38ae0] Running
	I1204 20:11:26.905574   27912 system_pods.go:61] "kube-controller-manager-ha-739930-m02" [01426d54-9156-4288-b9ae-c639167795b4] Running
	I1204 20:11:26.905579   27912 system_pods.go:61] "kube-controller-manager-ha-739930-m03" [57d1436a-59aa-4883-b1a0-e3f823309e4e] Running
	I1204 20:11:26.905588   27912 system_pods.go:61] "kube-proxy-gtw7d" [4481a753-5064-41a6-8f2c-d4710b8ad7bb] Running
	I1204 20:11:26.905593   27912 system_pods.go:61] "kube-proxy-r4895" [565b2768-8e4b-4659-a178-a99d86163b7c] Running
	I1204 20:11:26.905602   27912 system_pods.go:61] "kube-proxy-tlhfv" [2f01e7f6-5af2-490b-8a2c-266e1701c102] Running
	I1204 20:11:26.905607   27912 system_pods.go:61] "kube-scheduler-ha-739930" [cc1e6978-7082-494a-afce-e754a35e9b76] Running
	I1204 20:11:26.905612   27912 system_pods.go:61] "kube-scheduler-ha-739930-m02" [cd7d0a65-99e9-4377-9088-f2d7d7165982] Running
	I1204 20:11:26.905619   27912 system_pods.go:61] "kube-scheduler-ha-739930-m03" [fbc3feca-5ce1-441e-b3e9-1c47930334da] Running
	I1204 20:11:26.905622   27912 system_pods.go:61] "kube-vip-ha-739930" [524e54ee-5407-44c3-a2e4-d029f7e6a003] Running
	I1204 20:11:26.905626   27912 system_pods.go:61] "kube-vip-ha-739930-m02" [77595bf0-7e49-4ead-98b0-e1cc5b8533d7] Running
	I1204 20:11:26.905630   27912 system_pods.go:61] "kube-vip-ha-739930-m03" [596bee4d-c0d5-499e-9e8f-f4b1322d83b3] Running
	I1204 20:11:26.905634   27912 system_pods.go:61] "storage-provisioner" [84dfb457-b91f-4070-aa2a-9fbe4c6dd7c8] Running
	I1204 20:11:26.905640   27912 system_pods.go:74] duration metric: took 187.469575ms to wait for pod list to return data ...
	I1204 20:11:26.905660   27912 default_sa.go:34] waiting for default service account to be created ...
	I1204 20:11:27.093927   27912 request.go:632] Waited for 188.174644ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/default/serviceaccounts
	I1204 20:11:27.093986   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/default/serviceaccounts
	I1204 20:11:27.093991   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:27.093998   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:27.094011   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:27.097761   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:27.097902   27912 default_sa.go:45] found service account: "default"
	I1204 20:11:27.097922   27912 default_sa.go:55] duration metric: took 192.253848ms for default service account to be created ...
	I1204 20:11:27.097933   27912 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 20:11:27.293645   27912 request.go:632] Waited for 195.638628ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1204 20:11:27.293720   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1204 20:11:27.293727   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:27.293736   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:27.293742   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:27.299871   27912 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1204 20:11:27.306654   27912 system_pods.go:86] 24 kube-system pods found
	I1204 20:11:27.306676   27912 system_pods.go:89] "coredns-7c65d6cfc9-7kbgr" [662019c2-29e8-4437-8b14-f9fbf1268d03] Running
	I1204 20:11:27.306682   27912 system_pods.go:89] "coredns-7c65d6cfc9-8kztf" [40363110-9dbd-47ae-8aec-70630543d005] Running
	I1204 20:11:27.306686   27912 system_pods.go:89] "etcd-ha-739930" [35305e9d-e464-498a-b2a7-6008dcaaf04c] Running
	I1204 20:11:27.306689   27912 system_pods.go:89] "etcd-ha-739930-m02" [b870f77d-f65a-4d00-b8da-27bf2f696d35] Running
	I1204 20:11:27.306692   27912 system_pods.go:89] "etcd-ha-739930-m03" [343495fb-dbd2-4eab-a236-40e2be521a17] Running
	I1204 20:11:27.306696   27912 system_pods.go:89] "kindnet-8wsgw" [d8bc54cd-d100-43fa-bda8-28ee9b58b947] Running
	I1204 20:11:27.306699   27912 system_pods.go:89] "kindnet-d2rvr" [7ab1c96e-13c6-40c3-affc-4a306e695a9b] Running
	I1204 20:11:27.306702   27912 system_pods.go:89] "kindnet-z6v65" [233b2af5-60f4-4f70-a63f-f7238cfbc55c] Running
	I1204 20:11:27.306705   27912 system_pods.go:89] "kube-apiserver-ha-739930" [d1943e08-b292-4551-bcc7-a14adc4ec336] Running
	I1204 20:11:27.306709   27912 system_pods.go:89] "kube-apiserver-ha-739930-m02" [b05a68fa-e419-43b6-ae14-08dd1635b446] Running
	I1204 20:11:27.306714   27912 system_pods.go:89] "kube-apiserver-ha-739930-m03" [eb40f9aa-f4a4-4222-b470-615e8f746fd2] Running
	I1204 20:11:27.306719   27912 system_pods.go:89] "kube-controller-manager-ha-739930" [3db9ec12-4c55-4a78-bef1-4f4cf8f38ae0] Running
	I1204 20:11:27.306724   27912 system_pods.go:89] "kube-controller-manager-ha-739930-m02" [01426d54-9156-4288-b9ae-c639167795b4] Running
	I1204 20:11:27.306733   27912 system_pods.go:89] "kube-controller-manager-ha-739930-m03" [57d1436a-59aa-4883-b1a0-e3f823309e4e] Running
	I1204 20:11:27.306742   27912 system_pods.go:89] "kube-proxy-gtw7d" [4481a753-5064-41a6-8f2c-d4710b8ad7bb] Running
	I1204 20:11:27.306748   27912 system_pods.go:89] "kube-proxy-r4895" [565b2768-8e4b-4659-a178-a99d86163b7c] Running
	I1204 20:11:27.306756   27912 system_pods.go:89] "kube-proxy-tlhfv" [2f01e7f6-5af2-490b-8a2c-266e1701c102] Running
	I1204 20:11:27.306762   27912 system_pods.go:89] "kube-scheduler-ha-739930" [cc1e6978-7082-494a-afce-e754a35e9b76] Running
	I1204 20:11:27.306770   27912 system_pods.go:89] "kube-scheduler-ha-739930-m02" [cd7d0a65-99e9-4377-9088-f2d7d7165982] Running
	I1204 20:11:27.306774   27912 system_pods.go:89] "kube-scheduler-ha-739930-m03" [fbc3feca-5ce1-441e-b3e9-1c47930334da] Running
	I1204 20:11:27.306780   27912 system_pods.go:89] "kube-vip-ha-739930" [524e54ee-5407-44c3-a2e4-d029f7e6a003] Running
	I1204 20:11:27.306784   27912 system_pods.go:89] "kube-vip-ha-739930-m02" [77595bf0-7e49-4ead-98b0-e1cc5b8533d7] Running
	I1204 20:11:27.306787   27912 system_pods.go:89] "kube-vip-ha-739930-m03" [596bee4d-c0d5-499e-9e8f-f4b1322d83b3] Running
	I1204 20:11:27.306790   27912 system_pods.go:89] "storage-provisioner" [84dfb457-b91f-4070-aa2a-9fbe4c6dd7c8] Running
	I1204 20:11:27.306796   27912 system_pods.go:126] duration metric: took 208.857473ms to wait for k8s-apps to be running ...
	I1204 20:11:27.306805   27912 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 20:11:27.306853   27912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 20:11:27.321782   27912 system_svc.go:56] duration metric: took 14.969542ms WaitForService to wait for kubelet
	I1204 20:11:27.321804   27912 kubeadm.go:582] duration metric: took 24.144472529s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 20:11:27.321820   27912 node_conditions.go:102] verifying NodePressure condition ...
	I1204 20:11:27.493192   27912 request.go:632] Waited for 171.286703ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes
	I1204 20:11:27.493250   27912 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes
	I1204 20:11:27.493255   27912 round_trippers.go:469] Request Headers:
	I1204 20:11:27.493262   27912 round_trippers.go:473]     Accept: application/json, */*
	I1204 20:11:27.493266   27912 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 20:11:27.497192   27912 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 20:11:27.498227   27912 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 20:11:27.498244   27912 node_conditions.go:123] node cpu capacity is 2
	I1204 20:11:27.498254   27912 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 20:11:27.498259   27912 node_conditions.go:123] node cpu capacity is 2
	I1204 20:11:27.498262   27912 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 20:11:27.498265   27912 node_conditions.go:123] node cpu capacity is 2
	I1204 20:11:27.498269   27912 node_conditions.go:105] duration metric: took 176.444491ms to run NodePressure ...
	I1204 20:11:27.498283   27912 start.go:241] waiting for startup goroutines ...
	I1204 20:11:27.498303   27912 start.go:255] writing updated cluster config ...
	I1204 20:11:27.498580   27912 ssh_runner.go:195] Run: rm -f paused
	I1204 20:11:27.549391   27912 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1204 20:11:27.551427   27912 out.go:177] * Done! kubectl is now configured to use "ha-739930" cluster and "default" namespace by default
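	The wait sequence recorded above (repeated GETs of /api/v1/nodes/ha-739930-m03 roughly every 500ms until the node reports "Ready":"True", then per-pod "Ready" checks) follows the usual client-go polling pattern. A minimal sketch of that pattern is shown below; the function name, interval, and kubeconfig handling are illustrative assumptions, not minikube's actual node_ready implementation.

	// Sketch: poll a node until its Ready condition is True, as the log above does.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady is a hypothetical helper (not from the minikube codebase):
	// it re-fetches the node every 500ms and returns once NodeReady is True.
	func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("node %q not Ready within %v", name, timeout)
	}

	func main() {
		// Assumes a reachable cluster configured in ~/.kube/config.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitNodeReady(context.Background(), cs, "ha-739930-m03", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("node is Ready")
	}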
	
	
	==> CRI-O <==
	Dec 04 20:15:23 ha-739930 crio[665]: time="2024-12-04 20:15:23.493666106Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343323493643030,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=17a90eb4-602c-48ad-8083-99c2be05b3a4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 20:15:23 ha-739930 crio[665]: time="2024-12-04 20:15:23.494685619Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a2a3ae75-40a0-483c-a5cf-46474932031f name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:15:23 ha-739930 crio[665]: time="2024-12-04 20:15:23.494788370Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a2a3ae75-40a0-483c-a5cf-46474932031f name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:15:23 ha-739930 crio[665]: time="2024-12-04 20:15:23.495000692Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c09d55fbc3f943c790def9073b88f01609e4300451bae039e4cd073f0da97f61,PodSandboxId:8470389e19e5b28b50b8fccf3fc3911e02d6a5d228b7739b5d74827a2cda13ad,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733343092537450258,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gg7dr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a1f1ba1f-1720-4b97-a4a1-ab2d0c4cfaa5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92f0436c068d37f00d41a848d30e7457ee048433b86098444bdaf1dac7c4ae50,PodSandboxId:fdd28652924af40713f1cc9921837027bcf2d919bc8a45a3330e7b8e261100e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733342953924941655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7kbgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 662019c2-29e8-4437-8b14-f9fbf1268d03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab16b32e60a7287ff4948151ca59846f512d2a31828295582ecaf061d7dd0cac,PodSandboxId:a639b811aff3be3e7ee462400bb28276bcdce1f970dba591ef29cb5f8ecf55a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733342953880846280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8kztf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
40363110-9dbd-47ae-8aec-70630543d005,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1496ef67bc6f05f97f8da017d26b5ef402354fd4f5cad7354f86ed14b360b13,PodSandboxId:235aa20e54db74e6eee62b6273bd65f067e9293b34b00f86bebbdf24e92c8c12,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733342953787213731,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84dfb457-b91f-4070-aa2a-9fbe4c6dd7c8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f38276fe657c7e64c36f5e7048dd53d1f38f2a70a523fca08ac6aba6639b37e7,PodSandboxId:22f273a6fc170916ed294c18ea089fc5b6007ec66b51d45c95042ab6c43d6a4b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733342941935728144,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8wsgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8bc54cd-d100-43fa-bda8-28ee9b58b947,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8643b775b5352f9000b818ffdccfc9b8d9ce8d3bebf02d3707ef0c598107b627,PodSandboxId:30611e2a6fdccf72efc978dd3ff57b8cb4927095bb0a8cf4b67cc4353243a252,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733342938
754739932,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tlhfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f01e7f6-5af2-490b-8a2c-266e1701c102,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4a22468ef5bdbd7670b4b9d102217e2f59637e4fb99fa6b968fc2f29ad8208b,PodSandboxId:5f8113a27db247d70444c34f598adb4d8920a3f17f8c7f529ee1503205295514,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173334292981
9119605,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e85517d76879ff3f468d156333aefa2d,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:325ac1400e34aa08998a037b7bad43b257bdf9daf9a87fbce57d6eef87a7bef7,PodSandboxId:a0e82c5e83a213c20a332613e67701ddb375a586927ef5e557431138c4f0f2aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733342927393948447,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b071552f9356e83d17c476e03918fe9,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fdab5e7f0c119181d690a0296a5d0d8ba1871661cadaa54b8d022c0a1b668e3,PodSandboxId:83caff9199eb85d80e88c4f8531ac1ec39b66e92e5f3b7f7cb7e960e35c4ea4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733342927337542360,Labels:map[string]string{io.kubernetes.contain
er.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25b5d213282d4e3d0b17f56770f58750,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52571ff875ebe7e2bae93811588ab15bcc178c9e1c0334570224e1b2bd359246,PodSandboxId:91df0913316d5fe6318abd1b00af1f31ce79fcbd082873c64a4aede83b9b139c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733342927317490542,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod
.name: etcd-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af968bcb5bb689c598a55bb96c345514,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2343748d9b3c27471f4dc81bc815b3b7cfa628a41f8708ffaeec870bf0c05f4,PodSandboxId:bccd9e2c068724fdade2d27ef529f8e648d95a17f366b1c7fc771540b909a24c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733342927271139282,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b85df04725e54b66c583c1e4307b02b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a2a3ae75-40a0-483c-a5cf-46474932031f name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:15:23 ha-739930 crio[665]: time="2024-12-04 20:15:23.534003587Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e07e3168-cf82-4dee-9a45-9468f882370b name=/runtime.v1.RuntimeService/Version
	Dec 04 20:15:23 ha-739930 crio[665]: time="2024-12-04 20:15:23.534125088Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e07e3168-cf82-4dee-9a45-9468f882370b name=/runtime.v1.RuntimeService/Version
	Dec 04 20:15:23 ha-739930 crio[665]: time="2024-12-04 20:15:23.535247426Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cd86c737-6976-4a2a-8384-58855ab0284e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 20:15:23 ha-739930 crio[665]: time="2024-12-04 20:15:23.535921271Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343323535868881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cd86c737-6976-4a2a-8384-58855ab0284e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 20:15:23 ha-739930 crio[665]: time="2024-12-04 20:15:23.536410746Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6c298fc8-2d03-476d-abfa-a0af230dbc15 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:15:23 ha-739930 crio[665]: time="2024-12-04 20:15:23.536573230Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6c298fc8-2d03-476d-abfa-a0af230dbc15 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:15:23 ha-739930 crio[665]: time="2024-12-04 20:15:23.536864627Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c09d55fbc3f943c790def9073b88f01609e4300451bae039e4cd073f0da97f61,PodSandboxId:8470389e19e5b28b50b8fccf3fc3911e02d6a5d228b7739b5d74827a2cda13ad,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733343092537450258,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gg7dr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a1f1ba1f-1720-4b97-a4a1-ab2d0c4cfaa5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92f0436c068d37f00d41a848d30e7457ee048433b86098444bdaf1dac7c4ae50,PodSandboxId:fdd28652924af40713f1cc9921837027bcf2d919bc8a45a3330e7b8e261100e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733342953924941655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7kbgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 662019c2-29e8-4437-8b14-f9fbf1268d03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab16b32e60a7287ff4948151ca59846f512d2a31828295582ecaf061d7dd0cac,PodSandboxId:a639b811aff3be3e7ee462400bb28276bcdce1f970dba591ef29cb5f8ecf55a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733342953880846280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8kztf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
40363110-9dbd-47ae-8aec-70630543d005,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1496ef67bc6f05f97f8da017d26b5ef402354fd4f5cad7354f86ed14b360b13,PodSandboxId:235aa20e54db74e6eee62b6273bd65f067e9293b34b00f86bebbdf24e92c8c12,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733342953787213731,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84dfb457-b91f-4070-aa2a-9fbe4c6dd7c8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f38276fe657c7e64c36f5e7048dd53d1f38f2a70a523fca08ac6aba6639b37e7,PodSandboxId:22f273a6fc170916ed294c18ea089fc5b6007ec66b51d45c95042ab6c43d6a4b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733342941935728144,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8wsgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8bc54cd-d100-43fa-bda8-28ee9b58b947,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8643b775b5352f9000b818ffdccfc9b8d9ce8d3bebf02d3707ef0c598107b627,PodSandboxId:30611e2a6fdccf72efc978dd3ff57b8cb4927095bb0a8cf4b67cc4353243a252,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733342938
754739932,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tlhfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f01e7f6-5af2-490b-8a2c-266e1701c102,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4a22468ef5bdbd7670b4b9d102217e2f59637e4fb99fa6b968fc2f29ad8208b,PodSandboxId:5f8113a27db247d70444c34f598adb4d8920a3f17f8c7f529ee1503205295514,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173334292981
9119605,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e85517d76879ff3f468d156333aefa2d,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:325ac1400e34aa08998a037b7bad43b257bdf9daf9a87fbce57d6eef87a7bef7,PodSandboxId:a0e82c5e83a213c20a332613e67701ddb375a586927ef5e557431138c4f0f2aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733342927393948447,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b071552f9356e83d17c476e03918fe9,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fdab5e7f0c119181d690a0296a5d0d8ba1871661cadaa54b8d022c0a1b668e3,PodSandboxId:83caff9199eb85d80e88c4f8531ac1ec39b66e92e5f3b7f7cb7e960e35c4ea4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733342927337542360,Labels:map[string]string{io.kubernetes.contain
er.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25b5d213282d4e3d0b17f56770f58750,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52571ff875ebe7e2bae93811588ab15bcc178c9e1c0334570224e1b2bd359246,PodSandboxId:91df0913316d5fe6318abd1b00af1f31ce79fcbd082873c64a4aede83b9b139c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733342927317490542,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod
.name: etcd-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af968bcb5bb689c598a55bb96c345514,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2343748d9b3c27471f4dc81bc815b3b7cfa628a41f8708ffaeec870bf0c05f4,PodSandboxId:bccd9e2c068724fdade2d27ef529f8e648d95a17f366b1c7fc771540b909a24c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733342927271139282,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b85df04725e54b66c583c1e4307b02b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6c298fc8-2d03-476d-abfa-a0af230dbc15 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:15:23 ha-739930 crio[665]: time="2024-12-04 20:15:23.576359180Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0bce63f6-e095-4f86-b09b-29297489ab41 name=/runtime.v1.RuntimeService/Version
	Dec 04 20:15:23 ha-739930 crio[665]: time="2024-12-04 20:15:23.576456085Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0bce63f6-e095-4f86-b09b-29297489ab41 name=/runtime.v1.RuntimeService/Version
	Dec 04 20:15:23 ha-739930 crio[665]: time="2024-12-04 20:15:23.578088487Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a15b1d3a-424f-4343-99a8-64f406957dee name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 20:15:23 ha-739930 crio[665]: time="2024-12-04 20:15:23.578855969Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343323578817113,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a15b1d3a-424f-4343-99a8-64f406957dee name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 20:15:23 ha-739930 crio[665]: time="2024-12-04 20:15:23.580608519Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ed4570d4-0db8-4ecf-aa3f-5823e1d3b1f1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:15:23 ha-739930 crio[665]: time="2024-12-04 20:15:23.580700978Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ed4570d4-0db8-4ecf-aa3f-5823e1d3b1f1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:15:23 ha-739930 crio[665]: time="2024-12-04 20:15:23.581077558Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c09d55fbc3f943c790def9073b88f01609e4300451bae039e4cd073f0da97f61,PodSandboxId:8470389e19e5b28b50b8fccf3fc3911e02d6a5d228b7739b5d74827a2cda13ad,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733343092537450258,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gg7dr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a1f1ba1f-1720-4b97-a4a1-ab2d0c4cfaa5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92f0436c068d37f00d41a848d30e7457ee048433b86098444bdaf1dac7c4ae50,PodSandboxId:fdd28652924af40713f1cc9921837027bcf2d919bc8a45a3330e7b8e261100e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733342953924941655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7kbgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 662019c2-29e8-4437-8b14-f9fbf1268d03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab16b32e60a7287ff4948151ca59846f512d2a31828295582ecaf061d7dd0cac,PodSandboxId:a639b811aff3be3e7ee462400bb28276bcdce1f970dba591ef29cb5f8ecf55a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733342953880846280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8kztf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
40363110-9dbd-47ae-8aec-70630543d005,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1496ef67bc6f05f97f8da017d26b5ef402354fd4f5cad7354f86ed14b360b13,PodSandboxId:235aa20e54db74e6eee62b6273bd65f067e9293b34b00f86bebbdf24e92c8c12,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733342953787213731,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84dfb457-b91f-4070-aa2a-9fbe4c6dd7c8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f38276fe657c7e64c36f5e7048dd53d1f38f2a70a523fca08ac6aba6639b37e7,PodSandboxId:22f273a6fc170916ed294c18ea089fc5b6007ec66b51d45c95042ab6c43d6a4b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733342941935728144,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8wsgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8bc54cd-d100-43fa-bda8-28ee9b58b947,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8643b775b5352f9000b818ffdccfc9b8d9ce8d3bebf02d3707ef0c598107b627,PodSandboxId:30611e2a6fdccf72efc978dd3ff57b8cb4927095bb0a8cf4b67cc4353243a252,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733342938
754739932,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tlhfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f01e7f6-5af2-490b-8a2c-266e1701c102,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4a22468ef5bdbd7670b4b9d102217e2f59637e4fb99fa6b968fc2f29ad8208b,PodSandboxId:5f8113a27db247d70444c34f598adb4d8920a3f17f8c7f529ee1503205295514,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173334292981
9119605,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e85517d76879ff3f468d156333aefa2d,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:325ac1400e34aa08998a037b7bad43b257bdf9daf9a87fbce57d6eef87a7bef7,PodSandboxId:a0e82c5e83a213c20a332613e67701ddb375a586927ef5e557431138c4f0f2aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733342927393948447,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b071552f9356e83d17c476e03918fe9,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fdab5e7f0c119181d690a0296a5d0d8ba1871661cadaa54b8d022c0a1b668e3,PodSandboxId:83caff9199eb85d80e88c4f8531ac1ec39b66e92e5f3b7f7cb7e960e35c4ea4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733342927337542360,Labels:map[string]string{io.kubernetes.contain
er.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25b5d213282d4e3d0b17f56770f58750,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52571ff875ebe7e2bae93811588ab15bcc178c9e1c0334570224e1b2bd359246,PodSandboxId:91df0913316d5fe6318abd1b00af1f31ce79fcbd082873c64a4aede83b9b139c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733342927317490542,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod
.name: etcd-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af968bcb5bb689c598a55bb96c345514,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2343748d9b3c27471f4dc81bc815b3b7cfa628a41f8708ffaeec870bf0c05f4,PodSandboxId:bccd9e2c068724fdade2d27ef529f8e648d95a17f366b1c7fc771540b909a24c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733342927271139282,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b85df04725e54b66c583c1e4307b02b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ed4570d4-0db8-4ecf-aa3f-5823e1d3b1f1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:15:23 ha-739930 crio[665]: time="2024-12-04 20:15:23.633459336Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e3aacb9c-3af3-4781-9e4d-63c12fb0f9b8 name=/runtime.v1.RuntimeService/Version
	Dec 04 20:15:23 ha-739930 crio[665]: time="2024-12-04 20:15:23.633580008Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e3aacb9c-3af3-4781-9e4d-63c12fb0f9b8 name=/runtime.v1.RuntimeService/Version
	Dec 04 20:15:23 ha-739930 crio[665]: time="2024-12-04 20:15:23.635331016Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a3db7eb0-aefc-4ab0-a816-307586a050b9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 20:15:23 ha-739930 crio[665]: time="2024-12-04 20:15:23.635952135Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343323635915724,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a3db7eb0-aefc-4ab0-a816-307586a050b9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 20:15:23 ha-739930 crio[665]: time="2024-12-04 20:15:23.636519623Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e1fb1960-5e79-4196-a205-7d6cccb43a5f name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:15:23 ha-739930 crio[665]: time="2024-12-04 20:15:23.636681251Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e1fb1960-5e79-4196-a205-7d6cccb43a5f name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:15:23 ha-739930 crio[665]: time="2024-12-04 20:15:23.637355772Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c09d55fbc3f943c790def9073b88f01609e4300451bae039e4cd073f0da97f61,PodSandboxId:8470389e19e5b28b50b8fccf3fc3911e02d6a5d228b7739b5d74827a2cda13ad,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733343092537450258,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gg7dr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a1f1ba1f-1720-4b97-a4a1-ab2d0c4cfaa5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92f0436c068d37f00d41a848d30e7457ee048433b86098444bdaf1dac7c4ae50,PodSandboxId:fdd28652924af40713f1cc9921837027bcf2d919bc8a45a3330e7b8e261100e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733342953924941655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7kbgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 662019c2-29e8-4437-8b14-f9fbf1268d03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab16b32e60a7287ff4948151ca59846f512d2a31828295582ecaf061d7dd0cac,PodSandboxId:a639b811aff3be3e7ee462400bb28276bcdce1f970dba591ef29cb5f8ecf55a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733342953880846280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8kztf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
40363110-9dbd-47ae-8aec-70630543d005,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1496ef67bc6f05f97f8da017d26b5ef402354fd4f5cad7354f86ed14b360b13,PodSandboxId:235aa20e54db74e6eee62b6273bd65f067e9293b34b00f86bebbdf24e92c8c12,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733342953787213731,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84dfb457-b91f-4070-aa2a-9fbe4c6dd7c8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f38276fe657c7e64c36f5e7048dd53d1f38f2a70a523fca08ac6aba6639b37e7,PodSandboxId:22f273a6fc170916ed294c18ea089fc5b6007ec66b51d45c95042ab6c43d6a4b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733342941935728144,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8wsgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8bc54cd-d100-43fa-bda8-28ee9b58b947,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8643b775b5352f9000b818ffdccfc9b8d9ce8d3bebf02d3707ef0c598107b627,PodSandboxId:30611e2a6fdccf72efc978dd3ff57b8cb4927095bb0a8cf4b67cc4353243a252,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733342938
754739932,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tlhfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f01e7f6-5af2-490b-8a2c-266e1701c102,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4a22468ef5bdbd7670b4b9d102217e2f59637e4fb99fa6b968fc2f29ad8208b,PodSandboxId:5f8113a27db247d70444c34f598adb4d8920a3f17f8c7f529ee1503205295514,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173334292981
9119605,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e85517d76879ff3f468d156333aefa2d,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:325ac1400e34aa08998a037b7bad43b257bdf9daf9a87fbce57d6eef87a7bef7,PodSandboxId:a0e82c5e83a213c20a332613e67701ddb375a586927ef5e557431138c4f0f2aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733342927393948447,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b071552f9356e83d17c476e03918fe9,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fdab5e7f0c119181d690a0296a5d0d8ba1871661cadaa54b8d022c0a1b668e3,PodSandboxId:83caff9199eb85d80e88c4f8531ac1ec39b66e92e5f3b7f7cb7e960e35c4ea4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733342927337542360,Labels:map[string]string{io.kubernetes.contain
er.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25b5d213282d4e3d0b17f56770f58750,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52571ff875ebe7e2bae93811588ab15bcc178c9e1c0334570224e1b2bd359246,PodSandboxId:91df0913316d5fe6318abd1b00af1f31ce79fcbd082873c64a4aede83b9b139c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733342927317490542,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod
.name: etcd-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af968bcb5bb689c598a55bb96c345514,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2343748d9b3c27471f4dc81bc815b3b7cfa628a41f8708ffaeec870bf0c05f4,PodSandboxId:bccd9e2c068724fdade2d27ef529f8e648d95a17f366b1c7fc771540b909a24c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733342927271139282,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-739930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b85df04725e54b66c583c1e4307b02b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e1fb1960-5e79-4196-a205-7d6cccb43a5f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c09d55fbc3f94       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   8470389e19e5b       busybox-7dff88458-gg7dr
	92f0436c068d3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   fdd28652924af       coredns-7c65d6cfc9-7kbgr
	ab16b32e60a72       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   a639b811aff3b       coredns-7c65d6cfc9-8kztf
	a1496ef67bc6f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   235aa20e54db7       storage-provisioner
	f38276fe657c7       docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16    6 minutes ago       Running             kindnet-cni               0                   22f273a6fc170       kindnet-8wsgw
	8643b775b5352       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   30611e2a6fdcc       kube-proxy-tlhfv
	b4a22468ef5bd       ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e     6 minutes ago       Running             kube-vip                  0                   5f8113a27db24       kube-vip-ha-739930
	325ac1400e34a       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   a0e82c5e83a21       kube-scheduler-ha-739930
	1fdab5e7f0c11       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   83caff9199eb8       kube-apiserver-ha-739930
	52571ff875ebe       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   91df0913316d5       etcd-ha-739930
	c2343748d9b3c       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   bccd9e2c06872       kube-controller-manager-ha-739930
	
	
	==> coredns [92f0436c068d37f00d41a848d30e7457ee048433b86098444bdaf1dac7c4ae50] <==
	[INFO] 10.244.1.2:60420 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.0000998s
	[INFO] 10.244.2.2:43602 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198643s
	[INFO] 10.244.2.2:55688 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004203463s
	[INFO] 10.244.2.2:58147 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00017975s
	[INFO] 10.244.0.4:34390 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142716s
	[INFO] 10.244.0.4:33345 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000126491s
	[INFO] 10.244.1.2:52771 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001534902s
	[INFO] 10.244.1.2:50377 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155393s
	[INFO] 10.244.1.2:57617 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000204758s
	[INFO] 10.244.1.2:33315 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000087548s
	[INFO] 10.244.1.2:43721 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000138913s
	[INFO] 10.244.2.2:36167 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128945s
	[INFO] 10.244.2.2:39846 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000141449s
	[INFO] 10.244.0.4:49972 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079931s
	[INFO] 10.244.0.4:54249 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000163883s
	[INFO] 10.244.1.2:50096 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116516s
	[INFO] 10.244.1.2:45073 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000132387s
	[INFO] 10.244.2.2:49399 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000153554s
	[INFO] 10.244.2.2:59645 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000182375s
	[INFO] 10.244.0.4:58720 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128913s
	[INFO] 10.244.0.4:43247 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00014397s
	[INFO] 10.244.0.4:41555 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000088414s
	[INFO] 10.244.0.4:43722 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000065939s
	[INFO] 10.244.1.2:45770 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000102411s
	[INFO] 10.244.1.2:50474 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000112012s
	
	
	==> coredns [ab16b32e60a7287ff4948151ca59846f512d2a31828295582ecaf061d7dd0cac] <==
	[INFO] 10.244.1.2:40314 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002016375s
	[INFO] 10.244.2.2:49280 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000323723s
	[INFO] 10.244.2.2:39711 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000206446s
	[INFO] 10.244.2.2:58438 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003929293s
	[INFO] 10.244.2.2:51399 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000159908s
	[INFO] 10.244.2.2:39775 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000142713s
	[INFO] 10.244.0.4:59240 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001795102s
	[INFO] 10.244.0.4:58038 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000108734s
	[INFO] 10.244.0.4:54479 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000222678s
	[INFO] 10.244.0.4:48445 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001109511s
	[INFO] 10.244.0.4:56707 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000120069s
	[INFO] 10.244.0.4:44194 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082627s
	[INFO] 10.244.1.2:36003 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139108s
	[INFO] 10.244.1.2:48175 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001090843s
	[INFO] 10.244.1.2:54736 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072028s
	[INFO] 10.244.2.2:41244 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110768s
	[INFO] 10.244.2.2:58717 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088169s
	[INFO] 10.244.0.4:52576 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000161976s
	[INFO] 10.244.0.4:50935 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010896s
	[INFO] 10.244.1.2:40433 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160052s
	[INFO] 10.244.1.2:48574 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094093s
	[INFO] 10.244.2.2:40890 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131379s
	[INFO] 10.244.2.2:49685 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000289898s
	[INFO] 10.244.1.2:59160 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000148396s
	[INFO] 10.244.1.2:49691 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000140675s
	
	
	==> describe nodes <==
	Name:               ha-739930
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-739930
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59
	                    minikube.k8s.io/name=ha-739930
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_04T20_08_54_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 20:08:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-739930
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 20:15:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 20:11:56 +0000   Wed, 04 Dec 2024 20:08:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 20:11:56 +0000   Wed, 04 Dec 2024 20:08:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 20:11:56 +0000   Wed, 04 Dec 2024 20:08:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 20:11:56 +0000   Wed, 04 Dec 2024 20:09:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.183
	  Hostname:    ha-739930
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4a862467bfb34c3ba59a1a6944c8e8ad
	  System UUID:                4a862467-bfb3-4c3b-a59a-1a6944c8e8ad
	  Boot ID:                    88a12a5a-b072-479a-8944-b6767cbdf4f7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gg7dr              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 coredns-7c65d6cfc9-7kbgr             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m25s
	  kube-system                 coredns-7c65d6cfc9-8kztf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m25s
	  kube-system                 etcd-ha-739930                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m30s
	  kube-system                 kindnet-8wsgw                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m25s
	  kube-system                 kube-apiserver-ha-739930             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-controller-manager-ha-739930    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-proxy-tlhfv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 kube-scheduler-ha-739930             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-vip-ha-739930                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m32s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m24s  kube-proxy       
	  Normal  Starting                 6m30s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m30s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m30s  kubelet          Node ha-739930 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m30s  kubelet          Node ha-739930 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m30s  kubelet          Node ha-739930 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m26s  node-controller  Node ha-739930 event: Registered Node ha-739930 in Controller
	  Normal  NodeReady                6m10s  kubelet          Node ha-739930 status is now: NodeReady
	  Normal  RegisteredNode           5m30s  node-controller  Node ha-739930 event: Registered Node ha-739930 in Controller
	  Normal  RegisteredNode           4m15s  node-controller  Node ha-739930 event: Registered Node ha-739930 in Controller
	
	
	Name:               ha-739930-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-739930-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59
	                    minikube.k8s.io/name=ha-739930
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_04T20_09_48_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 20:09:46 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-739930-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 20:12:39 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 04 Dec 2024 20:11:48 +0000   Wed, 04 Dec 2024 20:13:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 04 Dec 2024 20:11:48 +0000   Wed, 04 Dec 2024 20:13:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 04 Dec 2024 20:11:48 +0000   Wed, 04 Dec 2024 20:13:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 04 Dec 2024 20:11:48 +0000   Wed, 04 Dec 2024 20:13:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.216
	  Hostname:    ha-739930-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 309500ff1508404f8337a542897e4a63
	  System UUID:                309500ff-1508-404f-8337-a542897e4a63
	  Boot ID:                    abc62bfe-1148-4265-a781-5ad8762ade09
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-kx56q                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 etcd-ha-739930-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m35s
	  kube-system                 kindnet-z6v65                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m37s
	  kube-system                 kube-apiserver-ha-739930-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 kube-controller-manager-ha-739930-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 kube-proxy-gtw7d                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	  kube-system                 kube-scheduler-ha-739930-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m32s
	  kube-system                 kube-vip-ha-739930-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m33s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m37s (x8 over 5m37s)  kubelet          Node ha-739930-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m37s (x8 over 5m37s)  kubelet          Node ha-739930-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m37s (x7 over 5m37s)  kubelet          Node ha-739930-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m36s                  node-controller  Node ha-739930-m02 event: Registered Node ha-739930-m02 in Controller
	  Normal  RegisteredNode           5m30s                  node-controller  Node ha-739930-m02 event: Registered Node ha-739930-m02 in Controller
	  Normal  RegisteredNode           4m15s                  node-controller  Node ha-739930-m02 event: Registered Node ha-739930-m02 in Controller
	  Normal  NodeNotReady             2m1s                   node-controller  Node ha-739930-m02 status is now: NodeNotReady
	
	
	Name:               ha-739930-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-739930-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59
	                    minikube.k8s.io/name=ha-739930
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_04T20_11_02_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 20:11:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-739930-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 20:15:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 20:12:01 +0000   Wed, 04 Dec 2024 20:11:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 20:12:01 +0000   Wed, 04 Dec 2024 20:11:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 20:12:01 +0000   Wed, 04 Dec 2024 20:11:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 20:12:01 +0000   Wed, 04 Dec 2024 20:11:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.176
	  Hostname:    ha-739930-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7eddf849e101457c8f603f9f7bb068e3
	  System UUID:                7eddf849-e101-457c-8f60-3f9f7bb068e3
	  Boot ID:                    94b82cc0-8208-45bb-85df-9fba3000dbef
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-9pz7p                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 etcd-ha-739930-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m21s
	  kube-system                 kindnet-d2rvr                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m23s
	  kube-system                 kube-apiserver-ha-739930-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 kube-controller-manager-ha-739930-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 kube-proxy-r4895                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 kube-scheduler-ha-739930-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 kube-vip-ha-739930-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m19s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  4m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m23s (x8 over 4m24s)  kubelet          Node ha-739930-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s (x8 over 4m24s)  kubelet          Node ha-739930-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m23s (x7 over 4m24s)  kubelet          Node ha-739930-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m21s                  node-controller  Node ha-739930-m03 event: Registered Node ha-739930-m03 in Controller
	  Normal  RegisteredNode           4m19s                  node-controller  Node ha-739930-m03 event: Registered Node ha-739930-m03 in Controller
	  Normal  RegisteredNode           4m15s                  node-controller  Node ha-739930-m03 event: Registered Node ha-739930-m03 in Controller
	
	
	Name:               ha-739930-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-739930-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59
	                    minikube.k8s.io/name=ha-739930
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_04T20_12_05_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 20:12:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-739930-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 20:15:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 20:12:35 +0000   Wed, 04 Dec 2024 20:12:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 20:12:35 +0000   Wed, 04 Dec 2024 20:12:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 20:12:35 +0000   Wed, 04 Dec 2024 20:12:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 20:12:35 +0000   Wed, 04 Dec 2024 20:12:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.230
	  Hostname:    ha-739930-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 caea6c34853a432f8606c2c81d5d7e80
	  System UUID:                caea6c34-853a-432f-8606-c2c81d5d7e80
	  Boot ID:                    64cbf16d-0924-4d4e-bb2e-e3fb57ad6cf8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2l856       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m18s
	  kube-system                 kube-proxy-2dnzj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m13s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  3m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m18s (x2 over 3m19s)  kubelet          Node ha-739930-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m18s (x2 over 3m19s)  kubelet          Node ha-739930-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m18s (x2 over 3m19s)  kubelet          Node ha-739930-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m16s                  node-controller  Node ha-739930-m04 event: Registered Node ha-739930-m04 in Controller
	  Normal  RegisteredNode           3m15s                  node-controller  Node ha-739930-m04 event: Registered Node ha-739930-m04 in Controller
	  Normal  RegisteredNode           3m14s                  node-controller  Node ha-739930-m04 event: Registered Node ha-739930-m04 in Controller
	  Normal  NodeReady                2m58s (x2 over 2m58s)  kubelet          Node ha-739930-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec 4 20:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053379] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038376] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.818831] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.961468] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +4.569504] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000011] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.583210] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.060308] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060487] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.188680] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.114168] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.247975] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +3.760825] systemd-fstab-generator[750]: Ignoring "noauto" option for root device
	[  +4.102978] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.066053] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.507773] systemd-fstab-generator[1298]: Ignoring "noauto" option for root device
	[  +0.085425] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.435723] kauditd_printk_skb: 21 callbacks suppressed
	[Dec 4 20:09] kauditd_printk_skb: 38 callbacks suppressed
	[ +38.420810] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [52571ff875ebe7e2bae93811588ab15bcc178c9e1c0334570224e1b2bd359246] <==
	{"level":"warn","ts":"2024-12-04T20:15:23.808951Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:23.870723Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:23.877529Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:23.887060Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:23.890475Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:23.901244Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:23.907014Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:23.910876Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:23.913907Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:23.918219Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:23.921266Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:23.928400Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:23.933552Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:23.939260Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:23.942351Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:23.945101Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:23.954230Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:23.959378Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:23.964611Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:23.967939Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:23.970888Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:23.973730Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:23.978941Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:23.984431Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T20:15:24.008651Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"6dfff839a0574192","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 20:15:24 up 7 min,  0 users,  load average: 0.33, 0.26, 0.12
	Linux ha-739930 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [f38276fe657c7e64c36f5e7048dd53d1f38f2a70a523fca08ac6aba6639b37e7] <==
	I1204 20:14:52.877511       1 main.go:324] Node ha-739930-m04 has CIDR [10.244.3.0/24] 
	I1204 20:15:02.869044       1 main.go:297] Handling node with IPs: map[192.168.39.183:{}]
	I1204 20:15:02.869284       1 main.go:301] handling current node
	I1204 20:15:02.869336       1 main.go:297] Handling node with IPs: map[192.168.39.216:{}]
	I1204 20:15:02.869343       1 main.go:324] Node ha-739930-m02 has CIDR [10.244.1.0/24] 
	I1204 20:15:02.869633       1 main.go:297] Handling node with IPs: map[192.168.39.176:{}]
	I1204 20:15:02.869654       1 main.go:324] Node ha-739930-m03 has CIDR [10.244.2.0/24] 
	I1204 20:15:02.869898       1 main.go:297] Handling node with IPs: map[192.168.39.230:{}]
	I1204 20:15:02.869919       1 main.go:324] Node ha-739930-m04 has CIDR [10.244.3.0/24] 
	I1204 20:15:12.876661       1 main.go:297] Handling node with IPs: map[192.168.39.183:{}]
	I1204 20:15:12.876725       1 main.go:301] handling current node
	I1204 20:15:12.876787       1 main.go:297] Handling node with IPs: map[192.168.39.216:{}]
	I1204 20:15:12.876795       1 main.go:324] Node ha-739930-m02 has CIDR [10.244.1.0/24] 
	I1204 20:15:12.877118       1 main.go:297] Handling node with IPs: map[192.168.39.176:{}]
	I1204 20:15:12.877138       1 main.go:324] Node ha-739930-m03 has CIDR [10.244.2.0/24] 
	I1204 20:15:12.877303       1 main.go:297] Handling node with IPs: map[192.168.39.230:{}]
	I1204 20:15:12.877319       1 main.go:324] Node ha-739930-m04 has CIDR [10.244.3.0/24] 
	I1204 20:15:22.878960       1 main.go:297] Handling node with IPs: map[192.168.39.230:{}]
	I1204 20:15:22.879003       1 main.go:324] Node ha-739930-m04 has CIDR [10.244.3.0/24] 
	I1204 20:15:22.879299       1 main.go:297] Handling node with IPs: map[192.168.39.183:{}]
	I1204 20:15:22.879336       1 main.go:301] handling current node
	I1204 20:15:22.879355       1 main.go:297] Handling node with IPs: map[192.168.39.216:{}]
	I1204 20:15:22.879379       1 main.go:324] Node ha-739930-m02 has CIDR [10.244.1.0/24] 
	I1204 20:15:22.879587       1 main.go:297] Handling node with IPs: map[192.168.39.176:{}]
	I1204 20:15:22.879616       1 main.go:324] Node ha-739930-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [1fdab5e7f0c119181d690a0296a5d0d8ba1871661cadaa54b8d022c0a1b668e3] <==
	I1204 20:08:52.109573       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1204 20:08:52.115869       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.183]
	I1204 20:08:52.116893       1 controller.go:615] quota admission added evaluator for: endpoints
	I1204 20:08:52.120949       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1204 20:08:52.319935       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1204 20:08:53.401361       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1204 20:08:53.418287       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1204 20:08:53.427159       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1204 20:08:57.975080       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1204 20:08:58.071170       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1204 20:11:33.595040       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51898: use of closed network connection
	E1204 20:11:33.787246       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51926: use of closed network connection
	E1204 20:11:33.961220       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51944: use of closed network connection
	E1204 20:11:34.139353       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51958: use of closed network connection
	E1204 20:11:34.492487       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51978: use of closed network connection
	E1204 20:11:34.660669       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51994: use of closed network connection
	E1204 20:11:34.825641       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52014: use of closed network connection
	E1204 20:11:35.000850       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52034: use of closed network connection
	E1204 20:11:35.295050       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52074: use of closed network connection
	E1204 20:11:35.467188       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52090: use of closed network connection
	E1204 20:11:35.632176       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52096: use of closed network connection
	E1204 20:11:35.802340       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52124: use of closed network connection
	E1204 20:11:35.976054       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52130: use of closed network connection
	E1204 20:11:36.156331       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52148: use of closed network connection
	W1204 20:13:02.138009       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.176 192.168.39.183]
	
	
	==> kube-controller-manager [c2343748d9b3c27471f4dc81bc815b3b7cfa628a41f8708ffaeec870bf0c05f4] <==
	I1204 20:12:05.098063       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-739930-m04" podCIDRs=["10.244.3.0/24"]
	I1204 20:12:05.098353       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:05.099501       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:05.129202       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:05.212844       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:05.605704       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:07.219432       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-739930-m04"
	I1204 20:12:07.250173       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:08.816441       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:09.034862       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:09.114294       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:09.193601       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:15.131792       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:25.187809       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-739930-m04"
	I1204 20:12:25.187897       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:25.200602       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:27.234376       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:12:35.291257       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m04"
	I1204 20:13:22.261174       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-739930-m04"
	I1204 20:13:22.262013       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m02"
	I1204 20:13:22.294239       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m02"
	I1204 20:13:22.349815       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="26.422518ms"
	I1204 20:13:22.353121       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="53.184µs"
	I1204 20:13:23.918547       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m02"
	I1204 20:13:27.468391       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-739930-m02"
	
	
	==> kube-proxy [8643b775b5352f9000b818ffdccfc9b8d9ce8d3bebf02d3707ef0c598107b627] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1204 20:08:59.055359       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1204 20:08:59.074919       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.183"]
	E1204 20:08:59.075054       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1204 20:08:59.106971       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1204 20:08:59.107053       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1204 20:08:59.107091       1 server_linux.go:169] "Using iptables Proxier"
	I1204 20:08:59.110117       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1204 20:08:59.110853       1 server.go:483] "Version info" version="v1.31.2"
	I1204 20:08:59.110911       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 20:08:59.113929       1 config.go:328] "Starting node config controller"
	I1204 20:08:59.113988       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1204 20:08:59.114597       1 config.go:199] "Starting service config controller"
	I1204 20:08:59.114621       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1204 20:08:59.114931       1 config.go:105] "Starting endpoint slice config controller"
	I1204 20:08:59.114959       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1204 20:08:59.214563       1 shared_informer.go:320] Caches are synced for node config
	I1204 20:08:59.215004       1 shared_informer.go:320] Caches are synced for service config
	I1204 20:08:59.216196       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [325ac1400e34aa08998a037b7bad43b257bdf9daf9a87fbce57d6eef87a7bef7] <==
	E1204 20:08:51.687075       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 20:08:51.698835       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1204 20:08:51.698950       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1204 20:08:51.756911       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1204 20:08:51.757061       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 20:08:51.761020       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1204 20:08:51.761159       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1204 20:08:54.377656       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1204 20:11:28.468555       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="e79c51d4-80e5-490b-906e-e376195d820e" pod="default/busybox-7dff88458-4zmkp" assumedNode="ha-739930-m02" currentNode="ha-739930-m03"
	E1204 20:11:28.510519       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-4zmkp\": pod busybox-7dff88458-4zmkp is already assigned to node \"ha-739930-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-4zmkp" node="ha-739930-m03"
	E1204 20:11:28.510990       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e79c51d4-80e5-490b-906e-e376195d820e(default/busybox-7dff88458-4zmkp) was assumed on ha-739930-m03 but assigned to ha-739930-m02" pod="default/busybox-7dff88458-4zmkp"
	E1204 20:11:28.511176       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-4zmkp\": pod busybox-7dff88458-4zmkp is already assigned to node \"ha-739930-m02\"" pod="default/busybox-7dff88458-4zmkp"
	I1204 20:11:28.511316       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-4zmkp" node="ha-739930-m02"
	I1204 20:11:28.544933       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="5411c4b8-6cb8-493d-8ce1-adcf557c68bc" pod="default/busybox-7dff88458-b94b5" assumedNode="ha-739930" currentNode="ha-739930-m03"
	E1204 20:11:28.557489       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-b94b5\": pod busybox-7dff88458-b94b5 is already assigned to node \"ha-739930\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-b94b5" node="ha-739930-m03"
	E1204 20:11:28.557560       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 5411c4b8-6cb8-493d-8ce1-adcf557c68bc(default/busybox-7dff88458-b94b5) was assumed on ha-739930-m03 but assigned to ha-739930" pod="default/busybox-7dff88458-b94b5"
	E1204 20:11:28.557587       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-b94b5\": pod busybox-7dff88458-b94b5 is already assigned to node \"ha-739930\"" pod="default/busybox-7dff88458-b94b5"
	I1204 20:11:28.557614       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-b94b5" node="ha-739930"
	E1204 20:11:30.014314       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-gg7dr\": pod busybox-7dff88458-gg7dr is already assigned to node \"ha-739930\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-gg7dr" node="ha-739930"
	E1204 20:11:30.014481       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a1f1ba1f-1720-4b97-a4a1-ab2d0c4cfaa5(default/busybox-7dff88458-gg7dr) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-gg7dr"
	E1204 20:11:30.015337       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-gg7dr\": pod busybox-7dff88458-gg7dr is already assigned to node \"ha-739930\"" pod="default/busybox-7dff88458-gg7dr"
	I1204 20:11:30.015401       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-gg7dr" node="ha-739930"
	E1204 20:12:05.139969       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-kswc6\": pod kindnet-kswc6 is already assigned to node \"ha-739930-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-kswc6" node="ha-739930-m04"
	E1204 20:12:05.140096       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-kswc6\": pod kindnet-kswc6 is already assigned to node \"ha-739930-m04\"" pod="kube-system/kindnet-kswc6"
	I1204 20:12:05.140125       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-kswc6" node="ha-739930-m04"
	
	
	==> kubelet <==
	Dec 04 20:13:53 ha-739930 kubelet[1305]: E1204 20:13:53.462332    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343233462001754,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:13:53 ha-739930 kubelet[1305]: E1204 20:13:53.462375    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343233462001754,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:03 ha-739930 kubelet[1305]: E1204 20:14:03.465094    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343243464625528,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:03 ha-739930 kubelet[1305]: E1204 20:14:03.465133    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343243464625528,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:13 ha-739930 kubelet[1305]: E1204 20:14:13.466702    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343253466412207,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:13 ha-739930 kubelet[1305]: E1204 20:14:13.467091    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343253466412207,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:23 ha-739930 kubelet[1305]: E1204 20:14:23.469001    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343263468683209,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:23 ha-739930 kubelet[1305]: E1204 20:14:23.469280    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343263468683209,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:33 ha-739930 kubelet[1305]: E1204 20:14:33.471311    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343273470919351,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:33 ha-739930 kubelet[1305]: E1204 20:14:33.471582    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343273470919351,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:43 ha-739930 kubelet[1305]: E1204 20:14:43.473913    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343283473338293,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:43 ha-739930 kubelet[1305]: E1204 20:14:43.474005    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343283473338293,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:53 ha-739930 kubelet[1305]: E1204 20:14:53.358128    1305 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 04 20:14:53 ha-739930 kubelet[1305]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 04 20:14:53 ha-739930 kubelet[1305]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 04 20:14:53 ha-739930 kubelet[1305]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 04 20:14:53 ha-739930 kubelet[1305]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 04 20:14:53 ha-739930 kubelet[1305]: E1204 20:14:53.476132    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343293475734296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:14:53 ha-739930 kubelet[1305]: E1204 20:14:53.476169    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343293475734296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:15:03 ha-739930 kubelet[1305]: E1204 20:15:03.477995    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343303477421901,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:15:03 ha-739930 kubelet[1305]: E1204 20:15:03.478354    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343303477421901,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:15:13 ha-739930 kubelet[1305]: E1204 20:15:13.481441    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343313479636396,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:15:13 ha-739930 kubelet[1305]: E1204 20:15:13.481510    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343313479636396,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:15:23 ha-739930 kubelet[1305]: E1204 20:15:23.483414    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343323483024120,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 20:15:23 ha-739930 kubelet[1305]: E1204 20:15:23.483445    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733343323483024120,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-739930 -n ha-739930
helpers_test.go:261: (dbg) Run:  kubectl --context ha-739930 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.32s)
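The journal excerpt above repeats two kubelet symptoms on a roughly 10-second cadence: the eviction manager cannot derive HasDedicatedImageFs because the CRI-O ImageFsInfo response only populates ImageFilesystems (ContainerFilesystems is empty), and the iptables canary fails because the "nat" table for ip6tables cannot be initialized in the guest (the ip6table_nat module does not appear to be loaded). A minimal triage sketch, assuming the profile name ha-739930 from this run and that crictl is available on the guest PATH:

    # Show the image filesystem stats the eviction manager is querying via CRI
    minikube -p ha-739930 ssh -- sudo crictl imagefsinfo

    # Check whether the ip6tables nat table can be initialized; loading
    # ip6table_nat is an assumption about the missing kernel module
    minikube -p ha-739930 ssh -- sudo modprobe ip6table_nat
    minikube -p ha-739930 ssh -- sudo ip6tables -t nat -L -n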

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (414.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-739930 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-739930 -v=7 --alsologtostderr
E1204 20:17:26.275322   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/functional-763517/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-739930 -v=7 --alsologtostderr: exit status 82 (2m1.815612816s)

                                                
                                                
-- stdout --
	* Stopping node "ha-739930-m04"  ...
	* Stopping node "ha-739930-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 20:15:25.022085   33179 out.go:345] Setting OutFile to fd 1 ...
	I1204 20:15:25.022197   33179 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 20:15:25.022206   33179 out.go:358] Setting ErrFile to fd 2...
	I1204 20:15:25.022210   33179 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 20:15:25.022367   33179 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19985-10581/.minikube/bin
	I1204 20:15:25.022577   33179 out.go:352] Setting JSON to false
	I1204 20:15:25.022666   33179 mustload.go:65] Loading cluster: ha-739930
	I1204 20:15:25.023055   33179 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:15:25.023139   33179 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/config.json ...
	I1204 20:15:25.023314   33179 mustload.go:65] Loading cluster: ha-739930
	I1204 20:15:25.023475   33179 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:15:25.023508   33179 stop.go:39] StopHost: ha-739930-m04
	I1204 20:15:25.023903   33179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:15:25.023971   33179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:15:25.038685   33179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38093
	I1204 20:15:25.039132   33179 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:15:25.039733   33179 main.go:141] libmachine: Using API Version  1
	I1204 20:15:25.039758   33179 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:15:25.040137   33179 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:15:25.042546   33179 out.go:177] * Stopping node "ha-739930-m04"  ...
	I1204 20:15:25.044061   33179 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1204 20:15:25.044099   33179 main.go:141] libmachine: (ha-739930-m04) Calling .DriverName
	I1204 20:15:25.044307   33179 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1204 20:15:25.044338   33179 main.go:141] libmachine: (ha-739930-m04) Calling .GetSSHHostname
	I1204 20:15:25.047246   33179 main.go:141] libmachine: (ha-739930-m04) DBG | domain ha-739930-m04 has defined MAC address 52:54:00:18:4f:99 in network mk-ha-739930
	I1204 20:15:25.047635   33179 main.go:141] libmachine: (ha-739930-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:4f:99", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:11:51 +0000 UTC Type:0 Mac:52:54:00:18:4f:99 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-739930-m04 Clientid:01:52:54:00:18:4f:99}
	I1204 20:15:25.047665   33179 main.go:141] libmachine: (ha-739930-m04) DBG | domain ha-739930-m04 has defined IP address 192.168.39.230 and MAC address 52:54:00:18:4f:99 in network mk-ha-739930
	I1204 20:15:25.047795   33179 main.go:141] libmachine: (ha-739930-m04) Calling .GetSSHPort
	I1204 20:15:25.047951   33179 main.go:141] libmachine: (ha-739930-m04) Calling .GetSSHKeyPath
	I1204 20:15:25.048078   33179 main.go:141] libmachine: (ha-739930-m04) Calling .GetSSHUsername
	I1204 20:15:25.048257   33179 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m04/id_rsa Username:docker}
	I1204 20:15:25.132052   33179 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1204 20:15:25.185032   33179 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1204 20:15:25.237697   33179 main.go:141] libmachine: Stopping "ha-739930-m04"...
	I1204 20:15:25.237740   33179 main.go:141] libmachine: (ha-739930-m04) Calling .GetState
	I1204 20:15:25.239292   33179 main.go:141] libmachine: (ha-739930-m04) Calling .Stop
	I1204 20:15:25.242932   33179 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 0/120
	I1204 20:15:26.366565   33179 main.go:141] libmachine: (ha-739930-m04) Calling .GetState
	I1204 20:15:26.367894   33179 main.go:141] libmachine: Machine "ha-739930-m04" was stopped.
	I1204 20:15:26.367912   33179 stop.go:75] duration metric: took 1.323858535s to stop
	I1204 20:15:26.367935   33179 stop.go:39] StopHost: ha-739930-m03
	I1204 20:15:26.368224   33179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:15:26.368275   33179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:15:26.383350   33179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39199
	I1204 20:15:26.383844   33179 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:15:26.384396   33179 main.go:141] libmachine: Using API Version  1
	I1204 20:15:26.384418   33179 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:15:26.384780   33179 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:15:26.386726   33179 out.go:177] * Stopping node "ha-739930-m03"  ...
	I1204 20:15:26.387898   33179 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1204 20:15:26.387924   33179 main.go:141] libmachine: (ha-739930-m03) Calling .DriverName
	I1204 20:15:26.388153   33179 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1204 20:15:26.388182   33179 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHHostname
	I1204 20:15:26.391048   33179 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:15:26.391638   33179 main.go:141] libmachine: (ha-739930-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:55:42", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:10:26 +0000 UTC Type:0 Mac:52:54:00:8f:55:42 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-739930-m03 Clientid:01:52:54:00:8f:55:42}
	I1204 20:15:26.391666   33179 main.go:141] libmachine: (ha-739930-m03) DBG | domain ha-739930-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:8f:55:42 in network mk-ha-739930
	I1204 20:15:26.391746   33179 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHPort
	I1204 20:15:26.391893   33179 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHKeyPath
	I1204 20:15:26.392062   33179 main.go:141] libmachine: (ha-739930-m03) Calling .GetSSHUsername
	I1204 20:15:26.392202   33179 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m03/id_rsa Username:docker}
	I1204 20:15:26.480533   33179 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1204 20:15:26.534758   33179 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1204 20:15:26.590633   33179 main.go:141] libmachine: Stopping "ha-739930-m03"...
	I1204 20:15:26.590655   33179 main.go:141] libmachine: (ha-739930-m03) Calling .GetState
	I1204 20:15:26.592094   33179 main.go:141] libmachine: (ha-739930-m03) Calling .Stop
	I1204 20:15:26.595392   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 0/120
	I1204 20:15:27.596790   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 1/120
	I1204 20:15:28.598151   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 2/120
	I1204 20:15:29.599703   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 3/120
	I1204 20:15:30.601142   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 4/120
	I1204 20:15:31.603061   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 5/120
	I1204 20:15:32.605127   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 6/120
	I1204 20:15:33.606356   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 7/120
	I1204 20:15:34.607986   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 8/120
	I1204 20:15:35.609450   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 9/120
	I1204 20:15:36.611639   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 10/120
	I1204 20:15:37.613072   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 11/120
	I1204 20:15:38.614350   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 12/120
	I1204 20:15:39.615999   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 13/120
	I1204 20:15:40.617372   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 14/120
	I1204 20:15:41.619334   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 15/120
	I1204 20:15:42.621214   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 16/120
	I1204 20:15:43.622643   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 17/120
	I1204 20:15:44.623977   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 18/120
	I1204 20:15:45.625414   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 19/120
	I1204 20:15:46.627593   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 20/120
	I1204 20:15:47.629388   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 21/120
	I1204 20:15:48.631817   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 22/120
	I1204 20:15:49.634254   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 23/120
	I1204 20:15:50.635798   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 24/120
	I1204 20:15:51.637221   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 25/120
	I1204 20:15:52.638839   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 26/120
	I1204 20:15:53.640933   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 27/120
	I1204 20:15:54.642497   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 28/120
	I1204 20:15:55.644231   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 29/120
	I1204 20:15:56.646241   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 30/120
	I1204 20:15:57.647669   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 31/120
	I1204 20:15:58.649076   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 32/120
	I1204 20:15:59.650306   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 33/120
	I1204 20:16:00.651585   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 34/120
	I1204 20:16:01.653358   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 35/120
	I1204 20:16:02.654936   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 36/120
	I1204 20:16:03.656313   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 37/120
	I1204 20:16:04.657766   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 38/120
	I1204 20:16:05.659258   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 39/120
	I1204 20:16:06.661137   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 40/120
	I1204 20:16:07.662972   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 41/120
	I1204 20:16:08.664375   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 42/120
	I1204 20:16:09.665839   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 43/120
	I1204 20:16:10.667265   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 44/120
	I1204 20:16:11.669457   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 45/120
	I1204 20:16:12.670712   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 46/120
	I1204 20:16:13.672275   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 47/120
	I1204 20:16:14.673594   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 48/120
	I1204 20:16:15.675130   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 49/120
	I1204 20:16:16.676676   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 50/120
	I1204 20:16:17.678253   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 51/120
	I1204 20:16:18.679688   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 52/120
	I1204 20:16:19.681049   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 53/120
	I1204 20:16:20.682331   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 54/120
	I1204 20:16:21.684143   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 55/120
	I1204 20:16:22.685423   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 56/120
	I1204 20:16:23.686837   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 57/120
	I1204 20:16:24.688233   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 58/120
	I1204 20:16:25.689794   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 59/120
	I1204 20:16:26.691363   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 60/120
	I1204 20:16:27.693632   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 61/120
	I1204 20:16:28.695070   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 62/120
	I1204 20:16:29.696690   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 63/120
	I1204 20:16:30.698128   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 64/120
	I1204 20:16:31.699799   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 65/120
	I1204 20:16:32.701000   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 66/120
	I1204 20:16:33.702420   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 67/120
	I1204 20:16:34.703835   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 68/120
	I1204 20:16:35.705373   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 69/120
	I1204 20:16:36.707117   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 70/120
	I1204 20:16:37.708442   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 71/120
	I1204 20:16:38.709787   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 72/120
	I1204 20:16:39.711144   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 73/120
	I1204 20:16:40.712394   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 74/120
	I1204 20:16:41.713964   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 75/120
	I1204 20:16:42.715271   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 76/120
	I1204 20:16:43.716443   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 77/120
	I1204 20:16:44.717823   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 78/120
	I1204 20:16:45.718968   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 79/120
	I1204 20:16:46.720568   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 80/120
	I1204 20:16:47.722099   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 81/120
	I1204 20:16:48.723342   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 82/120
	I1204 20:16:49.724907   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 83/120
	I1204 20:16:50.726237   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 84/120
	I1204 20:16:51.727951   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 85/120
	I1204 20:16:52.729564   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 86/120
	I1204 20:16:53.731363   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 87/120
	I1204 20:16:54.732665   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 88/120
	I1204 20:16:55.733939   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 89/120
	I1204 20:16:56.735554   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 90/120
	I1204 20:16:57.737874   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 91/120
	I1204 20:16:58.739337   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 92/120
	I1204 20:16:59.740807   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 93/120
	I1204 20:17:00.742150   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 94/120
	I1204 20:17:01.743939   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 95/120
	I1204 20:17:02.745431   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 96/120
	I1204 20:17:03.746837   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 97/120
	I1204 20:17:04.748585   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 98/120
	I1204 20:17:05.749971   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 99/120
	I1204 20:17:06.751839   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 100/120
	I1204 20:17:07.753146   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 101/120
	I1204 20:17:08.754387   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 102/120
	I1204 20:17:09.755764   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 103/120
	I1204 20:17:10.757874   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 104/120
	I1204 20:17:11.759963   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 105/120
	I1204 20:17:12.761962   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 106/120
	I1204 20:17:13.763491   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 107/120
	I1204 20:17:14.764890   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 108/120
	I1204 20:17:15.766230   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 109/120
	I1204 20:17:16.768609   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 110/120
	I1204 20:17:17.770009   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 111/120
	I1204 20:17:18.771403   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 112/120
	I1204 20:17:19.772653   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 113/120
	I1204 20:17:20.774095   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 114/120
	I1204 20:17:21.775769   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 115/120
	I1204 20:17:22.777770   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 116/120
	I1204 20:17:23.779051   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 117/120
	I1204 20:17:24.781024   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 118/120
	I1204 20:17:25.782261   33179 main.go:141] libmachine: (ha-739930-m03) Waiting for machine to stop 119/120
	I1204 20:17:26.783263   33179 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1204 20:17:26.783324   33179 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1204 20:17:26.785524   33179 out.go:201] 
	W1204 20:17:26.787013   33179 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1204 20:17:26.787031   33179 out.go:270] * 
	* 
	W1204 20:17:26.789263   33179 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 20:17:26.790535   33179 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:466: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-739930 -v=7 --alsologtostderr" : exit status 82
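The exit status 82 here corresponds to the GUEST_STOP_TIMEOUT shown in the stderr above: ha-739930-m04 stopped in about 1.3s, but ha-739930-m03 was still reported as "Running" after the full 0/120..119/120 wait loop (roughly two minutes), so the stop command gave up. When reproducing locally with the kvm2 driver, the stuck guest can be inspected and forced off through libvirt; this sketch assumes the libvirt domain carries the same name as the minikube node and that the qemu:///system connection is in use:

    # Confirm the domain is still running
    virsh -c qemu:///system list --all

    # Request an ACPI shutdown first; fall back to a hard power-off
    virsh -c qemu:///system shutdown ha-739930-m03
    virsh -c qemu:///system destroy ha-739930-m03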
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-739930 --wait=true -v=7 --alsologtostderr
E1204 20:17:53.977376   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/functional-763517/client.crt: no such file or directory" logger="UnhandledError"
E1204 20:19:52.902349   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.crt: no such file or directory" logger="UnhandledError"
E1204 20:21:15.968696   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-739930 --wait=true -v=7 --alsologtostderr: (4m49.777138857s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-739930
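After the stop timeout, the follow-up "start -p ha-739930 --wait=true" recovered the cluster in roughly 4m50s, and the test then checks the node list before the post-mortem below. A sketch of re-running only this subtest from a minikube source checkout; -run, -timeout, and -v are standard go test flags, while any build tags or driver/binary arguments used by this CI job are assumptions omitted here and would need to match the kvm2/crio configuration:

    # Re-run only the failing subtest; additional flags required by the
    # minikube integration harness (driver, binary path) are not shown
    go test ./test/integration -run "TestMultiControlPlane/serial/RestartClusterKeepsNodes" -timeout 60m -v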
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-739930 -n ha-739930
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-739930 logs -n 25: (1.98159985s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-739930 cp ha-739930-m03:/home/docker/cp-test.txt                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m02:/home/docker/cp-test_ha-739930-m03_ha-739930-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n ha-739930-m02 sudo cat                                          | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | /home/docker/cp-test_ha-739930-m03_ha-739930-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-739930 cp ha-739930-m03:/home/docker/cp-test.txt                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m04:/home/docker/cp-test_ha-739930-m03_ha-739930-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n ha-739930-m04 sudo cat                                          | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | /home/docker/cp-test_ha-739930-m03_ha-739930-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-739930 cp testdata/cp-test.txt                                                | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-739930 cp ha-739930-m04:/home/docker/cp-test.txt                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1344431772/001/cp-test_ha-739930-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-739930 cp ha-739930-m04:/home/docker/cp-test.txt                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930:/home/docker/cp-test_ha-739930-m04_ha-739930.txt                       |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n ha-739930 sudo cat                                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | /home/docker/cp-test_ha-739930-m04_ha-739930.txt                                 |           |         |         |                     |                     |
	| cp      | ha-739930 cp ha-739930-m04:/home/docker/cp-test.txt                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m02:/home/docker/cp-test_ha-739930-m04_ha-739930-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n ha-739930-m02 sudo cat                                          | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | /home/docker/cp-test_ha-739930-m04_ha-739930-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-739930 cp ha-739930-m04:/home/docker/cp-test.txt                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m03:/home/docker/cp-test_ha-739930-m04_ha-739930-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n ha-739930-m03 sudo cat                                          | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | /home/docker/cp-test_ha-739930-m04_ha-739930-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-739930 node stop m02 -v=7                                                     | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-739930 node start m02 -v=7                                                    | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:15 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-739930 -v=7                                                           | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:15 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-739930 -v=7                                                                | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:15 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-739930 --wait=true -v=7                                                    | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:17 UTC | 04 Dec 24 20:22 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-739930                                                                | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:22 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 20:17:26
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 20:17:26.838279   33690 out.go:345] Setting OutFile to fd 1 ...
	I1204 20:17:26.838931   33690 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 20:17:26.838988   33690 out.go:358] Setting ErrFile to fd 2...
	I1204 20:17:26.839004   33690 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 20:17:26.839465   33690 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19985-10581/.minikube/bin
	I1204 20:17:26.840353   33690 out.go:352] Setting JSON to false
	I1204 20:17:26.841245   33690 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3597,"bootTime":1733339850,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 20:17:26.841346   33690 start.go:139] virtualization: kvm guest
	I1204 20:17:26.843459   33690 out.go:177] * [ha-739930] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1204 20:17:26.844805   33690 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 20:17:26.844837   33690 notify.go:220] Checking for updates...
	I1204 20:17:26.847501   33690 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 20:17:26.848735   33690 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 20:17:26.849936   33690 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 20:17:26.851096   33690 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1204 20:17:26.852352   33690 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 20:17:26.853960   33690 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:17:26.854090   33690 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 20:17:26.854533   33690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:17:26.854572   33690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:17:26.870144   33690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41181
	I1204 20:17:26.870690   33690 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:17:26.871185   33690 main.go:141] libmachine: Using API Version  1
	I1204 20:17:26.871205   33690 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:17:26.871601   33690 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:17:26.871787   33690 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:17:26.910512   33690 out.go:177] * Using the kvm2 driver based on existing profile
	I1204 20:17:26.911723   33690 start.go:297] selected driver: kvm2
	I1204 20:17:26.911741   33690 start.go:901] validating driver "kvm2" against &{Name:ha-739930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.230 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false
default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 20:17:26.911913   33690 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 20:17:26.912253   33690 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 20:17:26.912346   33690 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19985-10581/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1204 20:17:26.927774   33690 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1204 20:17:26.928463   33690 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 20:17:26.928502   33690 cni.go:84] Creating CNI manager for ""
	I1204 20:17:26.928567   33690 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1204 20:17:26.928634   33690 start.go:340] cluster config:
	{Name:ha-739930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.230 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:
false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 20:17:26.928785   33690 iso.go:125] acquiring lock: {Name:mk5fb0f3f6da76e6cd812291a551e1592ef2c232 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 20:17:26.930599   33690 out.go:177] * Starting "ha-739930" primary control-plane node in "ha-739930" cluster
	I1204 20:17:26.931965   33690 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 20:17:26.932004   33690 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1204 20:17:26.932017   33690 cache.go:56] Caching tarball of preloaded images
	I1204 20:17:26.932091   33690 preload.go:172] Found /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 20:17:26.932104   33690 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 20:17:26.932240   33690 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/config.json ...
	I1204 20:17:26.932483   33690 start.go:360] acquireMachinesLock for ha-739930: {Name:mkf124e8b45170ae95981b24944344de6899c5b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 20:17:26.932539   33690 start.go:364] duration metric: took 33.838µs to acquireMachinesLock for "ha-739930"
	I1204 20:17:26.932560   33690 start.go:96] Skipping create...Using existing machine configuration
	I1204 20:17:26.932571   33690 fix.go:54] fixHost starting: 
	I1204 20:17:26.932846   33690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:17:26.932889   33690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:17:26.947520   33690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46483
	I1204 20:17:26.947987   33690 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:17:26.948497   33690 main.go:141] libmachine: Using API Version  1
	I1204 20:17:26.948523   33690 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:17:26.948844   33690 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:17:26.949027   33690 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:17:26.949177   33690 main.go:141] libmachine: (ha-739930) Calling .GetState
	I1204 20:17:26.950657   33690 fix.go:112] recreateIfNeeded on ha-739930: state=Running err=<nil>
	W1204 20:17:26.950678   33690 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 20:17:26.953503   33690 out.go:177] * Updating the running kvm2 "ha-739930" VM ...
	I1204 20:17:26.954803   33690 machine.go:93] provisionDockerMachine start ...
	I1204 20:17:26.954821   33690 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:17:26.955003   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:17:26.957380   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:17:26.957824   33690 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:17:26.957852   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:17:26.958007   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:17:26.958169   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:17:26.958312   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:17:26.958422   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:17:26.958565   33690 main.go:141] libmachine: Using SSH client type: native
	I1204 20:17:26.958743   33690 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1204 20:17:26.958753   33690 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 20:17:27.072207   33690 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-739930
	
	I1204 20:17:27.072234   33690 main.go:141] libmachine: (ha-739930) Calling .GetMachineName
	I1204 20:17:27.072451   33690 buildroot.go:166] provisioning hostname "ha-739930"
	I1204 20:17:27.072478   33690 main.go:141] libmachine: (ha-739930) Calling .GetMachineName
	I1204 20:17:27.072666   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:17:27.075289   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:17:27.075652   33690 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:17:27.075687   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:17:27.075776   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:17:27.075932   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:17:27.076077   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:17:27.076194   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:17:27.076328   33690 main.go:141] libmachine: Using SSH client type: native
	I1204 20:17:27.076540   33690 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1204 20:17:27.076556   33690 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-739930 && echo "ha-739930" | sudo tee /etc/hostname
	I1204 20:17:27.195565   33690 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-739930
	
	I1204 20:17:27.195593   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:17:27.198080   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:17:27.198418   33690 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:17:27.198446   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:17:27.198605   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:17:27.198772   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:17:27.198904   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:17:27.199001   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:17:27.199201   33690 main.go:141] libmachine: Using SSH client type: native
	I1204 20:17:27.199429   33690 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1204 20:17:27.199463   33690 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-739930' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-739930/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-739930' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 20:17:27.304283   33690 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 20:17:27.304321   33690 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 20:17:27.304338   33690 buildroot.go:174] setting up certificates
	I1204 20:17:27.304348   33690 provision.go:84] configureAuth start
	I1204 20:17:27.304356   33690 main.go:141] libmachine: (ha-739930) Calling .GetMachineName
	I1204 20:17:27.304635   33690 main.go:141] libmachine: (ha-739930) Calling .GetIP
	I1204 20:17:27.307125   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:17:27.307504   33690 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:17:27.307524   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:17:27.307694   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:17:27.309672   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:17:27.310015   33690 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:17:27.310043   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:17:27.310174   33690 provision.go:143] copyHostCerts
	I1204 20:17:27.310214   33690 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 20:17:27.310288   33690 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 20:17:27.310306   33690 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 20:17:27.310375   33690 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 20:17:27.310475   33690 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 20:17:27.310495   33690 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 20:17:27.310504   33690 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 20:17:27.310533   33690 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 20:17:27.310588   33690 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 20:17:27.310605   33690 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 20:17:27.310609   33690 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 20:17:27.310629   33690 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
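
The copyHostCerts lines above repeat the same pattern for each certificate: if a stale copy already exists under the minikube home it is removed, then the fresh cert is copied over and its size is logged. A minimal Go sketch of that remove-then-copy step, with illustrative paths and a hypothetical helper name rather than minikube's actual exec_runner API:

    package main

    import (
        "fmt"
        "io"
        "os"
    )

    // copyCert mirrors the remove-then-copy pattern in the log
    // ("found ..., removing ..." followed by "cp: ... --> ... (N bytes)").
    // It is an illustrative helper, not minikube's code.
    func copyCert(src, dst string) error {
        // Remove any stale destination first so the new file gets fresh
        // contents and permissions.
        if _, err := os.Stat(dst); err == nil {
            if err := os.Remove(dst); err != nil {
                return err
            }
        }
        in, err := os.Open(src)
        if err != nil {
            return err
        }
        defer in.Close()
        out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o600)
        if err != nil {
            return err
        }
        defer out.Close()
        n, err := io.Copy(out, in)
        if err != nil {
            return err
        }
        fmt.Printf("cp: %s --> %s (%d bytes)\n", src, dst, n)
        return nil
    }

    func main() {
        // Illustrative paths only; the log copies ca.pem, cert.pem and key.pem.
        if err := copyCert("certs/ca.pem", "ca.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
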
	I1204 20:17:27.310686   33690 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.ha-739930 san=[127.0.0.1 192.168.39.183 ha-739930 localhost minikube]
	I1204 20:17:27.713774   33690 provision.go:177] copyRemoteCerts
	I1204 20:17:27.713835   33690 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 20:17:27.713863   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:17:27.716767   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:17:27.717125   33690 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:17:27.717155   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:17:27.717343   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:17:27.717522   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:17:27.717651   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:17:27.717749   33690 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:17:27.797393   33690 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1204 20:17:27.797470   33690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1204 20:17:27.823088   33690 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1204 20:17:27.823168   33690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 20:17:27.847270   33690 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1204 20:17:27.847335   33690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 20:17:27.872086   33690 provision.go:87] duration metric: took 567.718604ms to configureAuth
	I1204 20:17:27.872126   33690 buildroot.go:189] setting minikube options for container-runtime
	I1204 20:17:27.872387   33690 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:17:27.872463   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:17:27.875020   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:17:27.875393   33690 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:17:27.875434   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:17:27.875646   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:17:27.875811   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:17:27.875949   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:17:27.876061   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:17:27.876202   33690 main.go:141] libmachine: Using SSH client type: native
	I1204 20:17:27.876361   33690 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1204 20:17:27.876376   33690 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 20:18:58.735317   33690 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 20:18:58.735346   33690 machine.go:96] duration metric: took 1m31.780530614s to provisionDockerMachine
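
Most of that 1m31s is spent in the single SSH command above, which writes the CRIO_MINIKUBE_OPTIONS drop-in under /etc/sysconfig and then restarts CRI-O. A rough sketch of how such a command string can be assembled, using a hypothetical helper rather than minikube's ssh_runner:

    package main

    import "fmt"

    // buildCRIOConfigCmd assembles the shell pipeline seen in the log:
    // write a sysconfig drop-in with the insecure-registry flag, then
    // restart crio so it picks the flag up. Illustrative only.
    func buildCRIOConfigCmd(serviceCIDR string) string {
        dropIn := fmt.Sprintf("\nCRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", serviceCIDR)
        return "sudo mkdir -p /etc/sysconfig && printf %s \"" + dropIn +
            "\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio"
    }

    func main() {
        fmt.Println(buildCRIOConfigCmd("10.96.0.0/12"))
    }
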
	I1204 20:18:58.735361   33690 start.go:293] postStartSetup for "ha-739930" (driver="kvm2")
	I1204 20:18:58.735389   33690 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 20:18:58.735414   33690 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:18:58.735798   33690 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 20:18:58.735833   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:18:58.740069   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:18:58.740625   33690 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:18:58.740645   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:18:58.740864   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:18:58.741083   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:18:58.741240   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:18:58.741382   33690 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:18:58.822226   33690 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 20:18:58.826251   33690 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 20:18:58.826282   33690 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 20:18:58.826380   33690 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 20:18:58.826474   33690 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 20:18:58.826487   33690 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> /etc/ssl/certs/177432.pem
	I1204 20:18:58.826600   33690 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 20:18:58.835608   33690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 20:18:58.857928   33690 start.go:296] duration metric: took 122.554322ms for postStartSetup
	I1204 20:18:58.857977   33690 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:18:58.858274   33690 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1204 20:18:58.858305   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:18:58.860967   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:18:58.861380   33690 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:18:58.861402   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:18:58.861584   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:18:58.861750   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:18:58.861890   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:18:58.862009   33690 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	W1204 20:18:58.941150   33690 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1204 20:18:58.941184   33690 fix.go:56] duration metric: took 1m32.008614445s for fixHost
	I1204 20:18:58.941206   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:18:58.943969   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:18:58.944337   33690 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:18:58.944358   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:18:58.944476   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:18:58.944675   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:18:58.944848   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:18:58.944954   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:18:58.945112   33690 main.go:141] libmachine: Using SSH client type: native
	I1204 20:18:58.945270   33690 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1204 20:18:58.945280   33690 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 20:18:59.047822   33690 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733343539.018566341
	
	I1204 20:18:59.047853   33690 fix.go:216] guest clock: 1733343539.018566341
	I1204 20:18:59.047865   33690 fix.go:229] Guest: 2024-12-04 20:18:59.018566341 +0000 UTC Remote: 2024-12-04 20:18:58.941192084 +0000 UTC m=+92.140065350 (delta=77.374257ms)
	I1204 20:18:59.047907   33690 fix.go:200] guest clock delta is within tolerance: 77.374257ms
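
The clock check above parses the guest's `date +%s.%N` output and compares it against the host time, only resynchronizing when the delta exceeds a tolerance. A small sketch of that comparison; the tolerance value is an assumption for illustration, since the log only reports that the delta "is within tolerance":

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock turns "1733343539.018566341" (output of `date +%s.%N`)
    // into a time.Time.
    func parseGuestClock(s string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            // Pad to 9 digits so the fraction is read as nanoseconds.
            frac := (parts[1] + "000000000")[:9]
            nsec, err = strconv.ParseInt(frac, 10, 64)
            if err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1733343539.018566341")
        if err != nil {
            panic(err)
        }
        delta := time.Since(guest)
        // Assumed tolerance, purely for illustration.
        const tolerance = 2 * time.Second
        if math.Abs(float64(delta)) > float64(tolerance) {
            fmt.Printf("guest clock delta %v exceeds %v, would resync\n", delta, tolerance)
        } else {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        }
    }
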
	I1204 20:18:59.047928   33690 start.go:83] releasing machines lock for "ha-739930", held for 1m32.115372791s
	I1204 20:18:59.047961   33690 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:18:59.048234   33690 main.go:141] libmachine: (ha-739930) Calling .GetIP
	I1204 20:18:59.050666   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:18:59.051020   33690 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:18:59.051048   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:18:59.051147   33690 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:18:59.051752   33690 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:18:59.051889   33690 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:18:59.051949   33690 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 20:18:59.051996   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:18:59.052062   33690 ssh_runner.go:195] Run: cat /version.json
	I1204 20:18:59.052083   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:18:59.054644   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:18:59.054781   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:18:59.055014   33690 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:18:59.055034   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:18:59.055156   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:18:59.055308   33690 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:18:59.055335   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:18:59.055336   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:18:59.055501   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:18:59.055502   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:18:59.055682   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:18:59.055683   33690 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:18:59.055809   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:18:59.055920   33690 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:18:59.152009   33690 ssh_runner.go:195] Run: systemctl --version
	I1204 20:18:59.157898   33690 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 20:18:59.318190   33690 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 20:18:59.323664   33690 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 20:18:59.323739   33690 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 20:18:59.332384   33690 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
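
The lines above look for bridge/podman CNI configs under /etc/cni/net.d and rename any matches to *.mk_disabled so they do not conflict with the CNI minikube deploys. A rough Go equivalent of that rename pass (the directory and suffix are taken from the log; the helper itself is illustrative, not minikube's code):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // disableBridgeCNIConfigs renames bridge/podman CNI config files so the
    // runtime ignores them, mirroring the `find ... -exec mv {} {}.mk_disabled`
    // command in the log.
    func disableBridgeCNIConfigs(dir string) ([]string, error) {
        var disabled []string
        for _, pattern := range []string{"*bridge*", "*podman*"} {
            matches, err := filepath.Glob(filepath.Join(dir, pattern))
            if err != nil {
                return nil, err
            }
            for _, m := range matches {
                if filepath.Ext(m) == ".mk_disabled" {
                    continue // already disabled
                }
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    return nil, err
                }
                disabled = append(disabled, m)
            }
        }
        return disabled, nil
    }

    func main() {
        disabled, err := disableBridgeCNIConfigs("/etc/cni/net.d")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if len(disabled) == 0 {
            fmt.Println("no active bridge cni configs found - nothing to disable")
        } else {
            fmt.Println("disabled:", disabled)
        }
    }
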
	I1204 20:18:59.332412   33690 start.go:495] detecting cgroup driver to use...
	I1204 20:18:59.332483   33690 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 20:18:59.347821   33690 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 20:18:59.361658   33690 docker.go:217] disabling cri-docker service (if available) ...
	I1204 20:18:59.361716   33690 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 20:18:59.374964   33690 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 20:18:59.388212   33690 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 20:18:59.541439   33690 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 20:18:59.687720   33690 docker.go:233] disabling docker service ...
	I1204 20:18:59.687795   33690 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 20:18:59.703016   33690 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 20:18:59.715634   33690 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 20:18:59.853659   33690 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 20:18:59.992167   33690 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 20:19:00.005217   33690 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 20:19:00.021900   33690 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 20:19:00.021946   33690 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:19:00.031205   33690 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 20:19:00.031245   33690 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:19:00.040226   33690 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:19:00.049236   33690 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:19:00.058143   33690 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 20:19:00.067216   33690 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:19:00.076313   33690 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:19:00.085824   33690 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:19:00.094659   33690 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 20:19:00.102761   33690 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 20:19:00.110630   33690 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 20:19:00.244771   33690 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 20:19:03.773378   33690 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.528570922s)
	I1204 20:19:03.773413   33690 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 20:19:03.773466   33690 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 20:19:03.778736   33690 start.go:563] Will wait 60s for crictl version
	I1204 20:19:03.778791   33690 ssh_runner.go:195] Run: which crictl
	I1204 20:19:03.782248   33690 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 20:19:03.820809   33690 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 20:19:03.820895   33690 ssh_runner.go:195] Run: crio --version
	I1204 20:19:03.848968   33690 ssh_runner.go:195] Run: crio --version
	I1204 20:19:03.878052   33690 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 20:19:03.879120   33690 main.go:141] libmachine: (ha-739930) Calling .GetIP
	I1204 20:19:03.881626   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:19:03.881973   33690 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:19:03.881995   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:19:03.882219   33690 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1204 20:19:03.886435   33690 kubeadm.go:883] updating cluster {Name:ha-739930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-739930 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.230 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-sto
rageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 20:19:03.886565   33690 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 20:19:03.886605   33690 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 20:19:03.928761   33690 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 20:19:03.928783   33690 crio.go:433] Images already preloaded, skipping extraction
	I1204 20:19:03.928831   33690 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 20:19:03.968183   33690 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 20:19:03.968206   33690 cache_images.go:84] Images are preloaded, skipping loading
	I1204 20:19:03.968216   33690 kubeadm.go:934] updating node { 192.168.39.183 8443 v1.31.2 crio true true} ...
	I1204 20:19:03.968339   33690 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-739930 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.183
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 20:19:03.968425   33690 ssh_runner.go:195] Run: crio config
	I1204 20:19:04.021762   33690 cni.go:84] Creating CNI manager for ""
	I1204 20:19:04.021784   33690 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1204 20:19:04.021793   33690 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 20:19:04.021820   33690 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.183 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-739930 NodeName:ha-739930 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.183"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.183 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 20:19:04.021939   33690 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.183
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-739930"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.183"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.183"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1204 20:19:04.021969   33690 kube-vip.go:115] generating kube-vip config ...
	I1204 20:19:04.022032   33690 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1204 20:19:04.035593   33690 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1204 20:19:04.035699   33690 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1204 20:19:04.035753   33690 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 20:19:04.046759   33690 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 20:19:04.046816   33690 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1204 20:19:04.057506   33690 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1204 20:19:04.075674   33690 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 20:19:04.093074   33690 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1204 20:19:04.110443   33690 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1204 20:19:04.131447   33690 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1204 20:19:04.135293   33690 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 20:19:04.290872   33690 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 20:19:04.305586   33690 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930 for IP: 192.168.39.183
	I1204 20:19:04.305604   33690 certs.go:194] generating shared ca certs ...
	I1204 20:19:04.305620   33690 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:19:04.305751   33690 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 20:19:04.305786   33690 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 20:19:04.305795   33690 certs.go:256] generating profile certs ...
	I1204 20:19:04.305877   33690 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.key
	I1204 20:19:04.305902   33690 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.ce4cce09
	I1204 20:19:04.305918   33690 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.ce4cce09 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.183 192.168.39.216 192.168.39.176 192.168.39.254]
	I1204 20:19:04.451041   33690 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.ce4cce09 ...
	I1204 20:19:04.451068   33690 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.ce4cce09: {Name:mk16e7be0ff2316006f5ebbb4fe1bebbdfd2402c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:19:04.451224   33690 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.ce4cce09 ...
	I1204 20:19:04.451238   33690 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.ce4cce09: {Name:mk85caecf29bd297713ac4620137696fa5929bca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:19:04.451308   33690 certs.go:381] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.ce4cce09 -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt
	I1204 20:19:04.451479   33690 certs.go:385] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.ce4cce09 -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key
	I1204 20:19:04.451607   33690 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key
	I1204 20:19:04.451621   33690 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1204 20:19:04.451633   33690 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1204 20:19:04.451644   33690 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1204 20:19:04.451657   33690 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1204 20:19:04.451668   33690 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1204 20:19:04.451683   33690 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1204 20:19:04.451697   33690 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1204 20:19:04.451709   33690 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1204 20:19:04.451759   33690 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 20:19:04.451785   33690 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 20:19:04.451794   33690 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 20:19:04.451815   33690 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 20:19:04.451837   33690 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 20:19:04.451861   33690 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 20:19:04.451896   33690 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 20:19:04.451921   33690 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:19:04.451934   33690 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem -> /usr/share/ca-certificates/17743.pem
	I1204 20:19:04.451946   33690 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> /usr/share/ca-certificates/177432.pem
	I1204 20:19:04.452494   33690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 20:19:04.476410   33690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 20:19:04.498566   33690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 20:19:04.520191   33690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 20:19:04.542641   33690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1204 20:19:04.564741   33690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1204 20:19:04.616981   33690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 20:19:04.639840   33690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1204 20:19:04.662595   33690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 20:19:04.685102   33690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 20:19:04.707291   33690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 20:19:04.729490   33690 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 20:19:04.745513   33690 ssh_runner.go:195] Run: openssl version
	I1204 20:19:04.751213   33690 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 20:19:04.761527   33690 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:19:04.765703   33690 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:19:04.765755   33690 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:19:04.771576   33690 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 20:19:04.780734   33690 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 20:19:04.790805   33690 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 20:19:04.795384   33690 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 20:19:04.795437   33690 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 20:19:04.801060   33690 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 20:19:04.809818   33690 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 20:19:04.820437   33690 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 20:19:04.824493   33690 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 20:19:04.824533   33690 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 20:19:04.829644   33690 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 20:19:04.838786   33690 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 20:19:04.842832   33690 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 20:19:04.848009   33690 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 20:19:04.853152   33690 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 20:19:04.858321   33690 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 20:19:04.863607   33690 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 20:19:04.868914   33690 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
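
The series of `openssl x509 -noout -checkend 86400` runs above verifies that each control-plane certificate remains valid for at least 24 hours before the cluster is restarted. An equivalent check using Go's crypto/x509, shown purely as an illustration (the file path is a placeholder; the log checks certs under /var/lib/minikube/certs):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM-encoded certificate at path
    // expires within the given window, the same question that
    // `openssl x509 -noout -checkend 86400` answers via its exit code.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        // Placeholder path for illustration.
        soon, err := expiresWithin("apiserver.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if soon {
            fmt.Println("certificate expires within 24h; it would be regenerated")
        } else {
            fmt.Println("certificate is valid for at least another 24h")
        }
    }
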
	I1204 20:19:04.873969   33690 kubeadm.go:392] StartCluster: {Name:ha-739930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-739930 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.230 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storag
eclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 20:19:04.874087   33690 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 20:19:04.874143   33690 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 20:19:04.915148   33690 cri.go:89] found id: "8359c26dd609bc59a903acfcddc73ca648dfaa665c5be16b606a76f7eefa3509"
	I1204 20:19:04.915174   33690 cri.go:89] found id: "6f73139f266be25d4cb4cf40538e3c43757e16842e72c5ebbe88906ba06d569f"
	I1204 20:19:04.915187   33690 cri.go:89] found id: "3c0fda9cb8d0dc2ac3cb2369bf98d2e80fe40f802999d0e439cbe45df7ca065e"
	I1204 20:19:04.915190   33690 cri.go:89] found id: "6f5406ed09eb5158af76bfc4e907abf990cc69eb07b3420c5fb8417585aeb593"
	I1204 20:19:04.915194   33690 cri.go:89] found id: "92f0436c068d37f00d41a848d30e7457ee048433b86098444bdaf1dac7c4ae50"
	I1204 20:19:04.915197   33690 cri.go:89] found id: "ab16b32e60a7287ff4948151ca59846f512d2a31828295582ecaf061d7dd0cac"
	I1204 20:19:04.915200   33690 cri.go:89] found id: "f38276fe657c7e64c36f5e7048dd53d1f38f2a70a523fca08ac6aba6639b37e7"
	I1204 20:19:04.915202   33690 cri.go:89] found id: "8643b775b5352f9000b818ffdccfc9b8d9ce8d3bebf02d3707ef0c598107b627"
	I1204 20:19:04.915205   33690 cri.go:89] found id: "b4a22468ef5bdbd7670b4b9d102217e2f59637e4fb99fa6b968fc2f29ad8208b"
	I1204 20:19:04.915210   33690 cri.go:89] found id: "325ac1400e34aa08998a037b7bad43b257bdf9daf9a87fbce57d6eef87a7bef7"
	I1204 20:19:04.915216   33690 cri.go:89] found id: "1fdab5e7f0c119181d690a0296a5d0d8ba1871661cadaa54b8d022c0a1b668e3"
	I1204 20:19:04.915218   33690 cri.go:89] found id: "52571ff875ebe7e2bae93811588ab15bcc178c9e1c0334570224e1b2bd359246"
	I1204 20:19:04.915221   33690 cri.go:89] found id: "c2343748d9b3c27471f4dc81bc815b3b7cfa628a41f8708ffaeec870bf0c05f4"
	I1204 20:19:04.915224   33690 cri.go:89] found id: ""
	I1204 20:19:04.915261   33690 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-739930 -n ha-739930
helpers_test.go:261: (dbg) Run:  kubectl --context ha-739930 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (414.26s)
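The post-mortem log above also shows how the restart path verifies the control-plane certificates: each file under /var/lib/minikube/certs is passed to openssl with -checkend 86400, which exits non-zero if the certificate expires within 86400 seconds (24 hours). A minimal sketch of the same check run by hand against this profile, assembled from the paths and profile name in the log (this is a manual re-check, not part of the test itself):

	# exit code 0 => certificate still valid for at least 24h; non-zero => expiring or unreadable
	out/minikube-linux-amd64 -p ha-739930 ssh -- sudo openssl x509 -noout \
	  -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt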

                                                
                                    
TestMultiControlPlane/serial/StopCluster (141.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-739930 stop -v=7 --alsologtostderr: exit status 82 (2m0.472277547s)

                                                
                                                
-- stdout --
	* Stopping node "ha-739930-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 20:22:36.356157   36016 out.go:345] Setting OutFile to fd 1 ...
	I1204 20:22:36.356292   36016 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 20:22:36.356305   36016 out.go:358] Setting ErrFile to fd 2...
	I1204 20:22:36.356311   36016 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 20:22:36.356523   36016 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19985-10581/.minikube/bin
	I1204 20:22:36.356734   36016 out.go:352] Setting JSON to false
	I1204 20:22:36.356834   36016 mustload.go:65] Loading cluster: ha-739930
	I1204 20:22:36.357239   36016 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:22:36.357331   36016 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/config.json ...
	I1204 20:22:36.357518   36016 mustload.go:65] Loading cluster: ha-739930
	I1204 20:22:36.357644   36016 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:22:36.357667   36016 stop.go:39] StopHost: ha-739930-m04
	I1204 20:22:36.358012   36016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:22:36.358054   36016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:22:36.373221   36016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34733
	I1204 20:22:36.373745   36016 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:22:36.374299   36016 main.go:141] libmachine: Using API Version  1
	I1204 20:22:36.374322   36016 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:22:36.374619   36016 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:22:36.376828   36016 out.go:177] * Stopping node "ha-739930-m04"  ...
	I1204 20:22:36.378082   36016 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1204 20:22:36.378113   36016 main.go:141] libmachine: (ha-739930-m04) Calling .DriverName
	I1204 20:22:36.378316   36016 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1204 20:22:36.378343   36016 main.go:141] libmachine: (ha-739930-m04) Calling .GetSSHHostname
	I1204 20:22:36.381139   36016 main.go:141] libmachine: (ha-739930-m04) DBG | domain ha-739930-m04 has defined MAC address 52:54:00:18:4f:99 in network mk-ha-739930
	I1204 20:22:36.381535   36016 main.go:141] libmachine: (ha-739930-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:4f:99", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:22:04 +0000 UTC Type:0 Mac:52:54:00:18:4f:99 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-739930-m04 Clientid:01:52:54:00:18:4f:99}
	I1204 20:22:36.381577   36016 main.go:141] libmachine: (ha-739930-m04) DBG | domain ha-739930-m04 has defined IP address 192.168.39.230 and MAC address 52:54:00:18:4f:99 in network mk-ha-739930
	I1204 20:22:36.381728   36016 main.go:141] libmachine: (ha-739930-m04) Calling .GetSSHPort
	I1204 20:22:36.381869   36016 main.go:141] libmachine: (ha-739930-m04) Calling .GetSSHKeyPath
	I1204 20:22:36.382042   36016 main.go:141] libmachine: (ha-739930-m04) Calling .GetSSHUsername
	I1204 20:22:36.382163   36016 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930-m04/id_rsa Username:docker}
	I1204 20:22:36.466189   36016 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1204 20:22:36.518064   36016 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1204 20:22:36.570439   36016 main.go:141] libmachine: Stopping "ha-739930-m04"...
	I1204 20:22:36.570468   36016 main.go:141] libmachine: (ha-739930-m04) Calling .GetState
	I1204 20:22:36.571970   36016 main.go:141] libmachine: (ha-739930-m04) Calling .Stop
	I1204 20:22:36.575584   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 0/120
	I1204 20:22:37.577036   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 1/120
	I1204 20:22:38.578928   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 2/120
	I1204 20:22:39.580425   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 3/120
	I1204 20:22:40.581870   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 4/120
	I1204 20:22:41.583617   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 5/120
	I1204 20:22:42.586002   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 6/120
	I1204 20:22:43.587348   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 7/120
	I1204 20:22:44.588642   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 8/120
	I1204 20:22:45.590237   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 9/120
	I1204 20:22:46.592232   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 10/120
	I1204 20:22:47.593862   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 11/120
	I1204 20:22:48.595288   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 12/120
	I1204 20:22:49.596667   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 13/120
	I1204 20:22:50.597990   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 14/120
	I1204 20:22:51.599992   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 15/120
	I1204 20:22:52.601644   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 16/120
	I1204 20:22:53.602941   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 17/120
	I1204 20:22:54.604271   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 18/120
	I1204 20:22:55.605806   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 19/120
	I1204 20:22:56.608295   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 20/120
	I1204 20:22:57.609827   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 21/120
	I1204 20:22:58.611255   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 22/120
	I1204 20:22:59.612875   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 23/120
	I1204 20:23:00.614110   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 24/120
	I1204 20:23:01.616160   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 25/120
	I1204 20:23:02.617743   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 26/120
	I1204 20:23:03.619014   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 27/120
	I1204 20:23:04.620314   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 28/120
	I1204 20:23:05.621577   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 29/120
	I1204 20:23:06.623778   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 30/120
	I1204 20:23:07.625713   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 31/120
	I1204 20:23:08.627143   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 32/120
	I1204 20:23:09.629300   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 33/120
	I1204 20:23:10.630503   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 34/120
	I1204 20:23:11.632347   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 35/120
	I1204 20:23:12.633903   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 36/120
	I1204 20:23:13.635152   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 37/120
	I1204 20:23:14.636563   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 38/120
	I1204 20:23:15.637790   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 39/120
	I1204 20:23:16.639736   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 40/120
	I1204 20:23:17.641752   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 41/120
	I1204 20:23:18.643334   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 42/120
	I1204 20:23:19.644779   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 43/120
	I1204 20:23:20.646363   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 44/120
	I1204 20:23:21.648457   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 45/120
	I1204 20:23:22.649955   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 46/120
	I1204 20:23:23.651255   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 47/120
	I1204 20:23:24.652690   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 48/120
	I1204 20:23:25.654087   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 49/120
	I1204 20:23:26.656116   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 50/120
	I1204 20:23:27.657370   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 51/120
	I1204 20:23:28.658716   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 52/120
	I1204 20:23:29.659996   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 53/120
	I1204 20:23:30.661819   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 54/120
	I1204 20:23:31.663718   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 55/120
	I1204 20:23:32.665801   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 56/120
	I1204 20:23:33.667777   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 57/120
	I1204 20:23:34.669061   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 58/120
	I1204 20:23:35.670511   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 59/120
	I1204 20:23:36.672839   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 60/120
	I1204 20:23:37.674132   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 61/120
	I1204 20:23:38.675500   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 62/120
	I1204 20:23:39.676617   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 63/120
	I1204 20:23:40.678007   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 64/120
	I1204 20:23:41.679809   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 65/120
	I1204 20:23:42.681389   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 66/120
	I1204 20:23:43.683382   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 67/120
	I1204 20:23:44.684776   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 68/120
	I1204 20:23:45.686332   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 69/120
	I1204 20:23:46.688208   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 70/120
	I1204 20:23:47.690031   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 71/120
	I1204 20:23:48.691259   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 72/120
	I1204 20:23:49.692546   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 73/120
	I1204 20:23:50.693786   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 74/120
	I1204 20:23:51.695794   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 75/120
	I1204 20:23:52.697087   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 76/120
	I1204 20:23:53.698414   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 77/120
	I1204 20:23:54.699854   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 78/120
	I1204 20:23:55.701976   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 79/120
	I1204 20:23:56.704090   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 80/120
	I1204 20:23:57.705444   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 81/120
	I1204 20:23:58.707464   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 82/120
	I1204 20:23:59.709651   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 83/120
	I1204 20:24:00.710893   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 84/120
	I1204 20:24:01.712772   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 85/120
	I1204 20:24:02.714227   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 86/120
	I1204 20:24:03.715757   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 87/120
	I1204 20:24:04.717261   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 88/120
	I1204 20:24:05.718310   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 89/120
	I1204 20:24:06.720578   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 90/120
	I1204 20:24:07.722856   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 91/120
	I1204 20:24:08.724383   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 92/120
	I1204 20:24:09.726129   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 93/120
	I1204 20:24:10.727526   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 94/120
	I1204 20:24:11.729496   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 95/120
	I1204 20:24:12.730735   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 96/120
	I1204 20:24:13.732257   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 97/120
	I1204 20:24:14.733758   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 98/120
	I1204 20:24:15.735449   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 99/120
	I1204 20:24:16.737444   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 100/120
	I1204 20:24:17.738784   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 101/120
	I1204 20:24:18.740037   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 102/120
	I1204 20:24:19.741385   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 103/120
	I1204 20:24:20.742746   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 104/120
	I1204 20:24:21.744449   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 105/120
	I1204 20:24:22.745505   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 106/120
	I1204 20:24:23.746920   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 107/120
	I1204 20:24:24.748252   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 108/120
	I1204 20:24:25.750502   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 109/120
	I1204 20:24:26.753013   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 110/120
	I1204 20:24:27.754500   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 111/120
	I1204 20:24:28.756195   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 112/120
	I1204 20:24:29.757766   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 113/120
	I1204 20:24:30.759253   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 114/120
	I1204 20:24:31.761432   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 115/120
	I1204 20:24:32.763267   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 116/120
	I1204 20:24:33.764789   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 117/120
	I1204 20:24:34.766666   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 118/120
	I1204 20:24:35.767972   36016 main.go:141] libmachine: (ha-739930-m04) Waiting for machine to stop 119/120
	I1204 20:24:36.769106   36016 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1204 20:24:36.769162   36016 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1204 20:24:36.771127   36016 out.go:201] 
	W1204 20:24:36.772285   36016 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1204 20:24:36.772304   36016 out.go:270] * 
	* 
	W1204 20:24:36.774637   36016 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 20:24:36.775811   36016 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:535: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-739930 stop -v=7 --alsologtostderr": exit status 82
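The stderr above shows the stop sequence: minikube first backs up /etc/cni and /etc/kubernetes to /var/lib/minikube/backup over SSH (rsync --archive --relative), then asks the kvm2 driver to stop the domain and polls its state roughly once per second for 120 attempts; because ha-739930-m04 never left the "Running" state, the command gave up with GUEST_STOP_TIMEOUT (exit status 82). A sketch for inspecting such a stuck guest directly on the KVM host, assuming the libvirt CLI (virsh) is available and using the domain name from the log above (a manual debugging step, not something the test performs):

	# what libvirt currently reports for the node that refused to stop
	virsh domstate ha-739930-m04
	# hard power-off as a last resort (equivalent to pulling the plug on the VM)
	virsh destroy ha-739930-m04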
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 status -v=7 --alsologtostderr
E1204 20:24:52.903609   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:539: (dbg) Done: out/minikube-linux-amd64 -p ha-739930 status -v=7 --alsologtostderr: (18.826110374s)
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-739930 status -v=7 --alsologtostderr": 
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-739930 status -v=7 --alsologtostderr": 
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-739930 status -v=7 --alsologtostderr": 
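These assertions parse the output of "minikube status": after a successful stop the test expects two control-plane nodes to be reported, three kubelets stopped, and two apiservers stopped, but since the stop command timed out on the first node nothing was actually stopped. The post-mortem helpers below read individual fields with a Go template, and the same technique works interactively; a sketch using the fields the helpers already rely on plus Name and Kubelet, which are assumed to exist in this minikube version:

	# prints one line per node: host / kubelet / apiserver state
	out/minikube-linux-amd64 status -p ha-739930 \
	  --format='{{.Name}}: host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'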
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-739930 -n ha-739930
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-739930 logs -n 25: (1.955394352s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-739930 ssh -n ha-739930-m02 sudo cat                                          | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | /home/docker/cp-test_ha-739930-m03_ha-739930-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-739930 cp ha-739930-m03:/home/docker/cp-test.txt                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m04:/home/docker/cp-test_ha-739930-m03_ha-739930-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n ha-739930-m04 sudo cat                                          | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | /home/docker/cp-test_ha-739930-m03_ha-739930-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-739930 cp testdata/cp-test.txt                                                | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-739930 cp ha-739930-m04:/home/docker/cp-test.txt                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1344431772/001/cp-test_ha-739930-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-739930 cp ha-739930-m04:/home/docker/cp-test.txt                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930:/home/docker/cp-test_ha-739930-m04_ha-739930.txt                       |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n ha-739930 sudo cat                                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | /home/docker/cp-test_ha-739930-m04_ha-739930.txt                                 |           |         |         |                     |                     |
	| cp      | ha-739930 cp ha-739930-m04:/home/docker/cp-test.txt                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m02:/home/docker/cp-test_ha-739930-m04_ha-739930-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n ha-739930-m02 sudo cat                                          | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | /home/docker/cp-test_ha-739930-m04_ha-739930-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-739930 cp ha-739930-m04:/home/docker/cp-test.txt                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m03:/home/docker/cp-test_ha-739930-m04_ha-739930-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n                                                                 | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | ha-739930-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-739930 ssh -n ha-739930-m03 sudo cat                                          | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC | 04 Dec 24 20:12 UTC |
	|         | /home/docker/cp-test_ha-739930-m04_ha-739930-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-739930 node stop m02 -v=7                                                     | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:12 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-739930 node start m02 -v=7                                                    | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:15 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-739930 -v=7                                                           | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:15 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-739930 -v=7                                                                | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:15 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-739930 --wait=true -v=7                                                    | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:17 UTC | 04 Dec 24 20:22 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-739930                                                                | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:22 UTC |                     |
	| node    | ha-739930 node delete m03 -v=7                                                   | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:22 UTC | 04 Dec 24 20:22 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-739930 stop -v=7                                                              | ha-739930 | jenkins | v1.34.0 | 04 Dec 24 20:22 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 20:17:26
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 20:17:26.838279   33690 out.go:345] Setting OutFile to fd 1 ...
	I1204 20:17:26.838931   33690 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 20:17:26.838988   33690 out.go:358] Setting ErrFile to fd 2...
	I1204 20:17:26.839004   33690 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 20:17:26.839465   33690 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19985-10581/.minikube/bin
	I1204 20:17:26.840353   33690 out.go:352] Setting JSON to false
	I1204 20:17:26.841245   33690 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3597,"bootTime":1733339850,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 20:17:26.841346   33690 start.go:139] virtualization: kvm guest
	I1204 20:17:26.843459   33690 out.go:177] * [ha-739930] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1204 20:17:26.844805   33690 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 20:17:26.844837   33690 notify.go:220] Checking for updates...
	I1204 20:17:26.847501   33690 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 20:17:26.848735   33690 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 20:17:26.849936   33690 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 20:17:26.851096   33690 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1204 20:17:26.852352   33690 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 20:17:26.853960   33690 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:17:26.854090   33690 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 20:17:26.854533   33690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:17:26.854572   33690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:17:26.870144   33690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41181
	I1204 20:17:26.870690   33690 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:17:26.871185   33690 main.go:141] libmachine: Using API Version  1
	I1204 20:17:26.871205   33690 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:17:26.871601   33690 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:17:26.871787   33690 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:17:26.910512   33690 out.go:177] * Using the kvm2 driver based on existing profile
	I1204 20:17:26.911723   33690 start.go:297] selected driver: kvm2
	I1204 20:17:26.911741   33690 start.go:901] validating driver "kvm2" against &{Name:ha-739930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.230 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false
default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 20:17:26.911913   33690 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 20:17:26.912253   33690 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 20:17:26.912346   33690 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19985-10581/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1204 20:17:26.927774   33690 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1204 20:17:26.928463   33690 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 20:17:26.928502   33690 cni.go:84] Creating CNI manager for ""
	I1204 20:17:26.928567   33690 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1204 20:17:26.928634   33690 start.go:340] cluster config:
	{Name:ha-739930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.230 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:
false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 20:17:26.928785   33690 iso.go:125] acquiring lock: {Name:mk5fb0f3f6da76e6cd812291a551e1592ef2c232 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 20:17:26.930599   33690 out.go:177] * Starting "ha-739930" primary control-plane node in "ha-739930" cluster
	I1204 20:17:26.931965   33690 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 20:17:26.932004   33690 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1204 20:17:26.932017   33690 cache.go:56] Caching tarball of preloaded images
	I1204 20:17:26.932091   33690 preload.go:172] Found /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 20:17:26.932104   33690 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 20:17:26.932240   33690 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/config.json ...
	I1204 20:17:26.932483   33690 start.go:360] acquireMachinesLock for ha-739930: {Name:mkf124e8b45170ae95981b24944344de6899c5b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 20:17:26.932539   33690 start.go:364] duration metric: took 33.838µs to acquireMachinesLock for "ha-739930"
	I1204 20:17:26.932560   33690 start.go:96] Skipping create...Using existing machine configuration
	I1204 20:17:26.932571   33690 fix.go:54] fixHost starting: 
	I1204 20:17:26.932846   33690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:17:26.932889   33690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:17:26.947520   33690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46483
	I1204 20:17:26.947987   33690 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:17:26.948497   33690 main.go:141] libmachine: Using API Version  1
	I1204 20:17:26.948523   33690 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:17:26.948844   33690 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:17:26.949027   33690 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:17:26.949177   33690 main.go:141] libmachine: (ha-739930) Calling .GetState
	I1204 20:17:26.950657   33690 fix.go:112] recreateIfNeeded on ha-739930: state=Running err=<nil>
	W1204 20:17:26.950678   33690 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 20:17:26.953503   33690 out.go:177] * Updating the running kvm2 "ha-739930" VM ...
	I1204 20:17:26.954803   33690 machine.go:93] provisionDockerMachine start ...
	I1204 20:17:26.954821   33690 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:17:26.955003   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:17:26.957380   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:17:26.957824   33690 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:17:26.957852   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:17:26.958007   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:17:26.958169   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:17:26.958312   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:17:26.958422   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:17:26.958565   33690 main.go:141] libmachine: Using SSH client type: native
	I1204 20:17:26.958743   33690 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1204 20:17:26.958753   33690 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 20:17:27.072207   33690 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-739930
	
	I1204 20:17:27.072234   33690 main.go:141] libmachine: (ha-739930) Calling .GetMachineName
	I1204 20:17:27.072451   33690 buildroot.go:166] provisioning hostname "ha-739930"
	I1204 20:17:27.072478   33690 main.go:141] libmachine: (ha-739930) Calling .GetMachineName
	I1204 20:17:27.072666   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:17:27.075289   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:17:27.075652   33690 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:17:27.075687   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:17:27.075776   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:17:27.075932   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:17:27.076077   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:17:27.076194   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:17:27.076328   33690 main.go:141] libmachine: Using SSH client type: native
	I1204 20:17:27.076540   33690 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1204 20:17:27.076556   33690 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-739930 && echo "ha-739930" | sudo tee /etc/hostname
	I1204 20:17:27.195565   33690 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-739930
	
	I1204 20:17:27.195593   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:17:27.198080   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:17:27.198418   33690 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:17:27.198446   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:17:27.198605   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:17:27.198772   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:17:27.198904   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:17:27.199001   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:17:27.199201   33690 main.go:141] libmachine: Using SSH client type: native
	I1204 20:17:27.199429   33690 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1204 20:17:27.199463   33690 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-739930' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-739930/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-739930' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 20:17:27.304283   33690 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 20:17:27.304321   33690 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 20:17:27.304338   33690 buildroot.go:174] setting up certificates
	I1204 20:17:27.304348   33690 provision.go:84] configureAuth start
	I1204 20:17:27.304356   33690 main.go:141] libmachine: (ha-739930) Calling .GetMachineName
	I1204 20:17:27.304635   33690 main.go:141] libmachine: (ha-739930) Calling .GetIP
	I1204 20:17:27.307125   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:17:27.307504   33690 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:17:27.307524   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:17:27.307694   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:17:27.309672   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:17:27.310015   33690 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:17:27.310043   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:17:27.310174   33690 provision.go:143] copyHostCerts
	I1204 20:17:27.310214   33690 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 20:17:27.310288   33690 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 20:17:27.310306   33690 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 20:17:27.310375   33690 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 20:17:27.310475   33690 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 20:17:27.310495   33690 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 20:17:27.310504   33690 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 20:17:27.310533   33690 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 20:17:27.310588   33690 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 20:17:27.310605   33690 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 20:17:27.310609   33690 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 20:17:27.310629   33690 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 20:17:27.310686   33690 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.ha-739930 san=[127.0.0.1 192.168.39.183 ha-739930 localhost minikube]
	I1204 20:17:27.713774   33690 provision.go:177] copyRemoteCerts
	I1204 20:17:27.713835   33690 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 20:17:27.713863   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:17:27.716767   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:17:27.717125   33690 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:17:27.717155   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:17:27.717343   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:17:27.717522   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:17:27.717651   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:17:27.717749   33690 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:17:27.797393   33690 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1204 20:17:27.797470   33690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1204 20:17:27.823088   33690 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1204 20:17:27.823168   33690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 20:17:27.847270   33690 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1204 20:17:27.847335   33690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 20:17:27.872086   33690 provision.go:87] duration metric: took 567.718604ms to configureAuth
	I1204 20:17:27.872126   33690 buildroot.go:189] setting minikube options for container-runtime
	I1204 20:17:27.872387   33690 config.go:182] Loaded profile config "ha-739930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:17:27.872463   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:17:27.875020   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:17:27.875393   33690 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:17:27.875434   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:17:27.875646   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:17:27.875811   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:17:27.875949   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:17:27.876061   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:17:27.876202   33690 main.go:141] libmachine: Using SSH client type: native
	I1204 20:17:27.876361   33690 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1204 20:17:27.876376   33690 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 20:18:58.735317   33690 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 20:18:58.735346   33690 machine.go:96] duration metric: took 1m31.780530614s to provisionDockerMachine
	I1204 20:18:58.735361   33690 start.go:293] postStartSetup for "ha-739930" (driver="kvm2")
	I1204 20:18:58.735389   33690 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 20:18:58.735414   33690 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:18:58.735798   33690 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 20:18:58.735833   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:18:58.740069   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:18:58.740625   33690 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:18:58.740645   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:18:58.740864   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:18:58.741083   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:18:58.741240   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:18:58.741382   33690 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:18:58.822226   33690 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 20:18:58.826251   33690 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 20:18:58.826282   33690 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 20:18:58.826380   33690 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 20:18:58.826474   33690 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 20:18:58.826487   33690 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> /etc/ssl/certs/177432.pem
	I1204 20:18:58.826600   33690 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 20:18:58.835608   33690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 20:18:58.857928   33690 start.go:296] duration metric: took 122.554322ms for postStartSetup
	I1204 20:18:58.857977   33690 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:18:58.858274   33690 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1204 20:18:58.858305   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:18:58.860967   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:18:58.861380   33690 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:18:58.861402   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:18:58.861584   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:18:58.861750   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:18:58.861890   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:18:58.862009   33690 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	W1204 20:18:58.941150   33690 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1204 20:18:58.941184   33690 fix.go:56] duration metric: took 1m32.008614445s for fixHost
	I1204 20:18:58.941206   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:18:58.943969   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:18:58.944337   33690 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:18:58.944358   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:18:58.944476   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:18:58.944675   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:18:58.944848   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:18:58.944954   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:18:58.945112   33690 main.go:141] libmachine: Using SSH client type: native
	I1204 20:18:58.945270   33690 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1204 20:18:58.945280   33690 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 20:18:59.047822   33690 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733343539.018566341
	
	I1204 20:18:59.047853   33690 fix.go:216] guest clock: 1733343539.018566341
	I1204 20:18:59.047865   33690 fix.go:229] Guest: 2024-12-04 20:18:59.018566341 +0000 UTC Remote: 2024-12-04 20:18:58.941192084 +0000 UTC m=+92.140065350 (delta=77.374257ms)
	I1204 20:18:59.047907   33690 fix.go:200] guest clock delta is within tolerance: 77.374257ms
	I1204 20:18:59.047928   33690 start.go:83] releasing machines lock for "ha-739930", held for 1m32.115372791s
	I1204 20:18:59.047961   33690 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:18:59.048234   33690 main.go:141] libmachine: (ha-739930) Calling .GetIP
	I1204 20:18:59.050666   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:18:59.051020   33690 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:18:59.051048   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:18:59.051147   33690 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:18:59.051752   33690 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:18:59.051889   33690 main.go:141] libmachine: (ha-739930) Calling .DriverName
	I1204 20:18:59.051949   33690 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 20:18:59.051996   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:18:59.052062   33690 ssh_runner.go:195] Run: cat /version.json
	I1204 20:18:59.052083   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHHostname
	I1204 20:18:59.054644   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:18:59.054781   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:18:59.055014   33690 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:18:59.055034   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:18:59.055156   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:18:59.055308   33690 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:18:59.055335   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:18:59.055336   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:18:59.055501   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHPort
	I1204 20:18:59.055502   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:18:59.055682   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHKeyPath
	I1204 20:18:59.055683   33690 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:18:59.055809   33690 main.go:141] libmachine: (ha-739930) Calling .GetSSHUsername
	I1204 20:18:59.055920   33690 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/ha-739930/id_rsa Username:docker}
	I1204 20:18:59.152009   33690 ssh_runner.go:195] Run: systemctl --version
	I1204 20:18:59.157898   33690 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 20:18:59.318190   33690 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 20:18:59.323664   33690 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 20:18:59.323739   33690 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 20:18:59.332384   33690 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1204 20:18:59.332412   33690 start.go:495] detecting cgroup driver to use...
	I1204 20:18:59.332483   33690 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 20:18:59.347821   33690 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 20:18:59.361658   33690 docker.go:217] disabling cri-docker service (if available) ...
	I1204 20:18:59.361716   33690 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 20:18:59.374964   33690 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 20:18:59.388212   33690 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 20:18:59.541439   33690 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 20:18:59.687720   33690 docker.go:233] disabling docker service ...
	I1204 20:18:59.687795   33690 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 20:18:59.703016   33690 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 20:18:59.715634   33690 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 20:18:59.853659   33690 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 20:18:59.992167   33690 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 20:19:00.005217   33690 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 20:19:00.021900   33690 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 20:19:00.021946   33690 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:19:00.031205   33690 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 20:19:00.031245   33690 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:19:00.040226   33690 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:19:00.049236   33690 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:19:00.058143   33690 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 20:19:00.067216   33690 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:19:00.076313   33690 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:19:00.085824   33690 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:19:00.094659   33690 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 20:19:00.102761   33690 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 20:19:00.110630   33690 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 20:19:00.244771   33690 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 20:19:03.773378   33690 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.528570922s)
	I1204 20:19:03.773413   33690 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 20:19:03.773466   33690 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 20:19:03.778736   33690 start.go:563] Will wait 60s for crictl version
	I1204 20:19:03.778791   33690 ssh_runner.go:195] Run: which crictl
	I1204 20:19:03.782248   33690 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 20:19:03.820809   33690 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 20:19:03.820895   33690 ssh_runner.go:195] Run: crio --version
	I1204 20:19:03.848968   33690 ssh_runner.go:195] Run: crio --version
	I1204 20:19:03.878052   33690 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 20:19:03.879120   33690 main.go:141] libmachine: (ha-739930) Calling .GetIP
	I1204 20:19:03.881626   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:19:03.881973   33690 main.go:141] libmachine: (ha-739930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:91:f7", ip: ""} in network mk-ha-739930: {Iface:virbr1 ExpiryTime:2024-12-04 21:08:26 +0000 UTC Type:0 Mac:52:54:00:b9:91:f7 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-739930 Clientid:01:52:54:00:b9:91:f7}
	I1204 20:19:03.881995   33690 main.go:141] libmachine: (ha-739930) DBG | domain ha-739930 has defined IP address 192.168.39.183 and MAC address 52:54:00:b9:91:f7 in network mk-ha-739930
	I1204 20:19:03.882219   33690 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1204 20:19:03.886435   33690 kubeadm.go:883] updating cluster {Name:ha-739930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-739930 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.230 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-sto
rageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 20:19:03.886565   33690 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 20:19:03.886605   33690 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 20:19:03.928761   33690 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 20:19:03.928783   33690 crio.go:433] Images already preloaded, skipping extraction
	I1204 20:19:03.928831   33690 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 20:19:03.968183   33690 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 20:19:03.968206   33690 cache_images.go:84] Images are preloaded, skipping loading
	I1204 20:19:03.968216   33690 kubeadm.go:934] updating node { 192.168.39.183 8443 v1.31.2 crio true true} ...
	I1204 20:19:03.968339   33690 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-739930 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.183
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-739930 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 20:19:03.968425   33690 ssh_runner.go:195] Run: crio config
	I1204 20:19:04.021762   33690 cni.go:84] Creating CNI manager for ""
	I1204 20:19:04.021784   33690 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1204 20:19:04.021793   33690 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 20:19:04.021820   33690 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.183 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-739930 NodeName:ha-739930 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.183"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.183 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 20:19:04.021939   33690 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.183
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-739930"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.183"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.183"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1204 20:19:04.021969   33690 kube-vip.go:115] generating kube-vip config ...
	I1204 20:19:04.022032   33690 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1204 20:19:04.035593   33690 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1204 20:19:04.035699   33690 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1204 20:19:04.035753   33690 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 20:19:04.046759   33690 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 20:19:04.046816   33690 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1204 20:19:04.057506   33690 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1204 20:19:04.075674   33690 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 20:19:04.093074   33690 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1204 20:19:04.110443   33690 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1204 20:19:04.131447   33690 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1204 20:19:04.135293   33690 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 20:19:04.290872   33690 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 20:19:04.305586   33690 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930 for IP: 192.168.39.183
	I1204 20:19:04.305604   33690 certs.go:194] generating shared ca certs ...
	I1204 20:19:04.305620   33690 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:19:04.305751   33690 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 20:19:04.305786   33690 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 20:19:04.305795   33690 certs.go:256] generating profile certs ...
	I1204 20:19:04.305877   33690 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/client.key
	I1204 20:19:04.305902   33690 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.ce4cce09
	I1204 20:19:04.305918   33690 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.ce4cce09 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.183 192.168.39.216 192.168.39.176 192.168.39.254]
	I1204 20:19:04.451041   33690 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.ce4cce09 ...
	I1204 20:19:04.451068   33690 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.ce4cce09: {Name:mk16e7be0ff2316006f5ebbb4fe1bebbdfd2402c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:19:04.451224   33690 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.ce4cce09 ...
	I1204 20:19:04.451238   33690 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.ce4cce09: {Name:mk85caecf29bd297713ac4620137696fa5929bca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:19:04.451308   33690 certs.go:381] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt.ce4cce09 -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt
	I1204 20:19:04.451479   33690 certs.go:385] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key.ce4cce09 -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key
	I1204 20:19:04.451607   33690 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key
	I1204 20:19:04.451621   33690 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1204 20:19:04.451633   33690 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1204 20:19:04.451644   33690 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1204 20:19:04.451657   33690 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1204 20:19:04.451668   33690 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1204 20:19:04.451683   33690 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1204 20:19:04.451697   33690 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1204 20:19:04.451709   33690 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1204 20:19:04.451759   33690 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 20:19:04.451785   33690 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 20:19:04.451794   33690 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 20:19:04.451815   33690 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 20:19:04.451837   33690 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 20:19:04.451861   33690 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 20:19:04.451896   33690 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 20:19:04.451921   33690 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:19:04.451934   33690 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem -> /usr/share/ca-certificates/17743.pem
	I1204 20:19:04.451946   33690 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> /usr/share/ca-certificates/177432.pem
	I1204 20:19:04.452494   33690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 20:19:04.476410   33690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 20:19:04.498566   33690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 20:19:04.520191   33690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 20:19:04.542641   33690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1204 20:19:04.564741   33690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1204 20:19:04.616981   33690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 20:19:04.639840   33690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/ha-739930/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1204 20:19:04.662595   33690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 20:19:04.685102   33690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 20:19:04.707291   33690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 20:19:04.729490   33690 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 20:19:04.745513   33690 ssh_runner.go:195] Run: openssl version
	I1204 20:19:04.751213   33690 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 20:19:04.761527   33690 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:19:04.765703   33690 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:19:04.765755   33690 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:19:04.771576   33690 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 20:19:04.780734   33690 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 20:19:04.790805   33690 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 20:19:04.795384   33690 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 20:19:04.795437   33690 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 20:19:04.801060   33690 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 20:19:04.809818   33690 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 20:19:04.820437   33690 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 20:19:04.824493   33690 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 20:19:04.824533   33690 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 20:19:04.829644   33690 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 20:19:04.838786   33690 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 20:19:04.842832   33690 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 20:19:04.848009   33690 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 20:19:04.853152   33690 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 20:19:04.858321   33690 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 20:19:04.863607   33690 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 20:19:04.868914   33690 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1204 20:19:04.873969   33690 kubeadm.go:392] StartCluster: {Name:ha-739930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-739930 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.230 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storag
eclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 20:19:04.874087   33690 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 20:19:04.874143   33690 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 20:19:04.915148   33690 cri.go:89] found id: "8359c26dd609bc59a903acfcddc73ca648dfaa665c5be16b606a76f7eefa3509"
	I1204 20:19:04.915174   33690 cri.go:89] found id: "6f73139f266be25d4cb4cf40538e3c43757e16842e72c5ebbe88906ba06d569f"
	I1204 20:19:04.915187   33690 cri.go:89] found id: "3c0fda9cb8d0dc2ac3cb2369bf98d2e80fe40f802999d0e439cbe45df7ca065e"
	I1204 20:19:04.915190   33690 cri.go:89] found id: "6f5406ed09eb5158af76bfc4e907abf990cc69eb07b3420c5fb8417585aeb593"
	I1204 20:19:04.915194   33690 cri.go:89] found id: "92f0436c068d37f00d41a848d30e7457ee048433b86098444bdaf1dac7c4ae50"
	I1204 20:19:04.915197   33690 cri.go:89] found id: "ab16b32e60a7287ff4948151ca59846f512d2a31828295582ecaf061d7dd0cac"
	I1204 20:19:04.915200   33690 cri.go:89] found id: "f38276fe657c7e64c36f5e7048dd53d1f38f2a70a523fca08ac6aba6639b37e7"
	I1204 20:19:04.915202   33690 cri.go:89] found id: "8643b775b5352f9000b818ffdccfc9b8d9ce8d3bebf02d3707ef0c598107b627"
	I1204 20:19:04.915205   33690 cri.go:89] found id: "b4a22468ef5bdbd7670b4b9d102217e2f59637e4fb99fa6b968fc2f29ad8208b"
	I1204 20:19:04.915210   33690 cri.go:89] found id: "325ac1400e34aa08998a037b7bad43b257bdf9daf9a87fbce57d6eef87a7bef7"
	I1204 20:19:04.915216   33690 cri.go:89] found id: "1fdab5e7f0c119181d690a0296a5d0d8ba1871661cadaa54b8d022c0a1b668e3"
	I1204 20:19:04.915218   33690 cri.go:89] found id: "52571ff875ebe7e2bae93811588ab15bcc178c9e1c0334570224e1b2bd359246"
	I1204 20:19:04.915221   33690 cri.go:89] found id: "c2343748d9b3c27471f4dc81bc815b3b7cfa628a41f8708ffaeec870bf0c05f4"
	I1204 20:19:04.915224   33690 cri.go:89] found id: ""
	I1204 20:19:04.915261   33690 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-739930 -n ha-739930
helpers_test.go:261: (dbg) Run:  kubectl --context ha-739930 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.84s)
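The post-mortem log above shows the host being re-provisioned: certificates are copied over SSH, /etc/crio/crio.conf.d/02-crio.conf is rewritten for the cgroupfs driver and the registry.k8s.io/pause:3.10 image, and CRI-O is restarted and probed with crictl. As a hedged aside (not part of the test itself), the same post-restart health check can be reproduced by hand with roughly the following shell sketch, assuming the socket path and tools shown in the log (/var/run/crio/crio.sock, crictl):

	# Sketch only: mirrors the post-restart checks seen in the log above.
	# Assumes CRI-O listens on /var/run/crio/crio.sock, as configured there.
	sudo systemctl daemon-reload
	sudo systemctl restart crio
	# Wait up to 60s for the CRI socket, as the log does ("Will wait 60s for socket path").
	for i in $(seq 60); do
	  [ -S /var/run/crio/crio.sock ] && break
	  sleep 1
	done
	# Confirm the runtime answers over CRI and that the preloaded images are present.
	sudo crictl version
	sudo crictl images --output json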

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (332.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-980367
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-980367
E1204 20:39:52.904092   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-980367: exit status 82 (2m1.832122648s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-980367-m03"  ...
	* Stopping node "multinode-980367-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-980367" : exit status 82
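The stop failed with exit status 82 (GUEST_STOP_TIMEOUT) because at least one VM stayed in state "Running". As a hedged troubleshooting sketch (not part of the test suite), under the kvm2 driver the node VMs are libvirt domains, so their state can be inspected directly; the domain names below are assumed to follow the profile name, as the node names in the log do:

	# Sketch only: manual follow-up for a GUEST_STOP_TIMEOUT (exit status 82).
	PROFILE=multinode-980367
	out/minikube-linux-amd64 stop -p "$PROFILE"; echo "stop exit code: $?"
	# Ask libvirt what the node VMs are doing (kvm2 driver).
	sudo virsh list --all | grep "$PROFILE"
	# Collect the logs the error message asks for before filing an issue.
	out/minikube-linux-amd64 -p "$PROFILE" logs --file=logs.txt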
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-980367 --wait=true -v=8 --alsologtostderr
E1204 20:42:26.275547   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/functional-763517/client.crt: no such file or directory" logger="UnhandledError"
E1204 20:44:52.903425   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-980367 --wait=true -v=8 --alsologtostderr: (3m27.655239306s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-980367
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-980367 -n multinode-980367
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-980367 logs -n 25: (1.941091554s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-980367 ssh -n                                                                 | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:38 UTC |
	|         | multinode-980367-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-980367 cp multinode-980367-m02:/home/docker/cp-test.txt                       | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:38 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile171462700/001/cp-test_multinode-980367-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-980367 ssh -n                                                                 | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:38 UTC |
	|         | multinode-980367-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-980367 cp multinode-980367-m02:/home/docker/cp-test.txt                       | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:38 UTC |
	|         | multinode-980367:/home/docker/cp-test_multinode-980367-m02_multinode-980367.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-980367 ssh -n                                                                 | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:38 UTC |
	|         | multinode-980367-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-980367 ssh -n multinode-980367 sudo cat                                       | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:38 UTC |
	|         | /home/docker/cp-test_multinode-980367-m02_multinode-980367.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-980367 cp multinode-980367-m02:/home/docker/cp-test.txt                       | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:38 UTC |
	|         | multinode-980367-m03:/home/docker/cp-test_multinode-980367-m02_multinode-980367-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-980367 ssh -n                                                                 | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:38 UTC |
	|         | multinode-980367-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-980367 ssh -n multinode-980367-m03 sudo cat                                   | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:38 UTC |
	|         | /home/docker/cp-test_multinode-980367-m02_multinode-980367-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-980367 cp testdata/cp-test.txt                                                | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:38 UTC |
	|         | multinode-980367-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-980367 ssh -n                                                                 | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:38 UTC |
	|         | multinode-980367-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-980367 cp multinode-980367-m03:/home/docker/cp-test.txt                       | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:38 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile171462700/001/cp-test_multinode-980367-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-980367 ssh -n                                                                 | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:38 UTC |
	|         | multinode-980367-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-980367 cp multinode-980367-m03:/home/docker/cp-test.txt                       | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:38 UTC |
	|         | multinode-980367:/home/docker/cp-test_multinode-980367-m03_multinode-980367.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-980367 ssh -n                                                                 | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:38 UTC |
	|         | multinode-980367-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-980367 ssh -n multinode-980367 sudo cat                                       | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:38 UTC |
	|         | /home/docker/cp-test_multinode-980367-m03_multinode-980367.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-980367 cp multinode-980367-m03:/home/docker/cp-test.txt                       | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:38 UTC |
	|         | multinode-980367-m02:/home/docker/cp-test_multinode-980367-m03_multinode-980367-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-980367 ssh -n                                                                 | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:38 UTC |
	|         | multinode-980367-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-980367 ssh -n multinode-980367-m02 sudo cat                                   | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:38 UTC |
	|         | /home/docker/cp-test_multinode-980367-m03_multinode-980367-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-980367 node stop m03                                                          | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:38 UTC |
	| node    | multinode-980367 node start                                                             | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:39 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-980367                                                                | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:39 UTC |                     |
	| stop    | -p multinode-980367                                                                     | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:39 UTC |                     |
	| start   | -p multinode-980367                                                                     | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:41 UTC | 04 Dec 24 20:44 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-980367                                                                | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:44 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 20:41:30
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 20:41:30.364118   46101 out.go:345] Setting OutFile to fd 1 ...
	I1204 20:41:30.364252   46101 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 20:41:30.364263   46101 out.go:358] Setting ErrFile to fd 2...
	I1204 20:41:30.364269   46101 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 20:41:30.364467   46101 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19985-10581/.minikube/bin
	I1204 20:41:30.364971   46101 out.go:352] Setting JSON to false
	I1204 20:41:30.365852   46101 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5040,"bootTime":1733339850,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 20:41:30.365948   46101 start.go:139] virtualization: kvm guest
	I1204 20:41:30.368749   46101 out.go:177] * [multinode-980367] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1204 20:41:30.370398   46101 notify.go:220] Checking for updates...
	I1204 20:41:30.370408   46101 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 20:41:30.371620   46101 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 20:41:30.373289   46101 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 20:41:30.374932   46101 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 20:41:30.376013   46101 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1204 20:41:30.377128   46101 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 20:41:30.378536   46101 config.go:182] Loaded profile config "multinode-980367": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:41:30.378618   46101 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 20:41:30.379037   46101 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:41:30.379087   46101 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:41:30.393622   46101 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41061
	I1204 20:41:30.394012   46101 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:41:30.394565   46101 main.go:141] libmachine: Using API Version  1
	I1204 20:41:30.394592   46101 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:41:30.394926   46101 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:41:30.395110   46101 main.go:141] libmachine: (multinode-980367) Calling .DriverName
	I1204 20:41:30.427656   46101 out.go:177] * Using the kvm2 driver based on existing profile
	I1204 20:41:30.428999   46101 start.go:297] selected driver: kvm2
	I1204 20:41:30.429012   46101 start.go:901] validating driver "kvm2" against &{Name:multinode-980367 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-980367 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.127 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.76 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.210 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 20:41:30.429138   46101 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 20:41:30.429437   46101 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 20:41:30.429504   46101 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19985-10581/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1204 20:41:30.443264   46101 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1204 20:41:30.443928   46101 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 20:41:30.443959   46101 cni.go:84] Creating CNI manager for ""
	I1204 20:41:30.444020   46101 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1204 20:41:30.444084   46101 start.go:340] cluster config:
	{Name:multinode-980367 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-980367 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.127 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.76 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.210 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 20:41:30.444228   46101 iso.go:125] acquiring lock: {Name:mk5fb0f3f6da76e6cd812291a551e1592ef2c232 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 20:41:30.445828   46101 out.go:177] * Starting "multinode-980367" primary control-plane node in "multinode-980367" cluster
	I1204 20:41:30.447126   46101 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 20:41:30.447152   46101 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1204 20:41:30.447158   46101 cache.go:56] Caching tarball of preloaded images
	I1204 20:41:30.447254   46101 preload.go:172] Found /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 20:41:30.447269   46101 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 20:41:30.447366   46101 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/multinode-980367/config.json ...
	I1204 20:41:30.447578   46101 start.go:360] acquireMachinesLock for multinode-980367: {Name:mkf124e8b45170ae95981b24944344de6899c5b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 20:41:30.447623   46101 start.go:364] duration metric: took 26.612µs to acquireMachinesLock for "multinode-980367"
	I1204 20:41:30.447642   46101 start.go:96] Skipping create...Using existing machine configuration
	I1204 20:41:30.447650   46101 fix.go:54] fixHost starting: 
	I1204 20:41:30.447890   46101 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:41:30.447924   46101 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:41:30.461060   46101 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45693
	I1204 20:41:30.461441   46101 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:41:30.461956   46101 main.go:141] libmachine: Using API Version  1
	I1204 20:41:30.461975   46101 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:41:30.462242   46101 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:41:30.462414   46101 main.go:141] libmachine: (multinode-980367) Calling .DriverName
	I1204 20:41:30.462531   46101 main.go:141] libmachine: (multinode-980367) Calling .GetState
	I1204 20:41:30.463995   46101 fix.go:112] recreateIfNeeded on multinode-980367: state=Running err=<nil>
	W1204 20:41:30.464023   46101 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 20:41:30.465673   46101 out.go:177] * Updating the running kvm2 "multinode-980367" VM ...
	I1204 20:41:30.466720   46101 machine.go:93] provisionDockerMachine start ...
	I1204 20:41:30.466738   46101 main.go:141] libmachine: (multinode-980367) Calling .DriverName
	I1204 20:41:30.466885   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHHostname
	I1204 20:41:30.469230   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:41:30.469633   46101 main.go:141] libmachine: (multinode-980367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:9b:dc", ip: ""} in network mk-multinode-980367: {Iface:virbr1 ExpiryTime:2024-12-04 21:36:04 +0000 UTC Type:0 Mac:52:54:00:b6:9b:dc Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-980367 Clientid:01:52:54:00:b6:9b:dc}
	I1204 20:41:30.469660   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined IP address 192.168.39.127 and MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:41:30.469797   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHPort
	I1204 20:41:30.469934   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHKeyPath
	I1204 20:41:30.470078   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHKeyPath
	I1204 20:41:30.470170   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHUsername
	I1204 20:41:30.470322   46101 main.go:141] libmachine: Using SSH client type: native
	I1204 20:41:30.470531   46101 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I1204 20:41:30.470546   46101 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 20:41:30.575982   46101 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-980367
	
	I1204 20:41:30.576019   46101 main.go:141] libmachine: (multinode-980367) Calling .GetMachineName
	I1204 20:41:30.576195   46101 buildroot.go:166] provisioning hostname "multinode-980367"
	I1204 20:41:30.576217   46101 main.go:141] libmachine: (multinode-980367) Calling .GetMachineName
	I1204 20:41:30.576403   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHHostname
	I1204 20:41:30.578926   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:41:30.579285   46101 main.go:141] libmachine: (multinode-980367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:9b:dc", ip: ""} in network mk-multinode-980367: {Iface:virbr1 ExpiryTime:2024-12-04 21:36:04 +0000 UTC Type:0 Mac:52:54:00:b6:9b:dc Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-980367 Clientid:01:52:54:00:b6:9b:dc}
	I1204 20:41:30.579304   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined IP address 192.168.39.127 and MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:41:30.579447   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHPort
	I1204 20:41:30.579597   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHKeyPath
	I1204 20:41:30.579728   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHKeyPath
	I1204 20:41:30.579845   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHUsername
	I1204 20:41:30.579979   46101 main.go:141] libmachine: Using SSH client type: native
	I1204 20:41:30.580126   46101 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I1204 20:41:30.580138   46101 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-980367 && echo "multinode-980367" | sudo tee /etc/hostname
	I1204 20:41:30.693343   46101 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-980367
	
	I1204 20:41:30.693376   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHHostname
	I1204 20:41:30.695982   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:41:30.696302   46101 main.go:141] libmachine: (multinode-980367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:9b:dc", ip: ""} in network mk-multinode-980367: {Iface:virbr1 ExpiryTime:2024-12-04 21:36:04 +0000 UTC Type:0 Mac:52:54:00:b6:9b:dc Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-980367 Clientid:01:52:54:00:b6:9b:dc}
	I1204 20:41:30.696323   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined IP address 192.168.39.127 and MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:41:30.696510   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHPort
	I1204 20:41:30.696668   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHKeyPath
	I1204 20:41:30.696808   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHKeyPath
	I1204 20:41:30.696915   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHUsername
	I1204 20:41:30.697029   46101 main.go:141] libmachine: Using SSH client type: native
	I1204 20:41:30.697174   46101 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I1204 20:41:30.697189   46101 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-980367' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-980367/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-980367' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 20:41:30.795808   46101 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 20:41:30.795853   46101 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 20:41:30.795886   46101 buildroot.go:174] setting up certificates
	I1204 20:41:30.795898   46101 provision.go:84] configureAuth start
	I1204 20:41:30.795915   46101 main.go:141] libmachine: (multinode-980367) Calling .GetMachineName
	I1204 20:41:30.796194   46101 main.go:141] libmachine: (multinode-980367) Calling .GetIP
	I1204 20:41:30.798764   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:41:30.799213   46101 main.go:141] libmachine: (multinode-980367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:9b:dc", ip: ""} in network mk-multinode-980367: {Iface:virbr1 ExpiryTime:2024-12-04 21:36:04 +0000 UTC Type:0 Mac:52:54:00:b6:9b:dc Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-980367 Clientid:01:52:54:00:b6:9b:dc}
	I1204 20:41:30.799262   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined IP address 192.168.39.127 and MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:41:30.799357   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHHostname
	I1204 20:41:30.801686   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:41:30.802066   46101 main.go:141] libmachine: (multinode-980367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:9b:dc", ip: ""} in network mk-multinode-980367: {Iface:virbr1 ExpiryTime:2024-12-04 21:36:04 +0000 UTC Type:0 Mac:52:54:00:b6:9b:dc Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-980367 Clientid:01:52:54:00:b6:9b:dc}
	I1204 20:41:30.802096   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined IP address 192.168.39.127 and MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:41:30.802218   46101 provision.go:143] copyHostCerts
	I1204 20:41:30.802246   46101 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 20:41:30.802280   46101 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 20:41:30.802289   46101 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 20:41:30.802355   46101 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 20:41:30.802441   46101 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 20:41:30.802466   46101 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 20:41:30.802473   46101 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 20:41:30.802497   46101 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 20:41:30.802552   46101 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 20:41:30.802569   46101 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 20:41:30.802574   46101 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 20:41:30.802599   46101 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 20:41:30.802700   46101 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.multinode-980367 san=[127.0.0.1 192.168.39.127 localhost minikube multinode-980367]
	I1204 20:41:31.020771   46101 provision.go:177] copyRemoteCerts
	I1204 20:41:31.020851   46101 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 20:41:31.020876   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHHostname
	I1204 20:41:31.023479   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:41:31.023822   46101 main.go:141] libmachine: (multinode-980367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:9b:dc", ip: ""} in network mk-multinode-980367: {Iface:virbr1 ExpiryTime:2024-12-04 21:36:04 +0000 UTC Type:0 Mac:52:54:00:b6:9b:dc Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-980367 Clientid:01:52:54:00:b6:9b:dc}
	I1204 20:41:31.023844   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined IP address 192.168.39.127 and MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:41:31.024050   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHPort
	I1204 20:41:31.024224   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHKeyPath
	I1204 20:41:31.024373   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHUsername
	I1204 20:41:31.024493   46101 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/multinode-980367/id_rsa Username:docker}
	I1204 20:41:31.101133   46101 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1204 20:41:31.101209   46101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 20:41:31.125886   46101 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1204 20:41:31.125952   46101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1204 20:41:31.149646   46101 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1204 20:41:31.149730   46101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 20:41:31.174550   46101 provision.go:87] duration metric: took 378.635665ms to configureAuth
	I1204 20:41:31.174583   46101 buildroot.go:189] setting minikube options for container-runtime
	I1204 20:41:31.174837   46101 config.go:182] Loaded profile config "multinode-980367": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:41:31.174923   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHHostname
	I1204 20:41:31.177288   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:41:31.177660   46101 main.go:141] libmachine: (multinode-980367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:9b:dc", ip: ""} in network mk-multinode-980367: {Iface:virbr1 ExpiryTime:2024-12-04 21:36:04 +0000 UTC Type:0 Mac:52:54:00:b6:9b:dc Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-980367 Clientid:01:52:54:00:b6:9b:dc}
	I1204 20:41:31.177710   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined IP address 192.168.39.127 and MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:41:31.177877   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHPort
	I1204 20:41:31.178056   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHKeyPath
	I1204 20:41:31.178174   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHKeyPath
	I1204 20:41:31.178328   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHUsername
	I1204 20:41:31.178472   46101 main.go:141] libmachine: Using SSH client type: native
	I1204 20:41:31.178628   46101 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I1204 20:41:31.178642   46101 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 20:43:01.926895   46101 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 20:43:01.926929   46101 machine.go:96] duration metric: took 1m31.460195118s to provisionDockerMachine
	I1204 20:43:01.926942   46101 start.go:293] postStartSetup for "multinode-980367" (driver="kvm2")
	I1204 20:43:01.926953   46101 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 20:43:01.926986   46101 main.go:141] libmachine: (multinode-980367) Calling .DriverName
	I1204 20:43:01.927328   46101 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 20:43:01.927364   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHHostname
	I1204 20:43:01.930522   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:43:01.931000   46101 main.go:141] libmachine: (multinode-980367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:9b:dc", ip: ""} in network mk-multinode-980367: {Iface:virbr1 ExpiryTime:2024-12-04 21:36:04 +0000 UTC Type:0 Mac:52:54:00:b6:9b:dc Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-980367 Clientid:01:52:54:00:b6:9b:dc}
	I1204 20:43:01.931033   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined IP address 192.168.39.127 and MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:43:01.931237   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHPort
	I1204 20:43:01.931421   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHKeyPath
	I1204 20:43:01.931586   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHUsername
	I1204 20:43:01.931716   46101 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/multinode-980367/id_rsa Username:docker}
	I1204 20:43:02.010441   46101 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 20:43:02.014340   46101 command_runner.go:130] > NAME=Buildroot
	I1204 20:43:02.014395   46101 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1204 20:43:02.014404   46101 command_runner.go:130] > ID=buildroot
	I1204 20:43:02.014412   46101 command_runner.go:130] > VERSION_ID=2023.02.9
	I1204 20:43:02.014419   46101 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1204 20:43:02.014472   46101 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 20:43:02.014495   46101 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 20:43:02.014566   46101 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 20:43:02.014647   46101 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 20:43:02.014658   46101 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> /etc/ssl/certs/177432.pem
	I1204 20:43:02.014743   46101 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 20:43:02.024687   46101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 20:43:02.047962   46101 start.go:296] duration metric: took 121.006853ms for postStartSetup
	I1204 20:43:02.048001   46101 fix.go:56] duration metric: took 1m31.600351476s for fixHost
	I1204 20:43:02.048021   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHHostname
	I1204 20:43:02.050883   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:43:02.051276   46101 main.go:141] libmachine: (multinode-980367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:9b:dc", ip: ""} in network mk-multinode-980367: {Iface:virbr1 ExpiryTime:2024-12-04 21:36:04 +0000 UTC Type:0 Mac:52:54:00:b6:9b:dc Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-980367 Clientid:01:52:54:00:b6:9b:dc}
	I1204 20:43:02.051304   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined IP address 192.168.39.127 and MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:43:02.051492   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHPort
	I1204 20:43:02.051682   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHKeyPath
	I1204 20:43:02.051862   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHKeyPath
	I1204 20:43:02.052051   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHUsername
	I1204 20:43:02.052231   46101 main.go:141] libmachine: Using SSH client type: native
	I1204 20:43:02.052435   46101 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I1204 20:43:02.052446   46101 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 20:43:02.148010   46101 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733344982.113341650
	
	I1204 20:43:02.148032   46101 fix.go:216] guest clock: 1733344982.113341650
	I1204 20:43:02.148042   46101 fix.go:229] Guest: 2024-12-04 20:43:02.11334165 +0000 UTC Remote: 2024-12-04 20:43:02.048005204 +0000 UTC m=+91.718994206 (delta=65.336446ms)
	I1204 20:43:02.148092   46101 fix.go:200] guest clock delta is within tolerance: 65.336446ms
	I1204 20:43:02.148098   46101 start.go:83] releasing machines lock for "multinode-980367", held for 1m31.700463454s
	I1204 20:43:02.148120   46101 main.go:141] libmachine: (multinode-980367) Calling .DriverName
	I1204 20:43:02.148366   46101 main.go:141] libmachine: (multinode-980367) Calling .GetIP
	I1204 20:43:02.150891   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:43:02.151324   46101 main.go:141] libmachine: (multinode-980367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:9b:dc", ip: ""} in network mk-multinode-980367: {Iface:virbr1 ExpiryTime:2024-12-04 21:36:04 +0000 UTC Type:0 Mac:52:54:00:b6:9b:dc Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-980367 Clientid:01:52:54:00:b6:9b:dc}
	I1204 20:43:02.151344   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined IP address 192.168.39.127 and MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:43:02.151556   46101 main.go:141] libmachine: (multinode-980367) Calling .DriverName
	I1204 20:43:02.152122   46101 main.go:141] libmachine: (multinode-980367) Calling .DriverName
	I1204 20:43:02.152302   46101 main.go:141] libmachine: (multinode-980367) Calling .DriverName
	I1204 20:43:02.152383   46101 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 20:43:02.152439   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHHostname
	I1204 20:43:02.152521   46101 ssh_runner.go:195] Run: cat /version.json
	I1204 20:43:02.152550   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHHostname
	I1204 20:43:02.155141   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:43:02.155396   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:43:02.155569   46101 main.go:141] libmachine: (multinode-980367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:9b:dc", ip: ""} in network mk-multinode-980367: {Iface:virbr1 ExpiryTime:2024-12-04 21:36:04 +0000 UTC Type:0 Mac:52:54:00:b6:9b:dc Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-980367 Clientid:01:52:54:00:b6:9b:dc}
	I1204 20:43:02.155593   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined IP address 192.168.39.127 and MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:43:02.155723   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHPort
	I1204 20:43:02.155887   46101 main.go:141] libmachine: (multinode-980367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:9b:dc", ip: ""} in network mk-multinode-980367: {Iface:virbr1 ExpiryTime:2024-12-04 21:36:04 +0000 UTC Type:0 Mac:52:54:00:b6:9b:dc Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-980367 Clientid:01:52:54:00:b6:9b:dc}
	I1204 20:43:02.155904   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHKeyPath
	I1204 20:43:02.155923   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined IP address 192.168.39.127 and MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:43:02.156097   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHPort
	I1204 20:43:02.156097   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHUsername
	I1204 20:43:02.156270   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHKeyPath
	I1204 20:43:02.156271   46101 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/multinode-980367/id_rsa Username:docker}
	I1204 20:43:02.156434   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHUsername
	I1204 20:43:02.156552   46101 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/multinode-980367/id_rsa Username:docker}
	I1204 20:43:02.236444   46101 command_runner.go:130] > {"iso_version": "v1.34.0-1730913550-19917", "kicbase_version": "v0.0.45-1730888964-19917", "minikube_version": "v1.34.0", "commit": "72f43dde5d92c8ae490d0727dad53fb3ed6aa41e"}
	I1204 20:43:02.236864   46101 ssh_runner.go:195] Run: systemctl --version
	I1204 20:43:02.262624   46101 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1204 20:43:02.263386   46101 command_runner.go:130] > systemd 252 (252)
	I1204 20:43:02.263429   46101 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1204 20:43:02.263492   46101 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 20:43:02.432108   46101 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1204 20:43:02.443291   46101 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1204 20:43:02.443567   46101 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 20:43:02.443651   46101 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 20:43:02.454403   46101 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1204 20:43:02.454426   46101 start.go:495] detecting cgroup driver to use...
	I1204 20:43:02.454492   46101 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 20:43:02.472476   46101 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 20:43:02.487488   46101 docker.go:217] disabling cri-docker service (if available) ...
	I1204 20:43:02.487551   46101 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 20:43:02.501596   46101 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 20:43:02.516133   46101 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 20:43:02.691095   46101 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 20:43:02.843474   46101 docker.go:233] disabling docker service ...
	I1204 20:43:02.843542   46101 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 20:43:02.860646   46101 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 20:43:02.873907   46101 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 20:43:03.015535   46101 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 20:43:03.184744   46101 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 20:43:03.211147   46101 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 20:43:03.237439   46101 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1204 20:43:03.237498   46101 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 20:43:03.237582   46101 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:43:03.257335   46101 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 20:43:03.257402   46101 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:43:03.288865   46101 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:43:03.302989   46101 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:43:03.323913   46101 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 20:43:03.336568   46101 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:43:03.350483   46101 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:43:03.363200   46101 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:43:03.375080   46101 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 20:43:03.384338   46101 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1204 20:43:03.384585   46101 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 20:43:03.393441   46101 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 20:43:03.542407   46101 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 20:43:13.607542   46101 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.065091128s)
	I1204 20:43:13.607572   46101 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 20:43:13.607617   46101 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 20:43:13.612772   46101 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1204 20:43:13.612801   46101 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1204 20:43:13.612810   46101 command_runner.go:130] > Device: 0,22	Inode: 1337        Links: 1
	I1204 20:43:13.612820   46101 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1204 20:43:13.612828   46101 command_runner.go:130] > Access: 2024-12-04 20:43:13.482201422 +0000
	I1204 20:43:13.612850   46101 command_runner.go:130] > Modify: 2024-12-04 20:43:13.440198115 +0000
	I1204 20:43:13.612858   46101 command_runner.go:130] > Change: 2024-12-04 20:43:13.440198115 +0000
	I1204 20:43:13.612866   46101 command_runner.go:130] >  Birth: -
	I1204 20:43:13.612899   46101 start.go:563] Will wait 60s for crictl version
	I1204 20:43:13.612948   46101 ssh_runner.go:195] Run: which crictl
	I1204 20:43:13.616561   46101 command_runner.go:130] > /usr/bin/crictl
	I1204 20:43:13.616633   46101 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 20:43:13.657567   46101 command_runner.go:130] > Version:  0.1.0
	I1204 20:43:13.657596   46101 command_runner.go:130] > RuntimeName:  cri-o
	I1204 20:43:13.657603   46101 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1204 20:43:13.657611   46101 command_runner.go:130] > RuntimeApiVersion:  v1
	I1204 20:43:13.657665   46101 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 20:43:13.657748   46101 ssh_runner.go:195] Run: crio --version
	I1204 20:43:13.685984   46101 command_runner.go:130] > crio version 1.29.1
	I1204 20:43:13.686016   46101 command_runner.go:130] > Version:        1.29.1
	I1204 20:43:13.686025   46101 command_runner.go:130] > GitCommit:      unknown
	I1204 20:43:13.686030   46101 command_runner.go:130] > GitCommitDate:  unknown
	I1204 20:43:13.686036   46101 command_runner.go:130] > GitTreeState:   clean
	I1204 20:43:13.686044   46101 command_runner.go:130] > BuildDate:      2024-11-06T23:09:37Z
	I1204 20:43:13.686050   46101 command_runner.go:130] > GoVersion:      go1.21.6
	I1204 20:43:13.686055   46101 command_runner.go:130] > Compiler:       gc
	I1204 20:43:13.686062   46101 command_runner.go:130] > Platform:       linux/amd64
	I1204 20:43:13.686068   46101 command_runner.go:130] > Linkmode:       dynamic
	I1204 20:43:13.686075   46101 command_runner.go:130] > BuildTags:      
	I1204 20:43:13.686081   46101 command_runner.go:130] >   containers_image_ostree_stub
	I1204 20:43:13.686088   46101 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1204 20:43:13.686095   46101 command_runner.go:130] >   btrfs_noversion
	I1204 20:43:13.686108   46101 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1204 20:43:13.686117   46101 command_runner.go:130] >   libdm_no_deferred_remove
	I1204 20:43:13.686123   46101 command_runner.go:130] >   seccomp
	I1204 20:43:13.686133   46101 command_runner.go:130] > LDFlags:          unknown
	I1204 20:43:13.686137   46101 command_runner.go:130] > SeccompEnabled:   true
	I1204 20:43:13.686145   46101 command_runner.go:130] > AppArmorEnabled:  false
	I1204 20:43:13.686222   46101 ssh_runner.go:195] Run: crio --version
	I1204 20:43:13.713718   46101 command_runner.go:130] > crio version 1.29.1
	I1204 20:43:13.713765   46101 command_runner.go:130] > Version:        1.29.1
	I1204 20:43:13.713773   46101 command_runner.go:130] > GitCommit:      unknown
	I1204 20:43:13.713778   46101 command_runner.go:130] > GitCommitDate:  unknown
	I1204 20:43:13.713782   46101 command_runner.go:130] > GitTreeState:   clean
	I1204 20:43:13.713787   46101 command_runner.go:130] > BuildDate:      2024-11-06T23:09:37Z
	I1204 20:43:13.713791   46101 command_runner.go:130] > GoVersion:      go1.21.6
	I1204 20:43:13.713795   46101 command_runner.go:130] > Compiler:       gc
	I1204 20:43:13.713802   46101 command_runner.go:130] > Platform:       linux/amd64
	I1204 20:43:13.713809   46101 command_runner.go:130] > Linkmode:       dynamic
	I1204 20:43:13.713815   46101 command_runner.go:130] > BuildTags:      
	I1204 20:43:13.713836   46101 command_runner.go:130] >   containers_image_ostree_stub
	I1204 20:43:13.713844   46101 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1204 20:43:13.713851   46101 command_runner.go:130] >   btrfs_noversion
	I1204 20:43:13.713861   46101 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1204 20:43:13.713867   46101 command_runner.go:130] >   libdm_no_deferred_remove
	I1204 20:43:13.713871   46101 command_runner.go:130] >   seccomp
	I1204 20:43:13.713875   46101 command_runner.go:130] > LDFlags:          unknown
	I1204 20:43:13.713880   46101 command_runner.go:130] > SeccompEnabled:   true
	I1204 20:43:13.713889   46101 command_runner.go:130] > AppArmorEnabled:  false
	I1204 20:43:13.716491   46101 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 20:43:13.717729   46101 main.go:141] libmachine: (multinode-980367) Calling .GetIP
	I1204 20:43:13.720764   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:43:13.721182   46101 main.go:141] libmachine: (multinode-980367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:9b:dc", ip: ""} in network mk-multinode-980367: {Iface:virbr1 ExpiryTime:2024-12-04 21:36:04 +0000 UTC Type:0 Mac:52:54:00:b6:9b:dc Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-980367 Clientid:01:52:54:00:b6:9b:dc}
	I1204 20:43:13.721208   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined IP address 192.168.39.127 and MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:43:13.721396   46101 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1204 20:43:13.725398   46101 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1204 20:43:13.725489   46101 kubeadm.go:883] updating cluster {Name:multinode-980367 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
31.2 ClusterName:multinode-980367 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.127 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.76 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.210 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMir
ror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 20:43:13.725613   46101 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 20:43:13.725662   46101 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 20:43:13.765695   46101 command_runner.go:130] > {
	I1204 20:43:13.765723   46101 command_runner.go:130] >   "images": [
	I1204 20:43:13.765727   46101 command_runner.go:130] >     {
	I1204 20:43:13.765755   46101 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1204 20:43:13.765762   46101 command_runner.go:130] >       "repoTags": [
	I1204 20:43:13.765770   46101 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1204 20:43:13.765773   46101 command_runner.go:130] >       ],
	I1204 20:43:13.765777   46101 command_runner.go:130] >       "repoDigests": [
	I1204 20:43:13.765786   46101 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1204 20:43:13.765793   46101 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1204 20:43:13.765797   46101 command_runner.go:130] >       ],
	I1204 20:43:13.765801   46101 command_runner.go:130] >       "size": "94965812",
	I1204 20:43:13.765805   46101 command_runner.go:130] >       "uid": null,
	I1204 20:43:13.765810   46101 command_runner.go:130] >       "username": "",
	I1204 20:43:13.765817   46101 command_runner.go:130] >       "spec": null,
	I1204 20:43:13.765824   46101 command_runner.go:130] >       "pinned": false
	I1204 20:43:13.765828   46101 command_runner.go:130] >     },
	I1204 20:43:13.765831   46101 command_runner.go:130] >     {
	I1204 20:43:13.765837   46101 command_runner.go:130] >       "id": "9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5",
	I1204 20:43:13.765842   46101 command_runner.go:130] >       "repoTags": [
	I1204 20:43:13.765847   46101 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241023-a345ebe4"
	I1204 20:43:13.765851   46101 command_runner.go:130] >       ],
	I1204 20:43:13.765858   46101 command_runner.go:130] >       "repoDigests": [
	I1204 20:43:13.765864   46101 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16",
	I1204 20:43:13.765871   46101 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e39a44bd13d0b4532d0436a1c2fafdd1a8c57fb327770004098162f0bb96132d"
	I1204 20:43:13.765875   46101 command_runner.go:130] >       ],
	I1204 20:43:13.765880   46101 command_runner.go:130] >       "size": "94958644",
	I1204 20:43:13.765885   46101 command_runner.go:130] >       "uid": null,
	I1204 20:43:13.765891   46101 command_runner.go:130] >       "username": "",
	I1204 20:43:13.765896   46101 command_runner.go:130] >       "spec": null,
	I1204 20:43:13.765900   46101 command_runner.go:130] >       "pinned": false
	I1204 20:43:13.765903   46101 command_runner.go:130] >     },
	I1204 20:43:13.765907   46101 command_runner.go:130] >     {
	I1204 20:43:13.765913   46101 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1204 20:43:13.765920   46101 command_runner.go:130] >       "repoTags": [
	I1204 20:43:13.765924   46101 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1204 20:43:13.765928   46101 command_runner.go:130] >       ],
	I1204 20:43:13.765932   46101 command_runner.go:130] >       "repoDigests": [
	I1204 20:43:13.765939   46101 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1204 20:43:13.765946   46101 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1204 20:43:13.765950   46101 command_runner.go:130] >       ],
	I1204 20:43:13.765954   46101 command_runner.go:130] >       "size": "1363676",
	I1204 20:43:13.765958   46101 command_runner.go:130] >       "uid": null,
	I1204 20:43:13.765962   46101 command_runner.go:130] >       "username": "",
	I1204 20:43:13.765966   46101 command_runner.go:130] >       "spec": null,
	I1204 20:43:13.765971   46101 command_runner.go:130] >       "pinned": false
	I1204 20:43:13.765974   46101 command_runner.go:130] >     },
	I1204 20:43:13.765978   46101 command_runner.go:130] >     {
	I1204 20:43:13.765986   46101 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1204 20:43:13.765990   46101 command_runner.go:130] >       "repoTags": [
	I1204 20:43:13.765995   46101 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1204 20:43:13.766001   46101 command_runner.go:130] >       ],
	I1204 20:43:13.766005   46101 command_runner.go:130] >       "repoDigests": [
	I1204 20:43:13.766012   46101 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1204 20:43:13.766025   46101 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1204 20:43:13.766030   46101 command_runner.go:130] >       ],
	I1204 20:43:13.766035   46101 command_runner.go:130] >       "size": "31470524",
	I1204 20:43:13.766046   46101 command_runner.go:130] >       "uid": null,
	I1204 20:43:13.766054   46101 command_runner.go:130] >       "username": "",
	I1204 20:43:13.766059   46101 command_runner.go:130] >       "spec": null,
	I1204 20:43:13.766065   46101 command_runner.go:130] >       "pinned": false
	I1204 20:43:13.766069   46101 command_runner.go:130] >     },
	I1204 20:43:13.766072   46101 command_runner.go:130] >     {
	I1204 20:43:13.766079   46101 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1204 20:43:13.766086   46101 command_runner.go:130] >       "repoTags": [
	I1204 20:43:13.766091   46101 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1204 20:43:13.766097   46101 command_runner.go:130] >       ],
	I1204 20:43:13.766101   46101 command_runner.go:130] >       "repoDigests": [
	I1204 20:43:13.766108   46101 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1204 20:43:13.766117   46101 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1204 20:43:13.766122   46101 command_runner.go:130] >       ],
	I1204 20:43:13.766126   46101 command_runner.go:130] >       "size": "63273227",
	I1204 20:43:13.766131   46101 command_runner.go:130] >       "uid": null,
	I1204 20:43:13.766137   46101 command_runner.go:130] >       "username": "nonroot",
	I1204 20:43:13.766161   46101 command_runner.go:130] >       "spec": null,
	I1204 20:43:13.766165   46101 command_runner.go:130] >       "pinned": false
	I1204 20:43:13.766171   46101 command_runner.go:130] >     },
	I1204 20:43:13.766174   46101 command_runner.go:130] >     {
	I1204 20:43:13.766180   46101 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1204 20:43:13.766184   46101 command_runner.go:130] >       "repoTags": [
	I1204 20:43:13.766189   46101 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1204 20:43:13.766192   46101 command_runner.go:130] >       ],
	I1204 20:43:13.766196   46101 command_runner.go:130] >       "repoDigests": [
	I1204 20:43:13.766205   46101 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1204 20:43:13.766212   46101 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1204 20:43:13.766218   46101 command_runner.go:130] >       ],
	I1204 20:43:13.766222   46101 command_runner.go:130] >       "size": "149009664",
	I1204 20:43:13.766226   46101 command_runner.go:130] >       "uid": {
	I1204 20:43:13.766230   46101 command_runner.go:130] >         "value": "0"
	I1204 20:43:13.766234   46101 command_runner.go:130] >       },
	I1204 20:43:13.766238   46101 command_runner.go:130] >       "username": "",
	I1204 20:43:13.766242   46101 command_runner.go:130] >       "spec": null,
	I1204 20:43:13.766247   46101 command_runner.go:130] >       "pinned": false
	I1204 20:43:13.766253   46101 command_runner.go:130] >     },
	I1204 20:43:13.766258   46101 command_runner.go:130] >     {
	I1204 20:43:13.766266   46101 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1204 20:43:13.766271   46101 command_runner.go:130] >       "repoTags": [
	I1204 20:43:13.766276   46101 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1204 20:43:13.766282   46101 command_runner.go:130] >       ],
	I1204 20:43:13.766286   46101 command_runner.go:130] >       "repoDigests": [
	I1204 20:43:13.766293   46101 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1204 20:43:13.766302   46101 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1204 20:43:13.766306   46101 command_runner.go:130] >       ],
	I1204 20:43:13.766310   46101 command_runner.go:130] >       "size": "95274464",
	I1204 20:43:13.766316   46101 command_runner.go:130] >       "uid": {
	I1204 20:43:13.766319   46101 command_runner.go:130] >         "value": "0"
	I1204 20:43:13.766323   46101 command_runner.go:130] >       },
	I1204 20:43:13.766329   46101 command_runner.go:130] >       "username": "",
	I1204 20:43:13.766333   46101 command_runner.go:130] >       "spec": null,
	I1204 20:43:13.766339   46101 command_runner.go:130] >       "pinned": false
	I1204 20:43:13.766342   46101 command_runner.go:130] >     },
	I1204 20:43:13.766347   46101 command_runner.go:130] >     {
	I1204 20:43:13.766353   46101 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1204 20:43:13.766359   46101 command_runner.go:130] >       "repoTags": [
	I1204 20:43:13.766364   46101 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1204 20:43:13.766370   46101 command_runner.go:130] >       ],
	I1204 20:43:13.766374   46101 command_runner.go:130] >       "repoDigests": [
	I1204 20:43:13.766387   46101 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1204 20:43:13.766402   46101 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1204 20:43:13.766406   46101 command_runner.go:130] >       ],
	I1204 20:43:13.766410   46101 command_runner.go:130] >       "size": "89474374",
	I1204 20:43:13.766413   46101 command_runner.go:130] >       "uid": {
	I1204 20:43:13.766417   46101 command_runner.go:130] >         "value": "0"
	I1204 20:43:13.766421   46101 command_runner.go:130] >       },
	I1204 20:43:13.766425   46101 command_runner.go:130] >       "username": "",
	I1204 20:43:13.766429   46101 command_runner.go:130] >       "spec": null,
	I1204 20:43:13.766433   46101 command_runner.go:130] >       "pinned": false
	I1204 20:43:13.766436   46101 command_runner.go:130] >     },
	I1204 20:43:13.766439   46101 command_runner.go:130] >     {
	I1204 20:43:13.766445   46101 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1204 20:43:13.766449   46101 command_runner.go:130] >       "repoTags": [
	I1204 20:43:13.766453   46101 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1204 20:43:13.766457   46101 command_runner.go:130] >       ],
	I1204 20:43:13.766464   46101 command_runner.go:130] >       "repoDigests": [
	I1204 20:43:13.766471   46101 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1204 20:43:13.766478   46101 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1204 20:43:13.766483   46101 command_runner.go:130] >       ],
	I1204 20:43:13.766488   46101 command_runner.go:130] >       "size": "92783513",
	I1204 20:43:13.766492   46101 command_runner.go:130] >       "uid": null,
	I1204 20:43:13.766496   46101 command_runner.go:130] >       "username": "",
	I1204 20:43:13.766503   46101 command_runner.go:130] >       "spec": null,
	I1204 20:43:13.766507   46101 command_runner.go:130] >       "pinned": false
	I1204 20:43:13.766510   46101 command_runner.go:130] >     },
	I1204 20:43:13.766514   46101 command_runner.go:130] >     {
	I1204 20:43:13.766520   46101 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1204 20:43:13.766524   46101 command_runner.go:130] >       "repoTags": [
	I1204 20:43:13.766529   46101 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1204 20:43:13.766535   46101 command_runner.go:130] >       ],
	I1204 20:43:13.766539   46101 command_runner.go:130] >       "repoDigests": [
	I1204 20:43:13.766547   46101 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1204 20:43:13.766556   46101 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1204 20:43:13.766560   46101 command_runner.go:130] >       ],
	I1204 20:43:13.766564   46101 command_runner.go:130] >       "size": "68457798",
	I1204 20:43:13.766568   46101 command_runner.go:130] >       "uid": {
	I1204 20:43:13.766571   46101 command_runner.go:130] >         "value": "0"
	I1204 20:43:13.766575   46101 command_runner.go:130] >       },
	I1204 20:43:13.766579   46101 command_runner.go:130] >       "username": "",
	I1204 20:43:13.766583   46101 command_runner.go:130] >       "spec": null,
	I1204 20:43:13.766587   46101 command_runner.go:130] >       "pinned": false
	I1204 20:43:13.766591   46101 command_runner.go:130] >     },
	I1204 20:43:13.766594   46101 command_runner.go:130] >     {
	I1204 20:43:13.766600   46101 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1204 20:43:13.766607   46101 command_runner.go:130] >       "repoTags": [
	I1204 20:43:13.766611   46101 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1204 20:43:13.766614   46101 command_runner.go:130] >       ],
	I1204 20:43:13.766618   46101 command_runner.go:130] >       "repoDigests": [
	I1204 20:43:13.766624   46101 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1204 20:43:13.766634   46101 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1204 20:43:13.766637   46101 command_runner.go:130] >       ],
	I1204 20:43:13.766641   46101 command_runner.go:130] >       "size": "742080",
	I1204 20:43:13.766644   46101 command_runner.go:130] >       "uid": {
	I1204 20:43:13.766648   46101 command_runner.go:130] >         "value": "65535"
	I1204 20:43:13.766652   46101 command_runner.go:130] >       },
	I1204 20:43:13.766655   46101 command_runner.go:130] >       "username": "",
	I1204 20:43:13.766659   46101 command_runner.go:130] >       "spec": null,
	I1204 20:43:13.766663   46101 command_runner.go:130] >       "pinned": true
	I1204 20:43:13.766668   46101 command_runner.go:130] >     }
	I1204 20:43:13.766671   46101 command_runner.go:130] >   ]
	I1204 20:43:13.766674   46101 command_runner.go:130] > }
	I1204 20:43:13.767562   46101 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 20:43:13.767582   46101 crio.go:433] Images already preloaded, skipping extraction
	I1204 20:43:13.767629   46101 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 20:43:13.799639   46101 command_runner.go:130] > {
	I1204 20:43:13.799662   46101 command_runner.go:130] >   "images": [
	I1204 20:43:13.799668   46101 command_runner.go:130] >     {
	I1204 20:43:13.799675   46101 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1204 20:43:13.799682   46101 command_runner.go:130] >       "repoTags": [
	I1204 20:43:13.799694   46101 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1204 20:43:13.799702   46101 command_runner.go:130] >       ],
	I1204 20:43:13.799708   46101 command_runner.go:130] >       "repoDigests": [
	I1204 20:43:13.799721   46101 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1204 20:43:13.799731   46101 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1204 20:43:13.799737   46101 command_runner.go:130] >       ],
	I1204 20:43:13.799745   46101 command_runner.go:130] >       "size": "94965812",
	I1204 20:43:13.799751   46101 command_runner.go:130] >       "uid": null,
	I1204 20:43:13.799758   46101 command_runner.go:130] >       "username": "",
	I1204 20:43:13.799773   46101 command_runner.go:130] >       "spec": null,
	I1204 20:43:13.799780   46101 command_runner.go:130] >       "pinned": false
	I1204 20:43:13.799786   46101 command_runner.go:130] >     },
	I1204 20:43:13.799792   46101 command_runner.go:130] >     {
	I1204 20:43:13.799802   46101 command_runner.go:130] >       "id": "9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5",
	I1204 20:43:13.799809   46101 command_runner.go:130] >       "repoTags": [
	I1204 20:43:13.799820   46101 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241023-a345ebe4"
	I1204 20:43:13.799830   46101 command_runner.go:130] >       ],
	I1204 20:43:13.799836   46101 command_runner.go:130] >       "repoDigests": [
	I1204 20:43:13.799846   46101 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16",
	I1204 20:43:13.799861   46101 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e39a44bd13d0b4532d0436a1c2fafdd1a8c57fb327770004098162f0bb96132d"
	I1204 20:43:13.799867   46101 command_runner.go:130] >       ],
	I1204 20:43:13.799874   46101 command_runner.go:130] >       "size": "94958644",
	I1204 20:43:13.799881   46101 command_runner.go:130] >       "uid": null,
	I1204 20:43:13.799891   46101 command_runner.go:130] >       "username": "",
	I1204 20:43:13.799899   46101 command_runner.go:130] >       "spec": null,
	I1204 20:43:13.799907   46101 command_runner.go:130] >       "pinned": false
	I1204 20:43:13.799916   46101 command_runner.go:130] >     },
	I1204 20:43:13.799919   46101 command_runner.go:130] >     {
	I1204 20:43:13.799926   46101 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1204 20:43:13.799931   46101 command_runner.go:130] >       "repoTags": [
	I1204 20:43:13.799937   46101 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1204 20:43:13.799940   46101 command_runner.go:130] >       ],
	I1204 20:43:13.799945   46101 command_runner.go:130] >       "repoDigests": [
	I1204 20:43:13.799965   46101 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1204 20:43:13.799975   46101 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1204 20:43:13.799978   46101 command_runner.go:130] >       ],
	I1204 20:43:13.799982   46101 command_runner.go:130] >       "size": "1363676",
	I1204 20:43:13.799986   46101 command_runner.go:130] >       "uid": null,
	I1204 20:43:13.799990   46101 command_runner.go:130] >       "username": "",
	I1204 20:43:13.799997   46101 command_runner.go:130] >       "spec": null,
	I1204 20:43:13.800003   46101 command_runner.go:130] >       "pinned": false
	I1204 20:43:13.800006   46101 command_runner.go:130] >     },
	I1204 20:43:13.800012   46101 command_runner.go:130] >     {
	I1204 20:43:13.800018   46101 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1204 20:43:13.800025   46101 command_runner.go:130] >       "repoTags": [
	I1204 20:43:13.800030   46101 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1204 20:43:13.800036   46101 command_runner.go:130] >       ],
	I1204 20:43:13.800041   46101 command_runner.go:130] >       "repoDigests": [
	I1204 20:43:13.800052   46101 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1204 20:43:13.800066   46101 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1204 20:43:13.800072   46101 command_runner.go:130] >       ],
	I1204 20:43:13.800077   46101 command_runner.go:130] >       "size": "31470524",
	I1204 20:43:13.800084   46101 command_runner.go:130] >       "uid": null,
	I1204 20:43:13.800088   46101 command_runner.go:130] >       "username": "",
	I1204 20:43:13.800095   46101 command_runner.go:130] >       "spec": null,
	I1204 20:43:13.800098   46101 command_runner.go:130] >       "pinned": false
	I1204 20:43:13.800104   46101 command_runner.go:130] >     },
	I1204 20:43:13.800108   46101 command_runner.go:130] >     {
	I1204 20:43:13.800116   46101 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1204 20:43:13.800121   46101 command_runner.go:130] >       "repoTags": [
	I1204 20:43:13.800126   46101 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1204 20:43:13.800132   46101 command_runner.go:130] >       ],
	I1204 20:43:13.800135   46101 command_runner.go:130] >       "repoDigests": [
	I1204 20:43:13.800145   46101 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1204 20:43:13.800153   46101 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1204 20:43:13.800157   46101 command_runner.go:130] >       ],
	I1204 20:43:13.800161   46101 command_runner.go:130] >       "size": "63273227",
	I1204 20:43:13.800164   46101 command_runner.go:130] >       "uid": null,
	I1204 20:43:13.800168   46101 command_runner.go:130] >       "username": "nonroot",
	I1204 20:43:13.800172   46101 command_runner.go:130] >       "spec": null,
	I1204 20:43:13.800179   46101 command_runner.go:130] >       "pinned": false
	I1204 20:43:13.800183   46101 command_runner.go:130] >     },
	I1204 20:43:13.800190   46101 command_runner.go:130] >     {
	I1204 20:43:13.800196   46101 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1204 20:43:13.800202   46101 command_runner.go:130] >       "repoTags": [
	I1204 20:43:13.800207   46101 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1204 20:43:13.800213   46101 command_runner.go:130] >       ],
	I1204 20:43:13.800218   46101 command_runner.go:130] >       "repoDigests": [
	I1204 20:43:13.800227   46101 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1204 20:43:13.800236   46101 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1204 20:43:13.800242   46101 command_runner.go:130] >       ],
	I1204 20:43:13.800247   46101 command_runner.go:130] >       "size": "149009664",
	I1204 20:43:13.800254   46101 command_runner.go:130] >       "uid": {
	I1204 20:43:13.800258   46101 command_runner.go:130] >         "value": "0"
	I1204 20:43:13.800264   46101 command_runner.go:130] >       },
	I1204 20:43:13.800270   46101 command_runner.go:130] >       "username": "",
	I1204 20:43:13.800274   46101 command_runner.go:130] >       "spec": null,
	I1204 20:43:13.800280   46101 command_runner.go:130] >       "pinned": false
	I1204 20:43:13.800284   46101 command_runner.go:130] >     },
	I1204 20:43:13.800292   46101 command_runner.go:130] >     {
	I1204 20:43:13.800298   46101 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1204 20:43:13.800304   46101 command_runner.go:130] >       "repoTags": [
	I1204 20:43:13.800309   46101 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1204 20:43:13.800315   46101 command_runner.go:130] >       ],
	I1204 20:43:13.800319   46101 command_runner.go:130] >       "repoDigests": [
	I1204 20:43:13.800329   46101 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1204 20:43:13.800338   46101 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1204 20:43:13.800344   46101 command_runner.go:130] >       ],
	I1204 20:43:13.800349   46101 command_runner.go:130] >       "size": "95274464",
	I1204 20:43:13.800355   46101 command_runner.go:130] >       "uid": {
	I1204 20:43:13.800358   46101 command_runner.go:130] >         "value": "0"
	I1204 20:43:13.800364   46101 command_runner.go:130] >       },
	I1204 20:43:13.800368   46101 command_runner.go:130] >       "username": "",
	I1204 20:43:13.800374   46101 command_runner.go:130] >       "spec": null,
	I1204 20:43:13.800377   46101 command_runner.go:130] >       "pinned": false
	I1204 20:43:13.800383   46101 command_runner.go:130] >     },
	I1204 20:43:13.800386   46101 command_runner.go:130] >     {
	I1204 20:43:13.800394   46101 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1204 20:43:13.800400   46101 command_runner.go:130] >       "repoTags": [
	I1204 20:43:13.800405   46101 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1204 20:43:13.800410   46101 command_runner.go:130] >       ],
	I1204 20:43:13.800414   46101 command_runner.go:130] >       "repoDigests": [
	I1204 20:43:13.800430   46101 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1204 20:43:13.800444   46101 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1204 20:43:13.800450   46101 command_runner.go:130] >       ],
	I1204 20:43:13.800455   46101 command_runner.go:130] >       "size": "89474374",
	I1204 20:43:13.800462   46101 command_runner.go:130] >       "uid": {
	I1204 20:43:13.800465   46101 command_runner.go:130] >         "value": "0"
	I1204 20:43:13.800471   46101 command_runner.go:130] >       },
	I1204 20:43:13.800476   46101 command_runner.go:130] >       "username": "",
	I1204 20:43:13.800482   46101 command_runner.go:130] >       "spec": null,
	I1204 20:43:13.800486   46101 command_runner.go:130] >       "pinned": false
	I1204 20:43:13.800492   46101 command_runner.go:130] >     },
	I1204 20:43:13.800495   46101 command_runner.go:130] >     {
	I1204 20:43:13.800503   46101 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1204 20:43:13.800507   46101 command_runner.go:130] >       "repoTags": [
	I1204 20:43:13.800514   46101 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1204 20:43:13.800517   46101 command_runner.go:130] >       ],
	I1204 20:43:13.800524   46101 command_runner.go:130] >       "repoDigests": [
	I1204 20:43:13.800531   46101 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1204 20:43:13.800542   46101 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1204 20:43:13.800548   46101 command_runner.go:130] >       ],
	I1204 20:43:13.800552   46101 command_runner.go:130] >       "size": "92783513",
	I1204 20:43:13.800558   46101 command_runner.go:130] >       "uid": null,
	I1204 20:43:13.800562   46101 command_runner.go:130] >       "username": "",
	I1204 20:43:13.800567   46101 command_runner.go:130] >       "spec": null,
	I1204 20:43:13.800572   46101 command_runner.go:130] >       "pinned": false
	I1204 20:43:13.800578   46101 command_runner.go:130] >     },
	I1204 20:43:13.800581   46101 command_runner.go:130] >     {
	I1204 20:43:13.800589   46101 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1204 20:43:13.800594   46101 command_runner.go:130] >       "repoTags": [
	I1204 20:43:13.800601   46101 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1204 20:43:13.800604   46101 command_runner.go:130] >       ],
	I1204 20:43:13.800609   46101 command_runner.go:130] >       "repoDigests": [
	I1204 20:43:13.800618   46101 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1204 20:43:13.800627   46101 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1204 20:43:13.800634   46101 command_runner.go:130] >       ],
	I1204 20:43:13.800639   46101 command_runner.go:130] >       "size": "68457798",
	I1204 20:43:13.800645   46101 command_runner.go:130] >       "uid": {
	I1204 20:43:13.800649   46101 command_runner.go:130] >         "value": "0"
	I1204 20:43:13.800655   46101 command_runner.go:130] >       },
	I1204 20:43:13.800659   46101 command_runner.go:130] >       "username": "",
	I1204 20:43:13.800665   46101 command_runner.go:130] >       "spec": null,
	I1204 20:43:13.800669   46101 command_runner.go:130] >       "pinned": false
	I1204 20:43:13.800675   46101 command_runner.go:130] >     },
	I1204 20:43:13.800678   46101 command_runner.go:130] >     {
	I1204 20:43:13.800684   46101 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1204 20:43:13.800691   46101 command_runner.go:130] >       "repoTags": [
	I1204 20:43:13.800695   46101 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1204 20:43:13.800701   46101 command_runner.go:130] >       ],
	I1204 20:43:13.800705   46101 command_runner.go:130] >       "repoDigests": [
	I1204 20:43:13.800714   46101 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1204 20:43:13.800722   46101 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1204 20:43:13.800728   46101 command_runner.go:130] >       ],
	I1204 20:43:13.800732   46101 command_runner.go:130] >       "size": "742080",
	I1204 20:43:13.800739   46101 command_runner.go:130] >       "uid": {
	I1204 20:43:13.800742   46101 command_runner.go:130] >         "value": "65535"
	I1204 20:43:13.800748   46101 command_runner.go:130] >       },
	I1204 20:43:13.800752   46101 command_runner.go:130] >       "username": "",
	I1204 20:43:13.800758   46101 command_runner.go:130] >       "spec": null,
	I1204 20:43:13.800762   46101 command_runner.go:130] >       "pinned": true
	I1204 20:43:13.800765   46101 command_runner.go:130] >     }
	I1204 20:43:13.800771   46101 command_runner.go:130] >   ]
	I1204 20:43:13.800774   46101 command_runner.go:130] > }
	I1204 20:43:13.800884   46101 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 20:43:13.800895   46101 cache_images.go:84] Images are preloaded, skipping loading
	I1204 20:43:13.800902   46101 kubeadm.go:934] updating node { 192.168.39.127 8443 v1.31.2 crio true true} ...
	I1204 20:43:13.800990   46101 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-980367 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.127
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:multinode-980367 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 20:43:13.801065   46101 ssh_runner.go:195] Run: crio config
	I1204 20:43:13.840352   46101 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1204 20:43:13.840380   46101 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1204 20:43:13.840387   46101 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1204 20:43:13.840390   46101 command_runner.go:130] > #
	I1204 20:43:13.840397   46101 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1204 20:43:13.840403   46101 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1204 20:43:13.840409   46101 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1204 20:43:13.840416   46101 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1204 20:43:13.840419   46101 command_runner.go:130] > # reload'.
	I1204 20:43:13.840425   46101 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1204 20:43:13.840432   46101 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1204 20:43:13.840447   46101 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1204 20:43:13.840455   46101 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1204 20:43:13.840461   46101 command_runner.go:130] > [crio]
	I1204 20:43:13.840470   46101 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1204 20:43:13.840479   46101 command_runner.go:130] > # containers images, in this directory.
	I1204 20:43:13.840487   46101 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1204 20:43:13.840512   46101 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1204 20:43:13.840598   46101 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1204 20:43:13.840622   46101 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1204 20:43:13.840830   46101 command_runner.go:130] > # imagestore = ""
	I1204 20:43:13.840848   46101 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1204 20:43:13.840857   46101 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1204 20:43:13.840985   46101 command_runner.go:130] > storage_driver = "overlay"
	I1204 20:43:13.841009   46101 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1204 20:43:13.841020   46101 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1204 20:43:13.841028   46101 command_runner.go:130] > storage_option = [
	I1204 20:43:13.841146   46101 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1204 20:43:13.841191   46101 command_runner.go:130] > ]
	I1204 20:43:13.841207   46101 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1204 20:43:13.841235   46101 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1204 20:43:13.841576   46101 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1204 20:43:13.841592   46101 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1204 20:43:13.841602   46101 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1204 20:43:13.841609   46101 command_runner.go:130] > # always happen on a node reboot
	I1204 20:43:13.841989   46101 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1204 20:43:13.842014   46101 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1204 20:43:13.842024   46101 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1204 20:43:13.842031   46101 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1204 20:43:13.842113   46101 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1204 20:43:13.842124   46101 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1204 20:43:13.842135   46101 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1204 20:43:13.842334   46101 command_runner.go:130] > # internal_wipe = true
	I1204 20:43:13.842346   46101 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1204 20:43:13.842351   46101 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1204 20:43:13.842602   46101 command_runner.go:130] > # internal_repair = false
	I1204 20:43:13.842612   46101 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1204 20:43:13.842618   46101 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1204 20:43:13.842623   46101 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1204 20:43:13.842912   46101 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1204 20:43:13.842922   46101 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1204 20:43:13.842926   46101 command_runner.go:130] > [crio.api]
	I1204 20:43:13.842931   46101 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1204 20:43:13.843140   46101 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1204 20:43:13.843150   46101 command_runner.go:130] > # IP address on which the stream server will listen.
	I1204 20:43:13.843400   46101 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1204 20:43:13.843418   46101 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1204 20:43:13.843426   46101 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1204 20:43:13.843655   46101 command_runner.go:130] > # stream_port = "0"
	I1204 20:43:13.843665   46101 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1204 20:43:13.843970   46101 command_runner.go:130] > # stream_enable_tls = false
	I1204 20:43:13.843980   46101 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1204 20:43:13.844169   46101 command_runner.go:130] > # stream_idle_timeout = ""
	I1204 20:43:13.844191   46101 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1204 20:43:13.844201   46101 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1204 20:43:13.844207   46101 command_runner.go:130] > # minutes.
	I1204 20:43:13.844376   46101 command_runner.go:130] > # stream_tls_cert = ""
	I1204 20:43:13.844394   46101 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1204 20:43:13.844400   46101 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1204 20:43:13.844569   46101 command_runner.go:130] > # stream_tls_key = ""
	I1204 20:43:13.844588   46101 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1204 20:43:13.844598   46101 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1204 20:43:13.844615   46101 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1204 20:43:13.845014   46101 command_runner.go:130] > # stream_tls_ca = ""
	I1204 20:43:13.845038   46101 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1204 20:43:13.845047   46101 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1204 20:43:13.845057   46101 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1204 20:43:13.845065   46101 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1204 20:43:13.845075   46101 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1204 20:43:13.845085   46101 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1204 20:43:13.845094   46101 command_runner.go:130] > [crio.runtime]
	I1204 20:43:13.845104   46101 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1204 20:43:13.845116   46101 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1204 20:43:13.845123   46101 command_runner.go:130] > # "nofile=1024:2048"
	I1204 20:43:13.845134   46101 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1204 20:43:13.845144   46101 command_runner.go:130] > # default_ulimits = [
	I1204 20:43:13.845148   46101 command_runner.go:130] > # ]
	I1204 20:43:13.845156   46101 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1204 20:43:13.845161   46101 command_runner.go:130] > # no_pivot = false
	I1204 20:43:13.845169   46101 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1204 20:43:13.845179   46101 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1204 20:43:13.845187   46101 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1204 20:43:13.845200   46101 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1204 20:43:13.845215   46101 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1204 20:43:13.845229   46101 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1204 20:43:13.845240   46101 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1204 20:43:13.845252   46101 command_runner.go:130] > # Cgroup setting for conmon
	I1204 20:43:13.845269   46101 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1204 20:43:13.845279   46101 command_runner.go:130] > conmon_cgroup = "pod"
	I1204 20:43:13.845290   46101 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1204 20:43:13.845304   46101 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1204 20:43:13.845317   46101 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1204 20:43:13.845328   46101 command_runner.go:130] > conmon_env = [
	I1204 20:43:13.845337   46101 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1204 20:43:13.845355   46101 command_runner.go:130] > ]
	I1204 20:43:13.845367   46101 command_runner.go:130] > # Additional environment variables to set for all the
	I1204 20:43:13.845379   46101 command_runner.go:130] > # containers. These are overridden if set in the
	I1204 20:43:13.845391   46101 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1204 20:43:13.845399   46101 command_runner.go:130] > # default_env = [
	I1204 20:43:13.845416   46101 command_runner.go:130] > # ]
	I1204 20:43:13.845426   46101 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1204 20:43:13.845440   46101 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1204 20:43:13.845450   46101 command_runner.go:130] > # selinux = false
	I1204 20:43:13.845461   46101 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1204 20:43:13.845474   46101 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1204 20:43:13.845487   46101 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1204 20:43:13.845497   46101 command_runner.go:130] > # seccomp_profile = ""
	I1204 20:43:13.845506   46101 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1204 20:43:13.845519   46101 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1204 20:43:13.845532   46101 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1204 20:43:13.845543   46101 command_runner.go:130] > # which might increase security.
	I1204 20:43:13.845552   46101 command_runner.go:130] > # This option is currently deprecated,
	I1204 20:43:13.845564   46101 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1204 20:43:13.845580   46101 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1204 20:43:13.845593   46101 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1204 20:43:13.845604   46101 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1204 20:43:13.845619   46101 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1204 20:43:13.845632   46101 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1204 20:43:13.845644   46101 command_runner.go:130] > # This option supports live configuration reload.
	I1204 20:43:13.845655   46101 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1204 20:43:13.845668   46101 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1204 20:43:13.845679   46101 command_runner.go:130] > # the cgroup blockio controller.
	I1204 20:43:13.845689   46101 command_runner.go:130] > # blockio_config_file = ""
	I1204 20:43:13.845700   46101 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1204 20:43:13.845710   46101 command_runner.go:130] > # blockio parameters.
	I1204 20:43:13.845717   46101 command_runner.go:130] > # blockio_reload = false
	I1204 20:43:13.845732   46101 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1204 20:43:13.845741   46101 command_runner.go:130] > # irqbalance daemon.
	I1204 20:43:13.845750   46101 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1204 20:43:13.845763   46101 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1204 20:43:13.845774   46101 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1204 20:43:13.845787   46101 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1204 20:43:13.845804   46101 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1204 20:43:13.845818   46101 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1204 20:43:13.845829   46101 command_runner.go:130] > # This option supports live configuration reload.
	I1204 20:43:13.845840   46101 command_runner.go:130] > # rdt_config_file = ""
	I1204 20:43:13.845852   46101 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1204 20:43:13.845862   46101 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1204 20:43:13.845888   46101 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1204 20:43:13.845898   46101 command_runner.go:130] > # separate_pull_cgroup = ""
	I1204 20:43:13.845909   46101 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1204 20:43:13.845922   46101 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1204 20:43:13.845932   46101 command_runner.go:130] > # will be added.
	I1204 20:43:13.845938   46101 command_runner.go:130] > # default_capabilities = [
	I1204 20:43:13.845952   46101 command_runner.go:130] > # 	"CHOWN",
	I1204 20:43:13.845961   46101 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1204 20:43:13.845967   46101 command_runner.go:130] > # 	"FSETID",
	I1204 20:43:13.845974   46101 command_runner.go:130] > # 	"FOWNER",
	I1204 20:43:13.845983   46101 command_runner.go:130] > # 	"SETGID",
	I1204 20:43:13.845991   46101 command_runner.go:130] > # 	"SETUID",
	I1204 20:43:13.846001   46101 command_runner.go:130] > # 	"SETPCAP",
	I1204 20:43:13.846008   46101 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1204 20:43:13.846017   46101 command_runner.go:130] > # 	"KILL",
	I1204 20:43:13.846022   46101 command_runner.go:130] > # ]
	I1204 20:43:13.846038   46101 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1204 20:43:13.846054   46101 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1204 20:43:13.846066   46101 command_runner.go:130] > # add_inheritable_capabilities = false
	I1204 20:43:13.846082   46101 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1204 20:43:13.846096   46101 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1204 20:43:13.846102   46101 command_runner.go:130] > default_sysctls = [
	I1204 20:43:13.846114   46101 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1204 20:43:13.846121   46101 command_runner.go:130] > ]
	I1204 20:43:13.846129   46101 command_runner.go:130] > # List of devices on the host that a
	I1204 20:43:13.846143   46101 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1204 20:43:13.846150   46101 command_runner.go:130] > # allowed_devices = [
	I1204 20:43:13.846160   46101 command_runner.go:130] > # 	"/dev/fuse",
	I1204 20:43:13.846165   46101 command_runner.go:130] > # ]
	I1204 20:43:13.846173   46101 command_runner.go:130] > # List of additional devices. specified as
	I1204 20:43:13.846188   46101 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1204 20:43:13.846200   46101 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1204 20:43:13.846209   46101 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1204 20:43:13.846220   46101 command_runner.go:130] > # additional_devices = [
	I1204 20:43:13.846225   46101 command_runner.go:130] > # ]
	I1204 20:43:13.846237   46101 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1204 20:43:13.846256   46101 command_runner.go:130] > # cdi_spec_dirs = [
	I1204 20:43:13.846266   46101 command_runner.go:130] > # 	"/etc/cdi",
	I1204 20:43:13.846273   46101 command_runner.go:130] > # 	"/var/run/cdi",
	I1204 20:43:13.846282   46101 command_runner.go:130] > # ]
	I1204 20:43:13.846292   46101 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1204 20:43:13.846305   46101 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1204 20:43:13.846311   46101 command_runner.go:130] > # Defaults to false.
	I1204 20:43:13.846323   46101 command_runner.go:130] > # device_ownership_from_security_context = false
	I1204 20:43:13.846336   46101 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1204 20:43:13.846349   46101 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1204 20:43:13.846359   46101 command_runner.go:130] > # hooks_dir = [
	I1204 20:43:13.846366   46101 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1204 20:43:13.846375   46101 command_runner.go:130] > # ]
	I1204 20:43:13.846385   46101 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1204 20:43:13.846398   46101 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1204 20:43:13.846413   46101 command_runner.go:130] > # its default mounts from the following two files:
	I1204 20:43:13.846418   46101 command_runner.go:130] > #
	I1204 20:43:13.846433   46101 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1204 20:43:13.846446   46101 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1204 20:43:13.846459   46101 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1204 20:43:13.846467   46101 command_runner.go:130] > #
	I1204 20:43:13.846479   46101 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1204 20:43:13.846493   46101 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1204 20:43:13.846507   46101 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1204 20:43:13.846519   46101 command_runner.go:130] > #      only add mounts it finds in this file.
	I1204 20:43:13.846527   46101 command_runner.go:130] > #
	I1204 20:43:13.846535   46101 command_runner.go:130] > # default_mounts_file = ""
	I1204 20:43:13.846546   46101 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1204 20:43:13.846560   46101 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1204 20:43:13.846571   46101 command_runner.go:130] > pids_limit = 1024
	I1204 20:43:13.846582   46101 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1204 20:43:13.846595   46101 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1204 20:43:13.846609   46101 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1204 20:43:13.846626   46101 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1204 20:43:13.846636   46101 command_runner.go:130] > # log_size_max = -1
	I1204 20:43:13.846647   46101 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1204 20:43:13.846658   46101 command_runner.go:130] > # log_to_journald = false
	I1204 20:43:13.846668   46101 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1204 20:43:13.846679   46101 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1204 20:43:13.846695   46101 command_runner.go:130] > # Path to directory for container attach sockets.
	I1204 20:43:13.846708   46101 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1204 20:43:13.846719   46101 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1204 20:43:13.846730   46101 command_runner.go:130] > # bind_mount_prefix = ""
	I1204 20:43:13.846742   46101 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1204 20:43:13.846752   46101 command_runner.go:130] > # read_only = false
	I1204 20:43:13.846765   46101 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1204 20:43:13.846779   46101 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1204 20:43:13.846789   46101 command_runner.go:130] > # live configuration reload.
	I1204 20:43:13.846795   46101 command_runner.go:130] > # log_level = "info"
	I1204 20:43:13.846807   46101 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1204 20:43:13.846818   46101 command_runner.go:130] > # This option supports live configuration reload.
	I1204 20:43:13.846828   46101 command_runner.go:130] > # log_filter = ""
	I1204 20:43:13.846839   46101 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1204 20:43:13.846854   46101 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1204 20:43:13.846864   46101 command_runner.go:130] > # separated by comma.
	I1204 20:43:13.846876   46101 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1204 20:43:13.846886   46101 command_runner.go:130] > # uid_mappings = ""
	I1204 20:43:13.846896   46101 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1204 20:43:13.846908   46101 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1204 20:43:13.846917   46101 command_runner.go:130] > # separated by comma.
	I1204 20:43:13.846932   46101 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1204 20:43:13.846942   46101 command_runner.go:130] > # gid_mappings = ""
	I1204 20:43:13.846952   46101 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1204 20:43:13.846966   46101 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1204 20:43:13.846983   46101 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1204 20:43:13.846999   46101 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1204 20:43:13.847006   46101 command_runner.go:130] > # minimum_mappable_uid = -1
	I1204 20:43:13.847019   46101 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1204 20:43:13.847032   46101 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1204 20:43:13.847045   46101 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1204 20:43:13.847060   46101 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1204 20:43:13.847070   46101 command_runner.go:130] > # minimum_mappable_gid = -1
	I1204 20:43:13.847080   46101 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1204 20:43:13.847092   46101 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1204 20:43:13.847103   46101 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1204 20:43:13.847129   46101 command_runner.go:130] > # ctr_stop_timeout = 30
	I1204 20:43:13.847142   46101 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1204 20:43:13.847151   46101 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1204 20:43:13.847162   46101 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1204 20:43:13.847172   46101 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1204 20:43:13.847181   46101 command_runner.go:130] > drop_infra_ctr = false
	I1204 20:43:13.847190   46101 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1204 20:43:13.847197   46101 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1204 20:43:13.847206   46101 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1204 20:43:13.847213   46101 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1204 20:43:13.847220   46101 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1204 20:43:13.847229   46101 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1204 20:43:13.847237   46101 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1204 20:43:13.847244   46101 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1204 20:43:13.847251   46101 command_runner.go:130] > # shared_cpuset = ""
	I1204 20:43:13.847256   46101 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1204 20:43:13.847264   46101 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1204 20:43:13.847267   46101 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1204 20:43:13.847274   46101 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1204 20:43:13.847281   46101 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1204 20:43:13.847286   46101 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1204 20:43:13.847294   46101 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1204 20:43:13.847300   46101 command_runner.go:130] > # enable_criu_support = false
	I1204 20:43:13.847305   46101 command_runner.go:130] > # Enable/disable the generation of the container,
	I1204 20:43:13.847313   46101 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1204 20:43:13.847318   46101 command_runner.go:130] > # enable_pod_events = false
	I1204 20:43:13.847326   46101 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1204 20:43:13.847339   46101 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1204 20:43:13.847346   46101 command_runner.go:130] > # default_runtime = "runc"
	I1204 20:43:13.847352   46101 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1204 20:43:13.847361   46101 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1204 20:43:13.847389   46101 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1204 20:43:13.847407   46101 command_runner.go:130] > # creation as a file is not desired either.
	I1204 20:43:13.847417   46101 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1204 20:43:13.847427   46101 command_runner.go:130] > # the hostname is being managed dynamically.
	I1204 20:43:13.847434   46101 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1204 20:43:13.847437   46101 command_runner.go:130] > # ]
	I1204 20:43:13.847444   46101 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1204 20:43:13.847452   46101 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1204 20:43:13.847461   46101 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1204 20:43:13.847468   46101 command_runner.go:130] > # Each entry in the table should follow the format:
	I1204 20:43:13.847471   46101 command_runner.go:130] > #
	I1204 20:43:13.847478   46101 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1204 20:43:13.847482   46101 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1204 20:43:13.847508   46101 command_runner.go:130] > # runtime_type = "oci"
	I1204 20:43:13.847515   46101 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1204 20:43:13.847520   46101 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1204 20:43:13.847526   46101 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1204 20:43:13.847531   46101 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1204 20:43:13.847537   46101 command_runner.go:130] > # monitor_env = []
	I1204 20:43:13.847542   46101 command_runner.go:130] > # privileged_without_host_devices = false
	I1204 20:43:13.847548   46101 command_runner.go:130] > # allowed_annotations = []
	I1204 20:43:13.847553   46101 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1204 20:43:13.847559   46101 command_runner.go:130] > # Where:
	I1204 20:43:13.847565   46101 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1204 20:43:13.847573   46101 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1204 20:43:13.847581   46101 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1204 20:43:13.847587   46101 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1204 20:43:13.847594   46101 command_runner.go:130] > #   in $PATH.
	I1204 20:43:13.847601   46101 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1204 20:43:13.847608   46101 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1204 20:43:13.847614   46101 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1204 20:43:13.847620   46101 command_runner.go:130] > #   state.
	I1204 20:43:13.847628   46101 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1204 20:43:13.847636   46101 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1204 20:43:13.847645   46101 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1204 20:43:13.847652   46101 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1204 20:43:13.847658   46101 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1204 20:43:13.847666   46101 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1204 20:43:13.847673   46101 command_runner.go:130] > #   The currently recognized values are:
	I1204 20:43:13.847679   46101 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1204 20:43:13.847688   46101 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1204 20:43:13.847699   46101 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1204 20:43:13.847707   46101 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1204 20:43:13.847716   46101 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1204 20:43:13.847725   46101 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1204 20:43:13.847734   46101 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1204 20:43:13.847742   46101 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1204 20:43:13.847748   46101 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1204 20:43:13.847756   46101 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1204 20:43:13.847766   46101 command_runner.go:130] > #   deprecated option "conmon".
	I1204 20:43:13.847781   46101 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1204 20:43:13.847792   46101 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1204 20:43:13.847805   46101 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1204 20:43:13.847816   46101 command_runner.go:130] > #   should be moved to the container's cgroup
	I1204 20:43:13.847831   46101 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1204 20:43:13.847842   46101 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1204 20:43:13.847856   46101 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1204 20:43:13.847867   46101 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1204 20:43:13.847875   46101 command_runner.go:130] > #
	I1204 20:43:13.847882   46101 command_runner.go:130] > # Using the seccomp notifier feature:
	I1204 20:43:13.847888   46101 command_runner.go:130] > #
	I1204 20:43:13.847895   46101 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1204 20:43:13.847904   46101 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1204 20:43:13.847909   46101 command_runner.go:130] > #
	I1204 20:43:13.847915   46101 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1204 20:43:13.847923   46101 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1204 20:43:13.847929   46101 command_runner.go:130] > #
	I1204 20:43:13.847937   46101 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1204 20:43:13.847943   46101 command_runner.go:130] > # feature.
	I1204 20:43:13.847946   46101 command_runner.go:130] > #
	I1204 20:43:13.847955   46101 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1204 20:43:13.847964   46101 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1204 20:43:13.847970   46101 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1204 20:43:13.847979   46101 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1204 20:43:13.847987   46101 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1204 20:43:13.847993   46101 command_runner.go:130] > #
	I1204 20:43:13.847999   46101 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1204 20:43:13.848010   46101 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1204 20:43:13.848017   46101 command_runner.go:130] > #
	I1204 20:43:13.848027   46101 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1204 20:43:13.848035   46101 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1204 20:43:13.848041   46101 command_runner.go:130] > #
	I1204 20:43:13.848049   46101 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1204 20:43:13.848061   46101 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1204 20:43:13.848070   46101 command_runner.go:130] > # limitation.
	I1204 20:43:13.848080   46101 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1204 20:43:13.848086   46101 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1204 20:43:13.848095   46101 command_runner.go:130] > runtime_type = "oci"
	I1204 20:43:13.848105   46101 command_runner.go:130] > runtime_root = "/run/runc"
	I1204 20:43:13.848113   46101 command_runner.go:130] > runtime_config_path = ""
	I1204 20:43:13.848120   46101 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1204 20:43:13.848124   46101 command_runner.go:130] > monitor_cgroup = "pod"
	I1204 20:43:13.848132   46101 command_runner.go:130] > monitor_exec_cgroup = ""
	I1204 20:43:13.848136   46101 command_runner.go:130] > monitor_env = [
	I1204 20:43:13.848142   46101 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1204 20:43:13.848145   46101 command_runner.go:130] > ]
	I1204 20:43:13.848153   46101 command_runner.go:130] > privileged_without_host_devices = false
	I1204 20:43:13.848160   46101 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1204 20:43:13.848168   46101 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1204 20:43:13.848175   46101 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1204 20:43:13.848182   46101 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1204 20:43:13.848194   46101 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1204 20:43:13.848201   46101 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1204 20:43:13.848212   46101 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1204 20:43:13.848222   46101 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1204 20:43:13.848230   46101 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1204 20:43:13.848236   46101 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1204 20:43:13.848240   46101 command_runner.go:130] > # Example:
	I1204 20:43:13.848249   46101 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1204 20:43:13.848254   46101 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1204 20:43:13.848258   46101 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1204 20:43:13.848263   46101 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1204 20:43:13.848267   46101 command_runner.go:130] > # cpuset = 0
	I1204 20:43:13.848270   46101 command_runner.go:130] > # cpushares = "0-1"
	I1204 20:43:13.848274   46101 command_runner.go:130] > # Where:
	I1204 20:43:13.848281   46101 command_runner.go:130] > # The workload name is workload-type.
	I1204 20:43:13.848288   46101 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1204 20:43:13.848293   46101 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1204 20:43:13.848298   46101 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1204 20:43:13.848305   46101 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1204 20:43:13.848310   46101 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1204 20:43:13.848314   46101 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1204 20:43:13.848321   46101 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1204 20:43:13.848327   46101 command_runner.go:130] > # Default value is set to true
	I1204 20:43:13.848332   46101 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1204 20:43:13.848337   46101 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1204 20:43:13.848343   46101 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1204 20:43:13.848347   46101 command_runner.go:130] > # Default value is set to 'false'
	I1204 20:43:13.848354   46101 command_runner.go:130] > # disable_hostport_mapping = false
	I1204 20:43:13.848360   46101 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1204 20:43:13.848365   46101 command_runner.go:130] > #
	I1204 20:43:13.848371   46101 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1204 20:43:13.848379   46101 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1204 20:43:13.848388   46101 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1204 20:43:13.848394   46101 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1204 20:43:13.848406   46101 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1204 20:43:13.848412   46101 command_runner.go:130] > [crio.image]
	I1204 20:43:13.848418   46101 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1204 20:43:13.848425   46101 command_runner.go:130] > # default_transport = "docker://"
	I1204 20:43:13.848431   46101 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1204 20:43:13.848439   46101 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1204 20:43:13.848445   46101 command_runner.go:130] > # global_auth_file = ""
	I1204 20:43:13.848450   46101 command_runner.go:130] > # The image used to instantiate infra containers.
	I1204 20:43:13.848457   46101 command_runner.go:130] > # This option supports live configuration reload.
	I1204 20:43:13.848461   46101 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1204 20:43:13.848470   46101 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1204 20:43:13.848477   46101 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1204 20:43:13.848482   46101 command_runner.go:130] > # This option supports live configuration reload.
	I1204 20:43:13.848489   46101 command_runner.go:130] > # pause_image_auth_file = ""
	I1204 20:43:13.848495   46101 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1204 20:43:13.848503   46101 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1204 20:43:13.848515   46101 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1204 20:43:13.848527   46101 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1204 20:43:13.848537   46101 command_runner.go:130] > # pause_command = "/pause"
	I1204 20:43:13.848548   46101 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1204 20:43:13.848560   46101 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1204 20:43:13.848571   46101 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1204 20:43:13.848583   46101 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1204 20:43:13.848595   46101 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1204 20:43:13.848607   46101 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1204 20:43:13.848617   46101 command_runner.go:130] > # pinned_images = [
	I1204 20:43:13.848625   46101 command_runner.go:130] > # ]
	I1204 20:43:13.848635   46101 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1204 20:43:13.848647   46101 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1204 20:43:13.848660   46101 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1204 20:43:13.848675   46101 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1204 20:43:13.848686   46101 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1204 20:43:13.848697   46101 command_runner.go:130] > # signature_policy = ""
	I1204 20:43:13.848705   46101 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1204 20:43:13.848723   46101 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1204 20:43:13.848738   46101 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1204 20:43:13.848751   46101 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1204 20:43:13.848760   46101 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1204 20:43:13.848768   46101 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1204 20:43:13.848781   46101 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1204 20:43:13.848794   46101 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1204 20:43:13.848804   46101 command_runner.go:130] > # changing them here.
	I1204 20:43:13.848812   46101 command_runner.go:130] > # insecure_registries = [
	I1204 20:43:13.848821   46101 command_runner.go:130] > # ]
	I1204 20:43:13.848831   46101 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1204 20:43:13.848842   46101 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1204 20:43:13.848852   46101 command_runner.go:130] > # image_volumes = "mkdir"
	I1204 20:43:13.848863   46101 command_runner.go:130] > # Temporary directory to use for storing big files
	I1204 20:43:13.848874   46101 command_runner.go:130] > # big_files_temporary_dir = ""
	I1204 20:43:13.848886   46101 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1204 20:43:13.848894   46101 command_runner.go:130] > # CNI plugins.
	I1204 20:43:13.848905   46101 command_runner.go:130] > [crio.network]
	I1204 20:43:13.848918   46101 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1204 20:43:13.848935   46101 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1204 20:43:13.848945   46101 command_runner.go:130] > # cni_default_network = ""
	I1204 20:43:13.848956   46101 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1204 20:43:13.848966   46101 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1204 20:43:13.848978   46101 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1204 20:43:13.848987   46101 command_runner.go:130] > # plugin_dirs = [
	I1204 20:43:13.848993   46101 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1204 20:43:13.848997   46101 command_runner.go:130] > # ]
	I1204 20:43:13.849005   46101 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1204 20:43:13.849009   46101 command_runner.go:130] > [crio.metrics]
	I1204 20:43:13.849016   46101 command_runner.go:130] > # Globally enable or disable metrics support.
	I1204 20:43:13.849022   46101 command_runner.go:130] > enable_metrics = true
	I1204 20:43:13.849027   46101 command_runner.go:130] > # Specify enabled metrics collectors.
	I1204 20:43:13.849032   46101 command_runner.go:130] > # Per default all metrics are enabled.
	I1204 20:43:13.849038   46101 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1204 20:43:13.849047   46101 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1204 20:43:13.849052   46101 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1204 20:43:13.849059   46101 command_runner.go:130] > # metrics_collectors = [
	I1204 20:43:13.849062   46101 command_runner.go:130] > # 	"operations",
	I1204 20:43:13.849067   46101 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1204 20:43:13.849074   46101 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1204 20:43:13.849080   46101 command_runner.go:130] > # 	"operations_errors",
	I1204 20:43:13.849087   46101 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1204 20:43:13.849091   46101 command_runner.go:130] > # 	"image_pulls_by_name",
	I1204 20:43:13.849097   46101 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1204 20:43:13.849102   46101 command_runner.go:130] > # 	"image_pulls_failures",
	I1204 20:43:13.849108   46101 command_runner.go:130] > # 	"image_pulls_successes",
	I1204 20:43:13.849112   46101 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1204 20:43:13.849118   46101 command_runner.go:130] > # 	"image_layer_reuse",
	I1204 20:43:13.849123   46101 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1204 20:43:13.849129   46101 command_runner.go:130] > # 	"containers_oom_total",
	I1204 20:43:13.849133   46101 command_runner.go:130] > # 	"containers_oom",
	I1204 20:43:13.849139   46101 command_runner.go:130] > # 	"processes_defunct",
	I1204 20:43:13.849143   46101 command_runner.go:130] > # 	"operations_total",
	I1204 20:43:13.849147   46101 command_runner.go:130] > # 	"operations_latency_seconds",
	I1204 20:43:13.849155   46101 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1204 20:43:13.849162   46101 command_runner.go:130] > # 	"operations_errors_total",
	I1204 20:43:13.849166   46101 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1204 20:43:13.849173   46101 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1204 20:43:13.849177   46101 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1204 20:43:13.849184   46101 command_runner.go:130] > # 	"image_pulls_success_total",
	I1204 20:43:13.849188   46101 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1204 20:43:13.849194   46101 command_runner.go:130] > # 	"containers_oom_count_total",
	I1204 20:43:13.849200   46101 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1204 20:43:13.849207   46101 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1204 20:43:13.849213   46101 command_runner.go:130] > # ]
	I1204 20:43:13.849220   46101 command_runner.go:130] > # The port on which the metrics server will listen.
	I1204 20:43:13.849224   46101 command_runner.go:130] > # metrics_port = 9090
	I1204 20:43:13.849231   46101 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1204 20:43:13.849235   46101 command_runner.go:130] > # metrics_socket = ""
	I1204 20:43:13.849242   46101 command_runner.go:130] > # The certificate for the secure metrics server.
	I1204 20:43:13.849247   46101 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1204 20:43:13.849255   46101 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1204 20:43:13.849261   46101 command_runner.go:130] > # certificate on any modification event.
	I1204 20:43:13.849265   46101 command_runner.go:130] > # metrics_cert = ""
	I1204 20:43:13.849272   46101 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1204 20:43:13.849277   46101 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1204 20:43:13.849282   46101 command_runner.go:130] > # metrics_key = ""
	I1204 20:43:13.849288   46101 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1204 20:43:13.849294   46101 command_runner.go:130] > [crio.tracing]
	I1204 20:43:13.849300   46101 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1204 20:43:13.849306   46101 command_runner.go:130] > # enable_tracing = false
	I1204 20:43:13.849312   46101 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1204 20:43:13.849319   46101 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1204 20:43:13.849325   46101 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1204 20:43:13.849334   46101 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1204 20:43:13.849340   46101 command_runner.go:130] > # CRI-O NRI configuration.
	I1204 20:43:13.849344   46101 command_runner.go:130] > [crio.nri]
	I1204 20:43:13.849350   46101 command_runner.go:130] > # Globally enable or disable NRI.
	I1204 20:43:13.849354   46101 command_runner.go:130] > # enable_nri = false
	I1204 20:43:13.849361   46101 command_runner.go:130] > # NRI socket to listen on.
	I1204 20:43:13.849365   46101 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1204 20:43:13.849372   46101 command_runner.go:130] > # NRI plugin directory to use.
	I1204 20:43:13.849377   46101 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1204 20:43:13.849384   46101 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1204 20:43:13.849388   46101 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1204 20:43:13.849395   46101 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1204 20:43:13.849407   46101 command_runner.go:130] > # nri_disable_connections = false
	I1204 20:43:13.849414   46101 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1204 20:43:13.849420   46101 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1204 20:43:13.849425   46101 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1204 20:43:13.849432   46101 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1204 20:43:13.849439   46101 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1204 20:43:13.849445   46101 command_runner.go:130] > [crio.stats]
	I1204 20:43:13.849450   46101 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1204 20:43:13.849458   46101 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1204 20:43:13.849462   46101 command_runner.go:130] > # stats_collection_period = 0
	I1204 20:43:13.849484   46101 command_runner.go:130] ! time="2024-12-04 20:43:13.798105321Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1204 20:43:13.849500   46101 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
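	The lines above end the generated CRI-O configuration dump. Purely as an illustration (not part of the test run), a minimal Go sketch using github.com/BurntSushi/toml could decode the same file and read back the two values minikube customizes here, cgroup_manager = "cgroupfs" and pause_image = "registry.k8s.io/pause:3.10"; the /etc/crio/crio.conf path and the struct layout are assumptions for the sketch.

	// Minimal sketch, assuming github.com/BurntSushi/toml is available.
	// It decodes a crio.conf-style file and prints the two values shown
	// in the dump above; path and struct layout are illustrative only.
	package main

	import (
		"fmt"
		"log"

		"github.com/BurntSushi/toml"
	)

	type crioConfig struct {
		Crio struct {
			Runtime struct {
				CgroupManager string `toml:"cgroup_manager"` // expected: "cgroupfs"
			} `toml:"runtime"`
			Image struct {
				PauseImage string `toml:"pause_image"` // expected: "registry.k8s.io/pause:3.10"
			} `toml:"image"`
		} `toml:"crio"`
	}

	func main() {
		var cfg crioConfig
		if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
			log.Fatal(err)
		}
		fmt.Println("cgroup_manager:", cfg.Crio.Runtime.CgroupManager)
		fmt.Println("pause_image:", cfg.Crio.Image.PauseImage)
	}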
	I1204 20:43:13.849600   46101 cni.go:84] Creating CNI manager for ""
	I1204 20:43:13.849613   46101 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1204 20:43:13.849621   46101 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 20:43:13.849641   46101 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.127 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-980367 NodeName:multinode-980367 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.127"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.127 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 20:43:13.849751   46101 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.127
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-980367"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.127"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.127"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
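	The multi-document kubeadm config above is later written to /var/tmp/minikube/kubeadm.yaml.new (see the scp line below). As a hedged, illustrative sketch only, the following Go snippet shows one way to decode such a multi-document YAML with gopkg.in/yaml.v3 and confirm that podSubnet matches the pod CIDR 10.244.0.0/16 logged earlier; the library choice and the file path are assumptions, not something the test itself does.

	// Illustrative sketch: walk the "---" separated documents and print
	// the ClusterConfiguration podSubnet. Assumes gopkg.in/yaml.v3.
	package main

	import (
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		// Example path; this is where the log below copies the generated config.
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f) // iterates over the multi-document stream
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				log.Fatal(err)
			}
			if doc["kind"] == "ClusterConfiguration" {
				net := doc["networking"].(map[string]interface{})
				fmt.Println("podSubnet:", net["podSubnet"]) // expected: 10.244.0.0/16
			}
		}
	}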
	I1204 20:43:13.849824   46101 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 20:43:13.859635   46101 command_runner.go:130] > kubeadm
	I1204 20:43:13.859650   46101 command_runner.go:130] > kubectl
	I1204 20:43:13.859654   46101 command_runner.go:130] > kubelet
	I1204 20:43:13.859670   46101 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 20:43:13.859722   46101 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 20:43:13.868835   46101 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1204 20:43:13.885116   46101 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 20:43:13.900825   46101 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2296 bytes)
	I1204 20:43:13.916894   46101 ssh_runner.go:195] Run: grep 192.168.39.127	control-plane.minikube.internal$ /etc/hosts
	I1204 20:43:13.920665   46101 command_runner.go:130] > 192.168.39.127	control-plane.minikube.internal
	I1204 20:43:13.920729   46101 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 20:43:14.058246   46101 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 20:43:14.073104   46101 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/multinode-980367 for IP: 192.168.39.127
	I1204 20:43:14.073132   46101 certs.go:194] generating shared ca certs ...
	I1204 20:43:14.073152   46101 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:43:14.073337   46101 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 20:43:14.073399   46101 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 20:43:14.073413   46101 certs.go:256] generating profile certs ...
	I1204 20:43:14.073507   46101 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/multinode-980367/client.key
	I1204 20:43:14.073590   46101 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/multinode-980367/apiserver.key.dd041cb4
	I1204 20:43:14.073647   46101 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/multinode-980367/proxy-client.key
	I1204 20:43:14.073660   46101 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1204 20:43:14.073680   46101 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1204 20:43:14.073700   46101 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1204 20:43:14.073723   46101 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1204 20:43:14.073742   46101 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/multinode-980367/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1204 20:43:14.073762   46101 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/multinode-980367/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1204 20:43:14.073782   46101 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/multinode-980367/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1204 20:43:14.073813   46101 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/multinode-980367/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1204 20:43:14.073882   46101 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 20:43:14.073923   46101 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 20:43:14.073940   46101 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 20:43:14.073974   46101 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 20:43:14.074007   46101 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 20:43:14.074039   46101 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 20:43:14.074095   46101 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 20:43:14.074134   46101 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> /usr/share/ca-certificates/177432.pem
	I1204 20:43:14.074158   46101 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:43:14.074184   46101 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem -> /usr/share/ca-certificates/17743.pem
	I1204 20:43:14.074782   46101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 20:43:14.101173   46101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 20:43:14.132530   46101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 20:43:14.155556   46101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 20:43:14.178369   46101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/multinode-980367/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1204 20:43:14.200837   46101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/multinode-980367/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 20:43:14.223065   46101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/multinode-980367/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 20:43:14.245554   46101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/multinode-980367/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 20:43:14.266657   46101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 20:43:14.288083   46101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 20:43:14.310581   46101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 20:43:14.331475   46101 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 20:43:14.346690   46101 ssh_runner.go:195] Run: openssl version
	I1204 20:43:14.352034   46101 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1204 20:43:14.352119   46101 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 20:43:14.361484   46101 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 20:43:14.365420   46101 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 20:43:14.365443   46101 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 20:43:14.365473   46101 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 20:43:14.370475   46101 command_runner.go:130] > 51391683
	I1204 20:43:14.370517   46101 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 20:43:14.378666   46101 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 20:43:14.388081   46101 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 20:43:14.391950   46101 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 20:43:14.392007   46101 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 20:43:14.392044   46101 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 20:43:14.396863   46101 command_runner.go:130] > 3ec20f2e
	I1204 20:43:14.397064   46101 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 20:43:14.405264   46101 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 20:43:14.414656   46101 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:43:14.418585   46101 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:43:14.418648   46101 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:43:14.418687   46101 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:43:14.423871   46101 command_runner.go:130] > b5213941
	I1204 20:43:14.423923   46101 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
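The three rounds above compute an OpenSSL subject hash for each installed PEM and then create a <hash>.0 symlink under /etc/ssl/certs; this is the standard OpenSSL hashed-directory convention for making a CA file discoverable by the system trust store. Below is a minimal Go sketch of the same two commands run locally (minikube issues them over its ssh_runner); the certificate path and the use of sudo are illustrative assumptions, not minikube's code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hashAndLink reproduces the two shell steps shown in the log above:
// compute the OpenSSL subject hash of a certificate, then expose it to
// the trust store as /etc/ssl/certs/<hash>.0.
func hashAndLink(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// ln -fs replaces any existing link, mirroring the command in the log.
	return exec.Command("sudo", "ln", "-fs", certPath, link).Run()
}

func main() {
	// Illustrative path; on the minikube node this is one of the copied PEMs.
	if err := hashAndLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("error:", err)
	}
}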
	I1204 20:43:14.432360   46101 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 20:43:14.436314   46101 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 20:43:14.436330   46101 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1204 20:43:14.436336   46101 command_runner.go:130] > Device: 253,1	Inode: 8385582     Links: 1
	I1204 20:43:14.436342   46101 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1204 20:43:14.436351   46101 command_runner.go:130] > Access: 2024-12-04 20:36:21.503818560 +0000
	I1204 20:43:14.436355   46101 command_runner.go:130] > Modify: 2024-12-04 20:36:21.503818560 +0000
	I1204 20:43:14.436360   46101 command_runner.go:130] > Change: 2024-12-04 20:36:21.503818560 +0000
	I1204 20:43:14.436367   46101 command_runner.go:130] >  Birth: 2024-12-04 20:36:21.503818560 +0000
	I1204 20:43:14.436503   46101 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 20:43:14.441876   46101 command_runner.go:130] > Certificate will not expire
	I1204 20:43:14.441922   46101 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 20:43:14.446947   46101 command_runner.go:130] > Certificate will not expire
	I1204 20:43:14.447244   46101 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 20:43:14.452238   46101 command_runner.go:130] > Certificate will not expire
	I1204 20:43:14.452285   46101 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 20:43:14.457190   46101 command_runner.go:130] > Certificate will not expire
	I1204 20:43:14.457243   46101 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 20:43:14.462221   46101 command_runner.go:130] > Certificate will not expire
	I1204 20:43:14.462276   46101 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1204 20:43:14.467275   46101 command_runner.go:130] > Certificate will not expire
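Each expiry check above shells out to `openssl x509 -checkend 86400`, which prints "Certificate will not expire" (and exits 0) unless the certificate expires within the next 24 hours. The sketch below is an equivalent check with Go's standard library, not minikube's implementation (which keeps using openssl); the certificate path is illustrative.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires inside the given window, mirroring `openssl x509 -checkend`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return !cert.NotAfter.After(time.Now().Add(window)), nil
}

func main() {
	// Illustrative path; the log checks several certs under /var/lib/minikube/certs.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}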
	I1204 20:43:14.467323   46101 kubeadm.go:392] StartCluster: {Name:multinode-980367 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
2 ClusterName:multinode-980367 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.127 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.76 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.210 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-
dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 20:43:14.467469   46101 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 20:43:14.467528   46101 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 20:43:14.503140   46101 command_runner.go:130] > 077fccc1f632ca852f24db7b5953f09de1c43112bb437fa5bdd91ac2daa9bee0
	I1204 20:43:14.503166   46101 command_runner.go:130] > 5d654f9cdac10f47aeef6df7485cd8c7f1f6d5a8c76ccbe1e687dd980c39491d
	I1204 20:43:14.503177   46101 command_runner.go:130] > efa8b788446b5387a9837090d5a65fd1bc71871437e3db37b5bc2dd2d5922f87
	I1204 20:43:14.503193   46101 command_runner.go:130] > a82c6aaac37b0760734674325ad3191b0c69fafe3d652d39ecdec503e8f0dc99
	I1204 20:43:14.503204   46101 command_runner.go:130] > f14af9f1a148e7fd69cd047e47b95ff063322bec1bb9165e0e459475e160ed15
	I1204 20:43:14.503214   46101 command_runner.go:130] > 84f732075a321ebc600dd924ae96154ee708dbe3e7cdfc210086ac47b367cac4
	I1204 20:43:14.503224   46101 command_runner.go:130] > 711a96b7c814bede7e66aff6b57ea4b2aa827e45996ec59e5e9eae96fad83860
	I1204 20:43:14.503240   46101 command_runner.go:130] > 70c10cf60e07a7c5402ef2f6d04b1a921902d8c8b070391a06d3fc3c14ce1a69
	I1204 20:43:14.503250   46101 command_runner.go:130] > 74ed511efd3429f00bdb97c64fcbb18681ed16d20976ffd0ec07c5c9f0406611
	I1204 20:43:14.503274   46101 cri.go:89] found id: "077fccc1f632ca852f24db7b5953f09de1c43112bb437fa5bdd91ac2daa9bee0"
	I1204 20:43:14.503287   46101 cri.go:89] found id: "5d654f9cdac10f47aeef6df7485cd8c7f1f6d5a8c76ccbe1e687dd980c39491d"
	I1204 20:43:14.503295   46101 cri.go:89] found id: "efa8b788446b5387a9837090d5a65fd1bc71871437e3db37b5bc2dd2d5922f87"
	I1204 20:43:14.503301   46101 cri.go:89] found id: "a82c6aaac37b0760734674325ad3191b0c69fafe3d652d39ecdec503e8f0dc99"
	I1204 20:43:14.503308   46101 cri.go:89] found id: "f14af9f1a148e7fd69cd047e47b95ff063322bec1bb9165e0e459475e160ed15"
	I1204 20:43:14.503313   46101 cri.go:89] found id: "84f732075a321ebc600dd924ae96154ee708dbe3e7cdfc210086ac47b367cac4"
	I1204 20:43:14.503320   46101 cri.go:89] found id: "711a96b7c814bede7e66aff6b57ea4b2aa827e45996ec59e5e9eae96fad83860"
	I1204 20:43:14.503325   46101 cri.go:89] found id: "70c10cf60e07a7c5402ef2f6d04b1a921902d8c8b070391a06d3fc3c14ce1a69"
	I1204 20:43:14.503332   46101 cri.go:89] found id: "74ed511efd3429f00bdb97c64fcbb18681ed16d20976ffd0ec07c5c9f0406611"
	I1204 20:43:14.503341   46101 cri.go:89] found id: ""
	I1204 20:43:14.503405   46101 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
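The log above is truncated shortly after minikube enumerates the kube-system containers by asking the CRI runtime for their IDs (and then moves on to inspect them with runc). A minimal Go sketch of that listing step follows; it assumes crictl and sudo are available on the node, and only the command and label value are taken from the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers mirrors the crictl call in the log: ask the CRI
// runtime for every container ID (running or not) whose pod namespace label
// is kube-system, one ID per line.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}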
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-980367 -n multinode-980367
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-980367 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (332.11s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (144.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 stop
E1204 20:45:29.343163   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/functional-763517/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-980367 stop: exit status 82 (2m0.455762928s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-980367-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-980367 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 status
multinode_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p multinode-980367 status: (18.718208011s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-linux-amd64 -p multinode-980367 status --alsologtostderr: (3.359416925s)
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-linux-amd64 -p multinode-980367 status --alsologtostderr": 
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-linux-amd64 -p multinode-980367 status --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-980367 -n multinode-980367
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 logs -n 25
E1204 20:47:26.275725   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/functional-763517/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-980367 logs -n 25: (1.896935257s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-980367 ssh -n                                                                 | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:38 UTC |
	|         | multinode-980367-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-980367 cp multinode-980367-m02:/home/docker/cp-test.txt                       | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:38 UTC |
	|         | multinode-980367:/home/docker/cp-test_multinode-980367-m02_multinode-980367.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-980367 ssh -n                                                                 | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:38 UTC |
	|         | multinode-980367-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-980367 ssh -n multinode-980367 sudo cat                                       | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:38 UTC |
	|         | /home/docker/cp-test_multinode-980367-m02_multinode-980367.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-980367 cp multinode-980367-m02:/home/docker/cp-test.txt                       | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:38 UTC |
	|         | multinode-980367-m03:/home/docker/cp-test_multinode-980367-m02_multinode-980367-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-980367 ssh -n                                                                 | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:38 UTC |
	|         | multinode-980367-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-980367 ssh -n multinode-980367-m03 sudo cat                                   | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:38 UTC |
	|         | /home/docker/cp-test_multinode-980367-m02_multinode-980367-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-980367 cp testdata/cp-test.txt                                                | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:38 UTC |
	|         | multinode-980367-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-980367 ssh -n                                                                 | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:38 UTC |
	|         | multinode-980367-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-980367 cp multinode-980367-m03:/home/docker/cp-test.txt                       | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:38 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile171462700/001/cp-test_multinode-980367-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-980367 ssh -n                                                                 | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:38 UTC |
	|         | multinode-980367-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-980367 cp multinode-980367-m03:/home/docker/cp-test.txt                       | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:38 UTC |
	|         | multinode-980367:/home/docker/cp-test_multinode-980367-m03_multinode-980367.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-980367 ssh -n                                                                 | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:38 UTC |
	|         | multinode-980367-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-980367 ssh -n multinode-980367 sudo cat                                       | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:38 UTC |
	|         | /home/docker/cp-test_multinode-980367-m03_multinode-980367.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-980367 cp multinode-980367-m03:/home/docker/cp-test.txt                       | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:38 UTC |
	|         | multinode-980367-m02:/home/docker/cp-test_multinode-980367-m03_multinode-980367-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-980367 ssh -n                                                                 | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:38 UTC |
	|         | multinode-980367-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-980367 ssh -n multinode-980367-m02 sudo cat                                   | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:38 UTC |
	|         | /home/docker/cp-test_multinode-980367-m03_multinode-980367-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-980367 node stop m03                                                          | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:38 UTC |
	| node    | multinode-980367 node start                                                             | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:39 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-980367                                                                | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:39 UTC |                     |
	| stop    | -p multinode-980367                                                                     | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:39 UTC |                     |
	| start   | -p multinode-980367                                                                     | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:41 UTC | 04 Dec 24 20:44 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-980367                                                                | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:44 UTC |                     |
	| node    | multinode-980367 node delete                                                            | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:45 UTC | 04 Dec 24 20:45 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-980367 stop                                                                   | multinode-980367 | jenkins | v1.34.0 | 04 Dec 24 20:45 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 20:41:30
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
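The "Log line format" header describes the klog-style prefix used on every line that follows. The short parser below is an assumed helper for post-processing such logs, not anything minikube ships; the regular expression is illustrative.

package main

import (
	"fmt"
	"regexp"
)

// klogLine matches "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg"
// as documented in the header above. The pattern is illustrative.
var klogLine = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+):(\d+)\] (.*)$`)

func main() {
	sample := "I1204 20:41:30.364118   46101 out.go:345] Setting OutFile to fd 1 ..."
	m := klogLine.FindStringSubmatch(sample)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s thread=%s source=%s:%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6], m[7])
}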
	I1204 20:41:30.364118   46101 out.go:345] Setting OutFile to fd 1 ...
	I1204 20:41:30.364252   46101 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 20:41:30.364263   46101 out.go:358] Setting ErrFile to fd 2...
	I1204 20:41:30.364269   46101 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 20:41:30.364467   46101 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19985-10581/.minikube/bin
	I1204 20:41:30.364971   46101 out.go:352] Setting JSON to false
	I1204 20:41:30.365852   46101 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5040,"bootTime":1733339850,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 20:41:30.365948   46101 start.go:139] virtualization: kvm guest
	I1204 20:41:30.368749   46101 out.go:177] * [multinode-980367] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1204 20:41:30.370398   46101 notify.go:220] Checking for updates...
	I1204 20:41:30.370408   46101 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 20:41:30.371620   46101 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 20:41:30.373289   46101 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 20:41:30.374932   46101 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 20:41:30.376013   46101 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1204 20:41:30.377128   46101 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 20:41:30.378536   46101 config.go:182] Loaded profile config "multinode-980367": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:41:30.378618   46101 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 20:41:30.379037   46101 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:41:30.379087   46101 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:41:30.393622   46101 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41061
	I1204 20:41:30.394012   46101 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:41:30.394565   46101 main.go:141] libmachine: Using API Version  1
	I1204 20:41:30.394592   46101 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:41:30.394926   46101 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:41:30.395110   46101 main.go:141] libmachine: (multinode-980367) Calling .DriverName
	I1204 20:41:30.427656   46101 out.go:177] * Using the kvm2 driver based on existing profile
	I1204 20:41:30.428999   46101 start.go:297] selected driver: kvm2
	I1204 20:41:30.429012   46101 start.go:901] validating driver "kvm2" against &{Name:multinode-980367 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.2 ClusterName:multinode-980367 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.127 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.76 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.210 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 20:41:30.429138   46101 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 20:41:30.429437   46101 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 20:41:30.429504   46101 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19985-10581/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1204 20:41:30.443264   46101 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1204 20:41:30.443928   46101 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 20:41:30.443959   46101 cni.go:84] Creating CNI manager for ""
	I1204 20:41:30.444020   46101 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1204 20:41:30.444084   46101 start.go:340] cluster config:
	{Name:multinode-980367 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-980367 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.127 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.76 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.210 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provision
er:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemu
FirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 20:41:30.444228   46101 iso.go:125] acquiring lock: {Name:mk5fb0f3f6da76e6cd812291a551e1592ef2c232 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 20:41:30.445828   46101 out.go:177] * Starting "multinode-980367" primary control-plane node in "multinode-980367" cluster
	I1204 20:41:30.447126   46101 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 20:41:30.447152   46101 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1204 20:41:30.447158   46101 cache.go:56] Caching tarball of preloaded images
	I1204 20:41:30.447254   46101 preload.go:172] Found /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 20:41:30.447269   46101 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 20:41:30.447366   46101 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/multinode-980367/config.json ...
	I1204 20:41:30.447578   46101 start.go:360] acquireMachinesLock for multinode-980367: {Name:mkf124e8b45170ae95981b24944344de6899c5b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 20:41:30.447623   46101 start.go:364] duration metric: took 26.612µs to acquireMachinesLock for "multinode-980367"
	I1204 20:41:30.447642   46101 start.go:96] Skipping create...Using existing machine configuration
	I1204 20:41:30.447650   46101 fix.go:54] fixHost starting: 
	I1204 20:41:30.447890   46101 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:41:30.447924   46101 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:41:30.461060   46101 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45693
	I1204 20:41:30.461441   46101 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:41:30.461956   46101 main.go:141] libmachine: Using API Version  1
	I1204 20:41:30.461975   46101 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:41:30.462242   46101 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:41:30.462414   46101 main.go:141] libmachine: (multinode-980367) Calling .DriverName
	I1204 20:41:30.462531   46101 main.go:141] libmachine: (multinode-980367) Calling .GetState
	I1204 20:41:30.463995   46101 fix.go:112] recreateIfNeeded on multinode-980367: state=Running err=<nil>
	W1204 20:41:30.464023   46101 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 20:41:30.465673   46101 out.go:177] * Updating the running kvm2 "multinode-980367" VM ...
	I1204 20:41:30.466720   46101 machine.go:93] provisionDockerMachine start ...
	I1204 20:41:30.466738   46101 main.go:141] libmachine: (multinode-980367) Calling .DriverName
	I1204 20:41:30.466885   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHHostname
	I1204 20:41:30.469230   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:41:30.469633   46101 main.go:141] libmachine: (multinode-980367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:9b:dc", ip: ""} in network mk-multinode-980367: {Iface:virbr1 ExpiryTime:2024-12-04 21:36:04 +0000 UTC Type:0 Mac:52:54:00:b6:9b:dc Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-980367 Clientid:01:52:54:00:b6:9b:dc}
	I1204 20:41:30.469660   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined IP address 192.168.39.127 and MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:41:30.469797   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHPort
	I1204 20:41:30.469934   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHKeyPath
	I1204 20:41:30.470078   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHKeyPath
	I1204 20:41:30.470170   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHUsername
	I1204 20:41:30.470322   46101 main.go:141] libmachine: Using SSH client type: native
	I1204 20:41:30.470531   46101 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I1204 20:41:30.470546   46101 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 20:41:30.575982   46101 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-980367
	
	I1204 20:41:30.576019   46101 main.go:141] libmachine: (multinode-980367) Calling .GetMachineName
	I1204 20:41:30.576195   46101 buildroot.go:166] provisioning hostname "multinode-980367"
	I1204 20:41:30.576217   46101 main.go:141] libmachine: (multinode-980367) Calling .GetMachineName
	I1204 20:41:30.576403   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHHostname
	I1204 20:41:30.578926   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:41:30.579285   46101 main.go:141] libmachine: (multinode-980367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:9b:dc", ip: ""} in network mk-multinode-980367: {Iface:virbr1 ExpiryTime:2024-12-04 21:36:04 +0000 UTC Type:0 Mac:52:54:00:b6:9b:dc Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-980367 Clientid:01:52:54:00:b6:9b:dc}
	I1204 20:41:30.579304   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined IP address 192.168.39.127 and MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:41:30.579447   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHPort
	I1204 20:41:30.579597   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHKeyPath
	I1204 20:41:30.579728   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHKeyPath
	I1204 20:41:30.579845   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHUsername
	I1204 20:41:30.579979   46101 main.go:141] libmachine: Using SSH client type: native
	I1204 20:41:30.580126   46101 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I1204 20:41:30.580138   46101 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-980367 && echo "multinode-980367" | sudo tee /etc/hostname
	I1204 20:41:30.693343   46101 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-980367
	
	I1204 20:41:30.693376   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHHostname
	I1204 20:41:30.695982   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:41:30.696302   46101 main.go:141] libmachine: (multinode-980367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:9b:dc", ip: ""} in network mk-multinode-980367: {Iface:virbr1 ExpiryTime:2024-12-04 21:36:04 +0000 UTC Type:0 Mac:52:54:00:b6:9b:dc Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-980367 Clientid:01:52:54:00:b6:9b:dc}
	I1204 20:41:30.696323   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined IP address 192.168.39.127 and MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:41:30.696510   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHPort
	I1204 20:41:30.696668   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHKeyPath
	I1204 20:41:30.696808   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHKeyPath
	I1204 20:41:30.696915   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHUsername
	I1204 20:41:30.697029   46101 main.go:141] libmachine: Using SSH client type: native
	I1204 20:41:30.697174   46101 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I1204 20:41:30.697189   46101 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-980367' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-980367/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-980367' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 20:41:30.795808   46101 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 20:41:30.795853   46101 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 20:41:30.795886   46101 buildroot.go:174] setting up certificates
	I1204 20:41:30.795898   46101 provision.go:84] configureAuth start
	I1204 20:41:30.795915   46101 main.go:141] libmachine: (multinode-980367) Calling .GetMachineName
	I1204 20:41:30.796194   46101 main.go:141] libmachine: (multinode-980367) Calling .GetIP
	I1204 20:41:30.798764   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:41:30.799213   46101 main.go:141] libmachine: (multinode-980367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:9b:dc", ip: ""} in network mk-multinode-980367: {Iface:virbr1 ExpiryTime:2024-12-04 21:36:04 +0000 UTC Type:0 Mac:52:54:00:b6:9b:dc Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-980367 Clientid:01:52:54:00:b6:9b:dc}
	I1204 20:41:30.799262   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined IP address 192.168.39.127 and MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:41:30.799357   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHHostname
	I1204 20:41:30.801686   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:41:30.802066   46101 main.go:141] libmachine: (multinode-980367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:9b:dc", ip: ""} in network mk-multinode-980367: {Iface:virbr1 ExpiryTime:2024-12-04 21:36:04 +0000 UTC Type:0 Mac:52:54:00:b6:9b:dc Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-980367 Clientid:01:52:54:00:b6:9b:dc}
	I1204 20:41:30.802096   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined IP address 192.168.39.127 and MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:41:30.802218   46101 provision.go:143] copyHostCerts
	I1204 20:41:30.802246   46101 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 20:41:30.802280   46101 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 20:41:30.802289   46101 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 20:41:30.802355   46101 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 20:41:30.802441   46101 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 20:41:30.802466   46101 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 20:41:30.802473   46101 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 20:41:30.802497   46101 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 20:41:30.802552   46101 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 20:41:30.802569   46101 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 20:41:30.802574   46101 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 20:41:30.802599   46101 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 20:41:30.802700   46101 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.multinode-980367 san=[127.0.0.1 192.168.39.127 localhost minikube multinode-980367]
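At this point provision.go generates a server certificate signed by the machine CA with the SAN list printed above. The condensed Go sketch below shows that kind of SAN-bearing issuance with crypto/x509; the ca.pem / ca-key.pem / server.pem file names, the PKCS#1 key encoding, the RSA key size and the validity period are assumptions, not minikube's exact parameters.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

// mustPEM reads a file and returns its first PEM block, panicking on error
// to keep the sketch short.
func mustPEM(path string) *pem.Block {
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic(fmt.Sprintf("no PEM data in %s", path))
	}
	return block
}

func main() {
	// Assumed inputs: a CA certificate and a PKCS#1-encoded RSA CA key.
	caCert, err := x509.ParseCertificate(mustPEM("ca.pem").Bytes)
	if err != nil {
		panic(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustPEM("ca-key.pem").Bytes)
	if err != nil {
		panic(err)
	}

	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	// SANs taken from the log line above; everything else is illustrative.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-980367"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "multinode-980367"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.127")},
	}

	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}

	out, err := os.Create("server.pem")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	if err := pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
	fmt.Println("wrote server.pem signed by ca.pem")
}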
	I1204 20:41:31.020771   46101 provision.go:177] copyRemoteCerts
	I1204 20:41:31.020851   46101 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 20:41:31.020876   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHHostname
	I1204 20:41:31.023479   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:41:31.023822   46101 main.go:141] libmachine: (multinode-980367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:9b:dc", ip: ""} in network mk-multinode-980367: {Iface:virbr1 ExpiryTime:2024-12-04 21:36:04 +0000 UTC Type:0 Mac:52:54:00:b6:9b:dc Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-980367 Clientid:01:52:54:00:b6:9b:dc}
	I1204 20:41:31.023844   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined IP address 192.168.39.127 and MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:41:31.024050   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHPort
	I1204 20:41:31.024224   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHKeyPath
	I1204 20:41:31.024373   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHUsername
	I1204 20:41:31.024493   46101 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/multinode-980367/id_rsa Username:docker}
	I1204 20:41:31.101133   46101 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1204 20:41:31.101209   46101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 20:41:31.125886   46101 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1204 20:41:31.125952   46101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1204 20:41:31.149646   46101 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1204 20:41:31.149730   46101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 20:41:31.174550   46101 provision.go:87] duration metric: took 378.635665ms to configureAuth
	I1204 20:41:31.174583   46101 buildroot.go:189] setting minikube options for container-runtime
	I1204 20:41:31.174837   46101 config.go:182] Loaded profile config "multinode-980367": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:41:31.174923   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHHostname
	I1204 20:41:31.177288   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:41:31.177660   46101 main.go:141] libmachine: (multinode-980367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:9b:dc", ip: ""} in network mk-multinode-980367: {Iface:virbr1 ExpiryTime:2024-12-04 21:36:04 +0000 UTC Type:0 Mac:52:54:00:b6:9b:dc Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-980367 Clientid:01:52:54:00:b6:9b:dc}
	I1204 20:41:31.177710   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined IP address 192.168.39.127 and MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:41:31.177877   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHPort
	I1204 20:41:31.178056   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHKeyPath
	I1204 20:41:31.178174   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHKeyPath
	I1204 20:41:31.178328   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHUsername
	I1204 20:41:31.178472   46101 main.go:141] libmachine: Using SSH client type: native
	I1204 20:41:31.178628   46101 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I1204 20:41:31.178642   46101 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 20:43:01.926895   46101 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 20:43:01.926929   46101 machine.go:96] duration metric: took 1m31.460195118s to provisionDockerMachine
	I1204 20:43:01.926942   46101 start.go:293] postStartSetup for "multinode-980367" (driver="kvm2")
	I1204 20:43:01.926953   46101 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 20:43:01.926986   46101 main.go:141] libmachine: (multinode-980367) Calling .DriverName
	I1204 20:43:01.927328   46101 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 20:43:01.927364   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHHostname
	I1204 20:43:01.930522   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:43:01.931000   46101 main.go:141] libmachine: (multinode-980367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:9b:dc", ip: ""} in network mk-multinode-980367: {Iface:virbr1 ExpiryTime:2024-12-04 21:36:04 +0000 UTC Type:0 Mac:52:54:00:b6:9b:dc Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-980367 Clientid:01:52:54:00:b6:9b:dc}
	I1204 20:43:01.931033   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined IP address 192.168.39.127 and MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:43:01.931237   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHPort
	I1204 20:43:01.931421   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHKeyPath
	I1204 20:43:01.931586   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHUsername
	I1204 20:43:01.931716   46101 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/multinode-980367/id_rsa Username:docker}
	I1204 20:43:02.010441   46101 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 20:43:02.014340   46101 command_runner.go:130] > NAME=Buildroot
	I1204 20:43:02.014395   46101 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1204 20:43:02.014404   46101 command_runner.go:130] > ID=buildroot
	I1204 20:43:02.014412   46101 command_runner.go:130] > VERSION_ID=2023.02.9
	I1204 20:43:02.014419   46101 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1204 20:43:02.014472   46101 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 20:43:02.014495   46101 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 20:43:02.014566   46101 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 20:43:02.014647   46101 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 20:43:02.014658   46101 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> /etc/ssl/certs/177432.pem
	I1204 20:43:02.014743   46101 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 20:43:02.024687   46101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 20:43:02.047962   46101 start.go:296] duration metric: took 121.006853ms for postStartSetup
	I1204 20:43:02.048001   46101 fix.go:56] duration metric: took 1m31.600351476s for fixHost
	I1204 20:43:02.048021   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHHostname
	I1204 20:43:02.050883   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:43:02.051276   46101 main.go:141] libmachine: (multinode-980367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:9b:dc", ip: ""} in network mk-multinode-980367: {Iface:virbr1 ExpiryTime:2024-12-04 21:36:04 +0000 UTC Type:0 Mac:52:54:00:b6:9b:dc Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-980367 Clientid:01:52:54:00:b6:9b:dc}
	I1204 20:43:02.051304   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined IP address 192.168.39.127 and MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:43:02.051492   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHPort
	I1204 20:43:02.051682   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHKeyPath
	I1204 20:43:02.051862   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHKeyPath
	I1204 20:43:02.052051   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHUsername
	I1204 20:43:02.052231   46101 main.go:141] libmachine: Using SSH client type: native
	I1204 20:43:02.052435   46101 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I1204 20:43:02.052446   46101 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 20:43:02.148010   46101 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733344982.113341650
	
	I1204 20:43:02.148032   46101 fix.go:216] guest clock: 1733344982.113341650
	I1204 20:43:02.148042   46101 fix.go:229] Guest: 2024-12-04 20:43:02.11334165 +0000 UTC Remote: 2024-12-04 20:43:02.048005204 +0000 UTC m=+91.718994206 (delta=65.336446ms)
	I1204 20:43:02.148092   46101 fix.go:200] guest clock delta is within tolerance: 65.336446ms
	I1204 20:43:02.148098   46101 start.go:83] releasing machines lock for "multinode-980367", held for 1m31.700463454s
	I1204 20:43:02.148120   46101 main.go:141] libmachine: (multinode-980367) Calling .DriverName
	I1204 20:43:02.148366   46101 main.go:141] libmachine: (multinode-980367) Calling .GetIP
	I1204 20:43:02.150891   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:43:02.151324   46101 main.go:141] libmachine: (multinode-980367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:9b:dc", ip: ""} in network mk-multinode-980367: {Iface:virbr1 ExpiryTime:2024-12-04 21:36:04 +0000 UTC Type:0 Mac:52:54:00:b6:9b:dc Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-980367 Clientid:01:52:54:00:b6:9b:dc}
	I1204 20:43:02.151344   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined IP address 192.168.39.127 and MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:43:02.151556   46101 main.go:141] libmachine: (multinode-980367) Calling .DriverName
	I1204 20:43:02.152122   46101 main.go:141] libmachine: (multinode-980367) Calling .DriverName
	I1204 20:43:02.152302   46101 main.go:141] libmachine: (multinode-980367) Calling .DriverName
	I1204 20:43:02.152383   46101 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 20:43:02.152439   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHHostname
	I1204 20:43:02.152521   46101 ssh_runner.go:195] Run: cat /version.json
	I1204 20:43:02.152550   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHHostname
	I1204 20:43:02.155141   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:43:02.155396   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:43:02.155569   46101 main.go:141] libmachine: (multinode-980367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:9b:dc", ip: ""} in network mk-multinode-980367: {Iface:virbr1 ExpiryTime:2024-12-04 21:36:04 +0000 UTC Type:0 Mac:52:54:00:b6:9b:dc Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-980367 Clientid:01:52:54:00:b6:9b:dc}
	I1204 20:43:02.155593   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined IP address 192.168.39.127 and MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:43:02.155723   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHPort
	I1204 20:43:02.155887   46101 main.go:141] libmachine: (multinode-980367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:9b:dc", ip: ""} in network mk-multinode-980367: {Iface:virbr1 ExpiryTime:2024-12-04 21:36:04 +0000 UTC Type:0 Mac:52:54:00:b6:9b:dc Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-980367 Clientid:01:52:54:00:b6:9b:dc}
	I1204 20:43:02.155904   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHKeyPath
	I1204 20:43:02.155923   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined IP address 192.168.39.127 and MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:43:02.156097   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHPort
	I1204 20:43:02.156097   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHUsername
	I1204 20:43:02.156270   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHKeyPath
	I1204 20:43:02.156271   46101 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/multinode-980367/id_rsa Username:docker}
	I1204 20:43:02.156434   46101 main.go:141] libmachine: (multinode-980367) Calling .GetSSHUsername
	I1204 20:43:02.156552   46101 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/multinode-980367/id_rsa Username:docker}
	I1204 20:43:02.236444   46101 command_runner.go:130] > {"iso_version": "v1.34.0-1730913550-19917", "kicbase_version": "v0.0.45-1730888964-19917", "minikube_version": "v1.34.0", "commit": "72f43dde5d92c8ae490d0727dad53fb3ed6aa41e"}
	I1204 20:43:02.236864   46101 ssh_runner.go:195] Run: systemctl --version
	I1204 20:43:02.262624   46101 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1204 20:43:02.263386   46101 command_runner.go:130] > systemd 252 (252)
	I1204 20:43:02.263429   46101 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1204 20:43:02.263492   46101 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 20:43:02.432108   46101 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1204 20:43:02.443291   46101 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1204 20:43:02.443567   46101 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 20:43:02.443651   46101 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 20:43:02.454403   46101 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1204 20:43:02.454426   46101 start.go:495] detecting cgroup driver to use...
	I1204 20:43:02.454492   46101 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 20:43:02.472476   46101 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 20:43:02.487488   46101 docker.go:217] disabling cri-docker service (if available) ...
	I1204 20:43:02.487551   46101 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 20:43:02.501596   46101 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 20:43:02.516133   46101 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 20:43:02.691095   46101 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 20:43:02.843474   46101 docker.go:233] disabling docker service ...
	I1204 20:43:02.843542   46101 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 20:43:02.860646   46101 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 20:43:02.873907   46101 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 20:43:03.015535   46101 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 20:43:03.184744   46101 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 20:43:03.211147   46101 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 20:43:03.237439   46101 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1204 20:43:03.237498   46101 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 20:43:03.237582   46101 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:43:03.257335   46101 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 20:43:03.257402   46101 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:43:03.288865   46101 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:43:03.302989   46101 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:43:03.323913   46101 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 20:43:03.336568   46101 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:43:03.350483   46101 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:43:03.363200   46101 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:43:03.375080   46101 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 20:43:03.384338   46101 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1204 20:43:03.384585   46101 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 20:43:03.393441   46101 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 20:43:03.542407   46101 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 20:43:13.607542   46101 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.065091128s)
	I1204 20:43:13.607572   46101 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 20:43:13.607617   46101 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 20:43:13.612772   46101 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1204 20:43:13.612801   46101 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1204 20:43:13.612810   46101 command_runner.go:130] > Device: 0,22	Inode: 1337        Links: 1
	I1204 20:43:13.612820   46101 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1204 20:43:13.612828   46101 command_runner.go:130] > Access: 2024-12-04 20:43:13.482201422 +0000
	I1204 20:43:13.612850   46101 command_runner.go:130] > Modify: 2024-12-04 20:43:13.440198115 +0000
	I1204 20:43:13.612858   46101 command_runner.go:130] > Change: 2024-12-04 20:43:13.440198115 +0000
	I1204 20:43:13.612866   46101 command_runner.go:130] >  Birth: -
	I1204 20:43:13.612899   46101 start.go:563] Will wait 60s for crictl version
	I1204 20:43:13.612948   46101 ssh_runner.go:195] Run: which crictl
	I1204 20:43:13.616561   46101 command_runner.go:130] > /usr/bin/crictl
	I1204 20:43:13.616633   46101 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 20:43:13.657567   46101 command_runner.go:130] > Version:  0.1.0
	I1204 20:43:13.657596   46101 command_runner.go:130] > RuntimeName:  cri-o
	I1204 20:43:13.657603   46101 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1204 20:43:13.657611   46101 command_runner.go:130] > RuntimeApiVersion:  v1
	I1204 20:43:13.657665   46101 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 20:43:13.657748   46101 ssh_runner.go:195] Run: crio --version
	I1204 20:43:13.685984   46101 command_runner.go:130] > crio version 1.29.1
	I1204 20:43:13.686016   46101 command_runner.go:130] > Version:        1.29.1
	I1204 20:43:13.686025   46101 command_runner.go:130] > GitCommit:      unknown
	I1204 20:43:13.686030   46101 command_runner.go:130] > GitCommitDate:  unknown
	I1204 20:43:13.686036   46101 command_runner.go:130] > GitTreeState:   clean
	I1204 20:43:13.686044   46101 command_runner.go:130] > BuildDate:      2024-11-06T23:09:37Z
	I1204 20:43:13.686050   46101 command_runner.go:130] > GoVersion:      go1.21.6
	I1204 20:43:13.686055   46101 command_runner.go:130] > Compiler:       gc
	I1204 20:43:13.686062   46101 command_runner.go:130] > Platform:       linux/amd64
	I1204 20:43:13.686068   46101 command_runner.go:130] > Linkmode:       dynamic
	I1204 20:43:13.686075   46101 command_runner.go:130] > BuildTags:      
	I1204 20:43:13.686081   46101 command_runner.go:130] >   containers_image_ostree_stub
	I1204 20:43:13.686088   46101 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1204 20:43:13.686095   46101 command_runner.go:130] >   btrfs_noversion
	I1204 20:43:13.686108   46101 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1204 20:43:13.686117   46101 command_runner.go:130] >   libdm_no_deferred_remove
	I1204 20:43:13.686123   46101 command_runner.go:130] >   seccomp
	I1204 20:43:13.686133   46101 command_runner.go:130] > LDFlags:          unknown
	I1204 20:43:13.686137   46101 command_runner.go:130] > SeccompEnabled:   true
	I1204 20:43:13.686145   46101 command_runner.go:130] > AppArmorEnabled:  false
	I1204 20:43:13.686222   46101 ssh_runner.go:195] Run: crio --version
	I1204 20:43:13.713718   46101 command_runner.go:130] > crio version 1.29.1
	I1204 20:43:13.713765   46101 command_runner.go:130] > Version:        1.29.1
	I1204 20:43:13.713773   46101 command_runner.go:130] > GitCommit:      unknown
	I1204 20:43:13.713778   46101 command_runner.go:130] > GitCommitDate:  unknown
	I1204 20:43:13.713782   46101 command_runner.go:130] > GitTreeState:   clean
	I1204 20:43:13.713787   46101 command_runner.go:130] > BuildDate:      2024-11-06T23:09:37Z
	I1204 20:43:13.713791   46101 command_runner.go:130] > GoVersion:      go1.21.6
	I1204 20:43:13.713795   46101 command_runner.go:130] > Compiler:       gc
	I1204 20:43:13.713802   46101 command_runner.go:130] > Platform:       linux/amd64
	I1204 20:43:13.713809   46101 command_runner.go:130] > Linkmode:       dynamic
	I1204 20:43:13.713815   46101 command_runner.go:130] > BuildTags:      
	I1204 20:43:13.713836   46101 command_runner.go:130] >   containers_image_ostree_stub
	I1204 20:43:13.713844   46101 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1204 20:43:13.713851   46101 command_runner.go:130] >   btrfs_noversion
	I1204 20:43:13.713861   46101 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1204 20:43:13.713867   46101 command_runner.go:130] >   libdm_no_deferred_remove
	I1204 20:43:13.713871   46101 command_runner.go:130] >   seccomp
	I1204 20:43:13.713875   46101 command_runner.go:130] > LDFlags:          unknown
	I1204 20:43:13.713880   46101 command_runner.go:130] > SeccompEnabled:   true
	I1204 20:43:13.713889   46101 command_runner.go:130] > AppArmorEnabled:  false
	I1204 20:43:13.716491   46101 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 20:43:13.717729   46101 main.go:141] libmachine: (multinode-980367) Calling .GetIP
	I1204 20:43:13.720764   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:43:13.721182   46101 main.go:141] libmachine: (multinode-980367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:9b:dc", ip: ""} in network mk-multinode-980367: {Iface:virbr1 ExpiryTime:2024-12-04 21:36:04 +0000 UTC Type:0 Mac:52:54:00:b6:9b:dc Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-980367 Clientid:01:52:54:00:b6:9b:dc}
	I1204 20:43:13.721208   46101 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined IP address 192.168.39.127 and MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:43:13.721396   46101 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1204 20:43:13.725398   46101 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1204 20:43:13.725489   46101 kubeadm.go:883] updating cluster {Name:multinode-980367 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-980367 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.127 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.76 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.210 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 20:43:13.725613   46101 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 20:43:13.725662   46101 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 20:43:13.765695   46101 command_runner.go:130] > {
	I1204 20:43:13.765723   46101 command_runner.go:130] >   "images": [
	I1204 20:43:13.765727   46101 command_runner.go:130] >     {
	I1204 20:43:13.765755   46101 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1204 20:43:13.765762   46101 command_runner.go:130] >       "repoTags": [
	I1204 20:43:13.765770   46101 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1204 20:43:13.765773   46101 command_runner.go:130] >       ],
	I1204 20:43:13.765777   46101 command_runner.go:130] >       "repoDigests": [
	I1204 20:43:13.765786   46101 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1204 20:43:13.765793   46101 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1204 20:43:13.765797   46101 command_runner.go:130] >       ],
	I1204 20:43:13.765801   46101 command_runner.go:130] >       "size": "94965812",
	I1204 20:43:13.765805   46101 command_runner.go:130] >       "uid": null,
	I1204 20:43:13.765810   46101 command_runner.go:130] >       "username": "",
	I1204 20:43:13.765817   46101 command_runner.go:130] >       "spec": null,
	I1204 20:43:13.765824   46101 command_runner.go:130] >       "pinned": false
	I1204 20:43:13.765828   46101 command_runner.go:130] >     },
	I1204 20:43:13.765831   46101 command_runner.go:130] >     {
	I1204 20:43:13.765837   46101 command_runner.go:130] >       "id": "9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5",
	I1204 20:43:13.765842   46101 command_runner.go:130] >       "repoTags": [
	I1204 20:43:13.765847   46101 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241023-a345ebe4"
	I1204 20:43:13.765851   46101 command_runner.go:130] >       ],
	I1204 20:43:13.765858   46101 command_runner.go:130] >       "repoDigests": [
	I1204 20:43:13.765864   46101 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16",
	I1204 20:43:13.765871   46101 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e39a44bd13d0b4532d0436a1c2fafdd1a8c57fb327770004098162f0bb96132d"
	I1204 20:43:13.765875   46101 command_runner.go:130] >       ],
	I1204 20:43:13.765880   46101 command_runner.go:130] >       "size": "94958644",
	I1204 20:43:13.765885   46101 command_runner.go:130] >       "uid": null,
	I1204 20:43:13.765891   46101 command_runner.go:130] >       "username": "",
	I1204 20:43:13.765896   46101 command_runner.go:130] >       "spec": null,
	I1204 20:43:13.765900   46101 command_runner.go:130] >       "pinned": false
	I1204 20:43:13.765903   46101 command_runner.go:130] >     },
	I1204 20:43:13.765907   46101 command_runner.go:130] >     {
	I1204 20:43:13.765913   46101 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1204 20:43:13.765920   46101 command_runner.go:130] >       "repoTags": [
	I1204 20:43:13.765924   46101 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1204 20:43:13.765928   46101 command_runner.go:130] >       ],
	I1204 20:43:13.765932   46101 command_runner.go:130] >       "repoDigests": [
	I1204 20:43:13.765939   46101 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1204 20:43:13.765946   46101 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1204 20:43:13.765950   46101 command_runner.go:130] >       ],
	I1204 20:43:13.765954   46101 command_runner.go:130] >       "size": "1363676",
	I1204 20:43:13.765958   46101 command_runner.go:130] >       "uid": null,
	I1204 20:43:13.765962   46101 command_runner.go:130] >       "username": "",
	I1204 20:43:13.765966   46101 command_runner.go:130] >       "spec": null,
	I1204 20:43:13.765971   46101 command_runner.go:130] >       "pinned": false
	I1204 20:43:13.765974   46101 command_runner.go:130] >     },
	I1204 20:43:13.765978   46101 command_runner.go:130] >     {
	I1204 20:43:13.765986   46101 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1204 20:43:13.765990   46101 command_runner.go:130] >       "repoTags": [
	I1204 20:43:13.765995   46101 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1204 20:43:13.766001   46101 command_runner.go:130] >       ],
	I1204 20:43:13.766005   46101 command_runner.go:130] >       "repoDigests": [
	I1204 20:43:13.766012   46101 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1204 20:43:13.766025   46101 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1204 20:43:13.766030   46101 command_runner.go:130] >       ],
	I1204 20:43:13.766035   46101 command_runner.go:130] >       "size": "31470524",
	I1204 20:43:13.766046   46101 command_runner.go:130] >       "uid": null,
	I1204 20:43:13.766054   46101 command_runner.go:130] >       "username": "",
	I1204 20:43:13.766059   46101 command_runner.go:130] >       "spec": null,
	I1204 20:43:13.766065   46101 command_runner.go:130] >       "pinned": false
	I1204 20:43:13.766069   46101 command_runner.go:130] >     },
	I1204 20:43:13.766072   46101 command_runner.go:130] >     {
	I1204 20:43:13.766079   46101 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1204 20:43:13.766086   46101 command_runner.go:130] >       "repoTags": [
	I1204 20:43:13.766091   46101 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1204 20:43:13.766097   46101 command_runner.go:130] >       ],
	I1204 20:43:13.766101   46101 command_runner.go:130] >       "repoDigests": [
	I1204 20:43:13.766108   46101 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1204 20:43:13.766117   46101 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1204 20:43:13.766122   46101 command_runner.go:130] >       ],
	I1204 20:43:13.766126   46101 command_runner.go:130] >       "size": "63273227",
	I1204 20:43:13.766131   46101 command_runner.go:130] >       "uid": null,
	I1204 20:43:13.766137   46101 command_runner.go:130] >       "username": "nonroot",
	I1204 20:43:13.766161   46101 command_runner.go:130] >       "spec": null,
	I1204 20:43:13.766165   46101 command_runner.go:130] >       "pinned": false
	I1204 20:43:13.766171   46101 command_runner.go:130] >     },
	I1204 20:43:13.766174   46101 command_runner.go:130] >     {
	I1204 20:43:13.766180   46101 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1204 20:43:13.766184   46101 command_runner.go:130] >       "repoTags": [
	I1204 20:43:13.766189   46101 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1204 20:43:13.766192   46101 command_runner.go:130] >       ],
	I1204 20:43:13.766196   46101 command_runner.go:130] >       "repoDigests": [
	I1204 20:43:13.766205   46101 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1204 20:43:13.766212   46101 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1204 20:43:13.766218   46101 command_runner.go:130] >       ],
	I1204 20:43:13.766222   46101 command_runner.go:130] >       "size": "149009664",
	I1204 20:43:13.766226   46101 command_runner.go:130] >       "uid": {
	I1204 20:43:13.766230   46101 command_runner.go:130] >         "value": "0"
	I1204 20:43:13.766234   46101 command_runner.go:130] >       },
	I1204 20:43:13.766238   46101 command_runner.go:130] >       "username": "",
	I1204 20:43:13.766242   46101 command_runner.go:130] >       "spec": null,
	I1204 20:43:13.766247   46101 command_runner.go:130] >       "pinned": false
	I1204 20:43:13.766253   46101 command_runner.go:130] >     },
	I1204 20:43:13.766258   46101 command_runner.go:130] >     {
	I1204 20:43:13.766266   46101 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1204 20:43:13.766271   46101 command_runner.go:130] >       "repoTags": [
	I1204 20:43:13.766276   46101 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1204 20:43:13.766282   46101 command_runner.go:130] >       ],
	I1204 20:43:13.766286   46101 command_runner.go:130] >       "repoDigests": [
	I1204 20:43:13.766293   46101 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1204 20:43:13.766302   46101 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1204 20:43:13.766306   46101 command_runner.go:130] >       ],
	I1204 20:43:13.766310   46101 command_runner.go:130] >       "size": "95274464",
	I1204 20:43:13.766316   46101 command_runner.go:130] >       "uid": {
	I1204 20:43:13.766319   46101 command_runner.go:130] >         "value": "0"
	I1204 20:43:13.766323   46101 command_runner.go:130] >       },
	I1204 20:43:13.766329   46101 command_runner.go:130] >       "username": "",
	I1204 20:43:13.766333   46101 command_runner.go:130] >       "spec": null,
	I1204 20:43:13.766339   46101 command_runner.go:130] >       "pinned": false
	I1204 20:43:13.766342   46101 command_runner.go:130] >     },
	I1204 20:43:13.766347   46101 command_runner.go:130] >     {
	I1204 20:43:13.766353   46101 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1204 20:43:13.766359   46101 command_runner.go:130] >       "repoTags": [
	I1204 20:43:13.766364   46101 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1204 20:43:13.766370   46101 command_runner.go:130] >       ],
	I1204 20:43:13.766374   46101 command_runner.go:130] >       "repoDigests": [
	I1204 20:43:13.766387   46101 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1204 20:43:13.766402   46101 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1204 20:43:13.766406   46101 command_runner.go:130] >       ],
	I1204 20:43:13.766410   46101 command_runner.go:130] >       "size": "89474374",
	I1204 20:43:13.766413   46101 command_runner.go:130] >       "uid": {
	I1204 20:43:13.766417   46101 command_runner.go:130] >         "value": "0"
	I1204 20:43:13.766421   46101 command_runner.go:130] >       },
	I1204 20:43:13.766425   46101 command_runner.go:130] >       "username": "",
	I1204 20:43:13.766429   46101 command_runner.go:130] >       "spec": null,
	I1204 20:43:13.766433   46101 command_runner.go:130] >       "pinned": false
	I1204 20:43:13.766436   46101 command_runner.go:130] >     },
	I1204 20:43:13.766439   46101 command_runner.go:130] >     {
	I1204 20:43:13.766445   46101 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1204 20:43:13.766449   46101 command_runner.go:130] >       "repoTags": [
	I1204 20:43:13.766453   46101 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1204 20:43:13.766457   46101 command_runner.go:130] >       ],
	I1204 20:43:13.766464   46101 command_runner.go:130] >       "repoDigests": [
	I1204 20:43:13.766471   46101 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1204 20:43:13.766478   46101 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1204 20:43:13.766483   46101 command_runner.go:130] >       ],
	I1204 20:43:13.766488   46101 command_runner.go:130] >       "size": "92783513",
	I1204 20:43:13.766492   46101 command_runner.go:130] >       "uid": null,
	I1204 20:43:13.766496   46101 command_runner.go:130] >       "username": "",
	I1204 20:43:13.766503   46101 command_runner.go:130] >       "spec": null,
	I1204 20:43:13.766507   46101 command_runner.go:130] >       "pinned": false
	I1204 20:43:13.766510   46101 command_runner.go:130] >     },
	I1204 20:43:13.766514   46101 command_runner.go:130] >     {
	I1204 20:43:13.766520   46101 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1204 20:43:13.766524   46101 command_runner.go:130] >       "repoTags": [
	I1204 20:43:13.766529   46101 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1204 20:43:13.766535   46101 command_runner.go:130] >       ],
	I1204 20:43:13.766539   46101 command_runner.go:130] >       "repoDigests": [
	I1204 20:43:13.766547   46101 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1204 20:43:13.766556   46101 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1204 20:43:13.766560   46101 command_runner.go:130] >       ],
	I1204 20:43:13.766564   46101 command_runner.go:130] >       "size": "68457798",
	I1204 20:43:13.766568   46101 command_runner.go:130] >       "uid": {
	I1204 20:43:13.766571   46101 command_runner.go:130] >         "value": "0"
	I1204 20:43:13.766575   46101 command_runner.go:130] >       },
	I1204 20:43:13.766579   46101 command_runner.go:130] >       "username": "",
	I1204 20:43:13.766583   46101 command_runner.go:130] >       "spec": null,
	I1204 20:43:13.766587   46101 command_runner.go:130] >       "pinned": false
	I1204 20:43:13.766591   46101 command_runner.go:130] >     },
	I1204 20:43:13.766594   46101 command_runner.go:130] >     {
	I1204 20:43:13.766600   46101 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1204 20:43:13.766607   46101 command_runner.go:130] >       "repoTags": [
	I1204 20:43:13.766611   46101 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1204 20:43:13.766614   46101 command_runner.go:130] >       ],
	I1204 20:43:13.766618   46101 command_runner.go:130] >       "repoDigests": [
	I1204 20:43:13.766624   46101 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1204 20:43:13.766634   46101 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1204 20:43:13.766637   46101 command_runner.go:130] >       ],
	I1204 20:43:13.766641   46101 command_runner.go:130] >       "size": "742080",
	I1204 20:43:13.766644   46101 command_runner.go:130] >       "uid": {
	I1204 20:43:13.766648   46101 command_runner.go:130] >         "value": "65535"
	I1204 20:43:13.766652   46101 command_runner.go:130] >       },
	I1204 20:43:13.766655   46101 command_runner.go:130] >       "username": "",
	I1204 20:43:13.766659   46101 command_runner.go:130] >       "spec": null,
	I1204 20:43:13.766663   46101 command_runner.go:130] >       "pinned": true
	I1204 20:43:13.766668   46101 command_runner.go:130] >     }
	I1204 20:43:13.766671   46101 command_runner.go:130] >   ]
	I1204 20:43:13.766674   46101 command_runner.go:130] > }
	I1204 20:43:13.767562   46101 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 20:43:13.767582   46101 crio.go:433] Images already preloaded, skipping extraction
	I1204 20:43:13.767629   46101 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 20:43:13.799639   46101 command_runner.go:130] > {
	I1204 20:43:13.799662   46101 command_runner.go:130] >   "images": [
	I1204 20:43:13.799668   46101 command_runner.go:130] >     {
	I1204 20:43:13.799675   46101 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1204 20:43:13.799682   46101 command_runner.go:130] >       "repoTags": [
	I1204 20:43:13.799694   46101 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1204 20:43:13.799702   46101 command_runner.go:130] >       ],
	I1204 20:43:13.799708   46101 command_runner.go:130] >       "repoDigests": [
	I1204 20:43:13.799721   46101 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1204 20:43:13.799731   46101 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1204 20:43:13.799737   46101 command_runner.go:130] >       ],
	I1204 20:43:13.799745   46101 command_runner.go:130] >       "size": "94965812",
	I1204 20:43:13.799751   46101 command_runner.go:130] >       "uid": null,
	I1204 20:43:13.799758   46101 command_runner.go:130] >       "username": "",
	I1204 20:43:13.799773   46101 command_runner.go:130] >       "spec": null,
	I1204 20:43:13.799780   46101 command_runner.go:130] >       "pinned": false
	I1204 20:43:13.799786   46101 command_runner.go:130] >     },
	I1204 20:43:13.799792   46101 command_runner.go:130] >     {
	I1204 20:43:13.799802   46101 command_runner.go:130] >       "id": "9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5",
	I1204 20:43:13.799809   46101 command_runner.go:130] >       "repoTags": [
	I1204 20:43:13.799820   46101 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241023-a345ebe4"
	I1204 20:43:13.799830   46101 command_runner.go:130] >       ],
	I1204 20:43:13.799836   46101 command_runner.go:130] >       "repoDigests": [
	I1204 20:43:13.799846   46101 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16",
	I1204 20:43:13.799861   46101 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e39a44bd13d0b4532d0436a1c2fafdd1a8c57fb327770004098162f0bb96132d"
	I1204 20:43:13.799867   46101 command_runner.go:130] >       ],
	I1204 20:43:13.799874   46101 command_runner.go:130] >       "size": "94958644",
	I1204 20:43:13.799881   46101 command_runner.go:130] >       "uid": null,
	I1204 20:43:13.799891   46101 command_runner.go:130] >       "username": "",
	I1204 20:43:13.799899   46101 command_runner.go:130] >       "spec": null,
	I1204 20:43:13.799907   46101 command_runner.go:130] >       "pinned": false
	I1204 20:43:13.799916   46101 command_runner.go:130] >     },
	I1204 20:43:13.799919   46101 command_runner.go:130] >     {
	I1204 20:43:13.799926   46101 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1204 20:43:13.799931   46101 command_runner.go:130] >       "repoTags": [
	I1204 20:43:13.799937   46101 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1204 20:43:13.799940   46101 command_runner.go:130] >       ],
	I1204 20:43:13.799945   46101 command_runner.go:130] >       "repoDigests": [
	I1204 20:43:13.799965   46101 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1204 20:43:13.799975   46101 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1204 20:43:13.799978   46101 command_runner.go:130] >       ],
	I1204 20:43:13.799982   46101 command_runner.go:130] >       "size": "1363676",
	I1204 20:43:13.799986   46101 command_runner.go:130] >       "uid": null,
	I1204 20:43:13.799990   46101 command_runner.go:130] >       "username": "",
	I1204 20:43:13.799997   46101 command_runner.go:130] >       "spec": null,
	I1204 20:43:13.800003   46101 command_runner.go:130] >       "pinned": false
	I1204 20:43:13.800006   46101 command_runner.go:130] >     },
	I1204 20:43:13.800012   46101 command_runner.go:130] >     {
	I1204 20:43:13.800018   46101 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1204 20:43:13.800025   46101 command_runner.go:130] >       "repoTags": [
	I1204 20:43:13.800030   46101 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1204 20:43:13.800036   46101 command_runner.go:130] >       ],
	I1204 20:43:13.800041   46101 command_runner.go:130] >       "repoDigests": [
	I1204 20:43:13.800052   46101 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1204 20:43:13.800066   46101 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1204 20:43:13.800072   46101 command_runner.go:130] >       ],
	I1204 20:43:13.800077   46101 command_runner.go:130] >       "size": "31470524",
	I1204 20:43:13.800084   46101 command_runner.go:130] >       "uid": null,
	I1204 20:43:13.800088   46101 command_runner.go:130] >       "username": "",
	I1204 20:43:13.800095   46101 command_runner.go:130] >       "spec": null,
	I1204 20:43:13.800098   46101 command_runner.go:130] >       "pinned": false
	I1204 20:43:13.800104   46101 command_runner.go:130] >     },
	I1204 20:43:13.800108   46101 command_runner.go:130] >     {
	I1204 20:43:13.800116   46101 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1204 20:43:13.800121   46101 command_runner.go:130] >       "repoTags": [
	I1204 20:43:13.800126   46101 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1204 20:43:13.800132   46101 command_runner.go:130] >       ],
	I1204 20:43:13.800135   46101 command_runner.go:130] >       "repoDigests": [
	I1204 20:43:13.800145   46101 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1204 20:43:13.800153   46101 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1204 20:43:13.800157   46101 command_runner.go:130] >       ],
	I1204 20:43:13.800161   46101 command_runner.go:130] >       "size": "63273227",
	I1204 20:43:13.800164   46101 command_runner.go:130] >       "uid": null,
	I1204 20:43:13.800168   46101 command_runner.go:130] >       "username": "nonroot",
	I1204 20:43:13.800172   46101 command_runner.go:130] >       "spec": null,
	I1204 20:43:13.800179   46101 command_runner.go:130] >       "pinned": false
	I1204 20:43:13.800183   46101 command_runner.go:130] >     },
	I1204 20:43:13.800190   46101 command_runner.go:130] >     {
	I1204 20:43:13.800196   46101 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1204 20:43:13.800202   46101 command_runner.go:130] >       "repoTags": [
	I1204 20:43:13.800207   46101 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1204 20:43:13.800213   46101 command_runner.go:130] >       ],
	I1204 20:43:13.800218   46101 command_runner.go:130] >       "repoDigests": [
	I1204 20:43:13.800227   46101 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1204 20:43:13.800236   46101 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1204 20:43:13.800242   46101 command_runner.go:130] >       ],
	I1204 20:43:13.800247   46101 command_runner.go:130] >       "size": "149009664",
	I1204 20:43:13.800254   46101 command_runner.go:130] >       "uid": {
	I1204 20:43:13.800258   46101 command_runner.go:130] >         "value": "0"
	I1204 20:43:13.800264   46101 command_runner.go:130] >       },
	I1204 20:43:13.800270   46101 command_runner.go:130] >       "username": "",
	I1204 20:43:13.800274   46101 command_runner.go:130] >       "spec": null,
	I1204 20:43:13.800280   46101 command_runner.go:130] >       "pinned": false
	I1204 20:43:13.800284   46101 command_runner.go:130] >     },
	I1204 20:43:13.800292   46101 command_runner.go:130] >     {
	I1204 20:43:13.800298   46101 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1204 20:43:13.800304   46101 command_runner.go:130] >       "repoTags": [
	I1204 20:43:13.800309   46101 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1204 20:43:13.800315   46101 command_runner.go:130] >       ],
	I1204 20:43:13.800319   46101 command_runner.go:130] >       "repoDigests": [
	I1204 20:43:13.800329   46101 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1204 20:43:13.800338   46101 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1204 20:43:13.800344   46101 command_runner.go:130] >       ],
	I1204 20:43:13.800349   46101 command_runner.go:130] >       "size": "95274464",
	I1204 20:43:13.800355   46101 command_runner.go:130] >       "uid": {
	I1204 20:43:13.800358   46101 command_runner.go:130] >         "value": "0"
	I1204 20:43:13.800364   46101 command_runner.go:130] >       },
	I1204 20:43:13.800368   46101 command_runner.go:130] >       "username": "",
	I1204 20:43:13.800374   46101 command_runner.go:130] >       "spec": null,
	I1204 20:43:13.800377   46101 command_runner.go:130] >       "pinned": false
	I1204 20:43:13.800383   46101 command_runner.go:130] >     },
	I1204 20:43:13.800386   46101 command_runner.go:130] >     {
	I1204 20:43:13.800394   46101 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1204 20:43:13.800400   46101 command_runner.go:130] >       "repoTags": [
	I1204 20:43:13.800405   46101 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1204 20:43:13.800410   46101 command_runner.go:130] >       ],
	I1204 20:43:13.800414   46101 command_runner.go:130] >       "repoDigests": [
	I1204 20:43:13.800430   46101 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1204 20:43:13.800444   46101 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1204 20:43:13.800450   46101 command_runner.go:130] >       ],
	I1204 20:43:13.800455   46101 command_runner.go:130] >       "size": "89474374",
	I1204 20:43:13.800462   46101 command_runner.go:130] >       "uid": {
	I1204 20:43:13.800465   46101 command_runner.go:130] >         "value": "0"
	I1204 20:43:13.800471   46101 command_runner.go:130] >       },
	I1204 20:43:13.800476   46101 command_runner.go:130] >       "username": "",
	I1204 20:43:13.800482   46101 command_runner.go:130] >       "spec": null,
	I1204 20:43:13.800486   46101 command_runner.go:130] >       "pinned": false
	I1204 20:43:13.800492   46101 command_runner.go:130] >     },
	I1204 20:43:13.800495   46101 command_runner.go:130] >     {
	I1204 20:43:13.800503   46101 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1204 20:43:13.800507   46101 command_runner.go:130] >       "repoTags": [
	I1204 20:43:13.800514   46101 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1204 20:43:13.800517   46101 command_runner.go:130] >       ],
	I1204 20:43:13.800524   46101 command_runner.go:130] >       "repoDigests": [
	I1204 20:43:13.800531   46101 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1204 20:43:13.800542   46101 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1204 20:43:13.800548   46101 command_runner.go:130] >       ],
	I1204 20:43:13.800552   46101 command_runner.go:130] >       "size": "92783513",
	I1204 20:43:13.800558   46101 command_runner.go:130] >       "uid": null,
	I1204 20:43:13.800562   46101 command_runner.go:130] >       "username": "",
	I1204 20:43:13.800567   46101 command_runner.go:130] >       "spec": null,
	I1204 20:43:13.800572   46101 command_runner.go:130] >       "pinned": false
	I1204 20:43:13.800578   46101 command_runner.go:130] >     },
	I1204 20:43:13.800581   46101 command_runner.go:130] >     {
	I1204 20:43:13.800589   46101 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1204 20:43:13.800594   46101 command_runner.go:130] >       "repoTags": [
	I1204 20:43:13.800601   46101 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1204 20:43:13.800604   46101 command_runner.go:130] >       ],
	I1204 20:43:13.800609   46101 command_runner.go:130] >       "repoDigests": [
	I1204 20:43:13.800618   46101 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1204 20:43:13.800627   46101 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1204 20:43:13.800634   46101 command_runner.go:130] >       ],
	I1204 20:43:13.800639   46101 command_runner.go:130] >       "size": "68457798",
	I1204 20:43:13.800645   46101 command_runner.go:130] >       "uid": {
	I1204 20:43:13.800649   46101 command_runner.go:130] >         "value": "0"
	I1204 20:43:13.800655   46101 command_runner.go:130] >       },
	I1204 20:43:13.800659   46101 command_runner.go:130] >       "username": "",
	I1204 20:43:13.800665   46101 command_runner.go:130] >       "spec": null,
	I1204 20:43:13.800669   46101 command_runner.go:130] >       "pinned": false
	I1204 20:43:13.800675   46101 command_runner.go:130] >     },
	I1204 20:43:13.800678   46101 command_runner.go:130] >     {
	I1204 20:43:13.800684   46101 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1204 20:43:13.800691   46101 command_runner.go:130] >       "repoTags": [
	I1204 20:43:13.800695   46101 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1204 20:43:13.800701   46101 command_runner.go:130] >       ],
	I1204 20:43:13.800705   46101 command_runner.go:130] >       "repoDigests": [
	I1204 20:43:13.800714   46101 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1204 20:43:13.800722   46101 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1204 20:43:13.800728   46101 command_runner.go:130] >       ],
	I1204 20:43:13.800732   46101 command_runner.go:130] >       "size": "742080",
	I1204 20:43:13.800739   46101 command_runner.go:130] >       "uid": {
	I1204 20:43:13.800742   46101 command_runner.go:130] >         "value": "65535"
	I1204 20:43:13.800748   46101 command_runner.go:130] >       },
	I1204 20:43:13.800752   46101 command_runner.go:130] >       "username": "",
	I1204 20:43:13.800758   46101 command_runner.go:130] >       "spec": null,
	I1204 20:43:13.800762   46101 command_runner.go:130] >       "pinned": true
	I1204 20:43:13.800765   46101 command_runner.go:130] >     }
	I1204 20:43:13.800771   46101 command_runner.go:130] >   ]
	I1204 20:43:13.800774   46101 command_runner.go:130] > }
	I1204 20:43:13.800884   46101 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 20:43:13.800895   46101 cache_images.go:84] Images are preloaded, skipping loading
	I1204 20:43:13.800902   46101 kubeadm.go:934] updating node { 192.168.39.127 8443 v1.31.2 crio true true} ...
	I1204 20:43:13.800990   46101 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-980367 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.127
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:multinode-980367 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 20:43:13.801065   46101 ssh_runner.go:195] Run: crio config
	I1204 20:43:13.840352   46101 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1204 20:43:13.840380   46101 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1204 20:43:13.840387   46101 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1204 20:43:13.840390   46101 command_runner.go:130] > #
	I1204 20:43:13.840397   46101 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1204 20:43:13.840403   46101 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1204 20:43:13.840409   46101 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1204 20:43:13.840416   46101 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1204 20:43:13.840419   46101 command_runner.go:130] > # reload'.
	I1204 20:43:13.840425   46101 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1204 20:43:13.840432   46101 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1204 20:43:13.840447   46101 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1204 20:43:13.840455   46101 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1204 20:43:13.840461   46101 command_runner.go:130] > [crio]
	I1204 20:43:13.840470   46101 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1204 20:43:13.840479   46101 command_runner.go:130] > # containers images, in this directory.
	I1204 20:43:13.840487   46101 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1204 20:43:13.840512   46101 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1204 20:43:13.840598   46101 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1204 20:43:13.840622   46101 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1204 20:43:13.840830   46101 command_runner.go:130] > # imagestore = ""
	I1204 20:43:13.840848   46101 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1204 20:43:13.840857   46101 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1204 20:43:13.840985   46101 command_runner.go:130] > storage_driver = "overlay"
	I1204 20:43:13.841009   46101 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1204 20:43:13.841020   46101 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1204 20:43:13.841028   46101 command_runner.go:130] > storage_option = [
	I1204 20:43:13.841146   46101 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1204 20:43:13.841191   46101 command_runner.go:130] > ]
	I1204 20:43:13.841207   46101 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1204 20:43:13.841235   46101 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1204 20:43:13.841576   46101 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1204 20:43:13.841592   46101 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1204 20:43:13.841602   46101 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1204 20:43:13.841609   46101 command_runner.go:130] > # always happen on a node reboot
	I1204 20:43:13.841989   46101 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1204 20:43:13.842014   46101 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1204 20:43:13.842024   46101 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1204 20:43:13.842031   46101 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1204 20:43:13.842113   46101 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1204 20:43:13.842124   46101 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1204 20:43:13.842135   46101 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1204 20:43:13.842334   46101 command_runner.go:130] > # internal_wipe = true
	I1204 20:43:13.842346   46101 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1204 20:43:13.842351   46101 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1204 20:43:13.842602   46101 command_runner.go:130] > # internal_repair = false
	I1204 20:43:13.842612   46101 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1204 20:43:13.842618   46101 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1204 20:43:13.842623   46101 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1204 20:43:13.842912   46101 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1204 20:43:13.842922   46101 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1204 20:43:13.842926   46101 command_runner.go:130] > [crio.api]
	I1204 20:43:13.842931   46101 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1204 20:43:13.843140   46101 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1204 20:43:13.843150   46101 command_runner.go:130] > # IP address on which the stream server will listen.
	I1204 20:43:13.843400   46101 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1204 20:43:13.843418   46101 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1204 20:43:13.843426   46101 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1204 20:43:13.843655   46101 command_runner.go:130] > # stream_port = "0"
	I1204 20:43:13.843665   46101 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1204 20:43:13.843970   46101 command_runner.go:130] > # stream_enable_tls = false
	I1204 20:43:13.843980   46101 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1204 20:43:13.844169   46101 command_runner.go:130] > # stream_idle_timeout = ""
	I1204 20:43:13.844191   46101 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1204 20:43:13.844201   46101 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1204 20:43:13.844207   46101 command_runner.go:130] > # minutes.
	I1204 20:43:13.844376   46101 command_runner.go:130] > # stream_tls_cert = ""
	I1204 20:43:13.844394   46101 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1204 20:43:13.844400   46101 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1204 20:43:13.844569   46101 command_runner.go:130] > # stream_tls_key = ""
	I1204 20:43:13.844588   46101 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1204 20:43:13.844598   46101 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1204 20:43:13.844615   46101 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1204 20:43:13.845014   46101 command_runner.go:130] > # stream_tls_ca = ""
	I1204 20:43:13.845038   46101 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1204 20:43:13.845047   46101 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1204 20:43:13.845057   46101 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1204 20:43:13.845065   46101 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1204 20:43:13.845075   46101 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1204 20:43:13.845085   46101 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1204 20:43:13.845094   46101 command_runner.go:130] > [crio.runtime]
	I1204 20:43:13.845104   46101 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1204 20:43:13.845116   46101 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1204 20:43:13.845123   46101 command_runner.go:130] > # "nofile=1024:2048"
	I1204 20:43:13.845134   46101 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1204 20:43:13.845144   46101 command_runner.go:130] > # default_ulimits = [
	I1204 20:43:13.845148   46101 command_runner.go:130] > # ]
	I1204 20:43:13.845156   46101 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1204 20:43:13.845161   46101 command_runner.go:130] > # no_pivot = false
	I1204 20:43:13.845169   46101 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1204 20:43:13.845179   46101 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1204 20:43:13.845187   46101 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1204 20:43:13.845200   46101 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1204 20:43:13.845215   46101 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1204 20:43:13.845229   46101 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1204 20:43:13.845240   46101 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1204 20:43:13.845252   46101 command_runner.go:130] > # Cgroup setting for conmon
	I1204 20:43:13.845269   46101 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1204 20:43:13.845279   46101 command_runner.go:130] > conmon_cgroup = "pod"
	I1204 20:43:13.845290   46101 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1204 20:43:13.845304   46101 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1204 20:43:13.845317   46101 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1204 20:43:13.845328   46101 command_runner.go:130] > conmon_env = [
	I1204 20:43:13.845337   46101 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1204 20:43:13.845355   46101 command_runner.go:130] > ]
	I1204 20:43:13.845367   46101 command_runner.go:130] > # Additional environment variables to set for all the
	I1204 20:43:13.845379   46101 command_runner.go:130] > # containers. These are overridden if set in the
	I1204 20:43:13.845391   46101 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1204 20:43:13.845399   46101 command_runner.go:130] > # default_env = [
	I1204 20:43:13.845416   46101 command_runner.go:130] > # ]
	I1204 20:43:13.845426   46101 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1204 20:43:13.845440   46101 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1204 20:43:13.845450   46101 command_runner.go:130] > # selinux = false
	I1204 20:43:13.845461   46101 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1204 20:43:13.845474   46101 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1204 20:43:13.845487   46101 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1204 20:43:13.845497   46101 command_runner.go:130] > # seccomp_profile = ""
	I1204 20:43:13.845506   46101 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1204 20:43:13.845519   46101 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1204 20:43:13.845532   46101 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1204 20:43:13.845543   46101 command_runner.go:130] > # which might increase security.
	I1204 20:43:13.845552   46101 command_runner.go:130] > # This option is currently deprecated,
	I1204 20:43:13.845564   46101 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1204 20:43:13.845580   46101 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1204 20:43:13.845593   46101 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1204 20:43:13.845604   46101 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1204 20:43:13.845619   46101 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1204 20:43:13.845632   46101 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1204 20:43:13.845644   46101 command_runner.go:130] > # This option supports live configuration reload.
	I1204 20:43:13.845655   46101 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1204 20:43:13.845668   46101 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1204 20:43:13.845679   46101 command_runner.go:130] > # the cgroup blockio controller.
	I1204 20:43:13.845689   46101 command_runner.go:130] > # blockio_config_file = ""
	I1204 20:43:13.845700   46101 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1204 20:43:13.845710   46101 command_runner.go:130] > # blockio parameters.
	I1204 20:43:13.845717   46101 command_runner.go:130] > # blockio_reload = false
	I1204 20:43:13.845732   46101 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1204 20:43:13.845741   46101 command_runner.go:130] > # irqbalance daemon.
	I1204 20:43:13.845750   46101 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1204 20:43:13.845763   46101 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1204 20:43:13.845774   46101 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1204 20:43:13.845787   46101 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1204 20:43:13.845804   46101 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1204 20:43:13.845818   46101 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1204 20:43:13.845829   46101 command_runner.go:130] > # This option supports live configuration reload.
	I1204 20:43:13.845840   46101 command_runner.go:130] > # rdt_config_file = ""
	I1204 20:43:13.845852   46101 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1204 20:43:13.845862   46101 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1204 20:43:13.845888   46101 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1204 20:43:13.845898   46101 command_runner.go:130] > # separate_pull_cgroup = ""
	I1204 20:43:13.845909   46101 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1204 20:43:13.845922   46101 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1204 20:43:13.845932   46101 command_runner.go:130] > # will be added.
	I1204 20:43:13.845938   46101 command_runner.go:130] > # default_capabilities = [
	I1204 20:43:13.845952   46101 command_runner.go:130] > # 	"CHOWN",
	I1204 20:43:13.845961   46101 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1204 20:43:13.845967   46101 command_runner.go:130] > # 	"FSETID",
	I1204 20:43:13.845974   46101 command_runner.go:130] > # 	"FOWNER",
	I1204 20:43:13.845983   46101 command_runner.go:130] > # 	"SETGID",
	I1204 20:43:13.845991   46101 command_runner.go:130] > # 	"SETUID",
	I1204 20:43:13.846001   46101 command_runner.go:130] > # 	"SETPCAP",
	I1204 20:43:13.846008   46101 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1204 20:43:13.846017   46101 command_runner.go:130] > # 	"KILL",
	I1204 20:43:13.846022   46101 command_runner.go:130] > # ]
	I1204 20:43:13.846038   46101 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1204 20:43:13.846054   46101 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1204 20:43:13.846066   46101 command_runner.go:130] > # add_inheritable_capabilities = false
	I1204 20:43:13.846082   46101 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1204 20:43:13.846096   46101 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1204 20:43:13.846102   46101 command_runner.go:130] > default_sysctls = [
	I1204 20:43:13.846114   46101 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1204 20:43:13.846121   46101 command_runner.go:130] > ]
	I1204 20:43:13.846129   46101 command_runner.go:130] > # List of devices on the host that a
	I1204 20:43:13.846143   46101 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1204 20:43:13.846150   46101 command_runner.go:130] > # allowed_devices = [
	I1204 20:43:13.846160   46101 command_runner.go:130] > # 	"/dev/fuse",
	I1204 20:43:13.846165   46101 command_runner.go:130] > # ]
	I1204 20:43:13.846173   46101 command_runner.go:130] > # List of additional devices. specified as
	I1204 20:43:13.846188   46101 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1204 20:43:13.846200   46101 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1204 20:43:13.846209   46101 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1204 20:43:13.846220   46101 command_runner.go:130] > # additional_devices = [
	I1204 20:43:13.846225   46101 command_runner.go:130] > # ]
	I1204 20:43:13.846237   46101 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1204 20:43:13.846256   46101 command_runner.go:130] > # cdi_spec_dirs = [
	I1204 20:43:13.846266   46101 command_runner.go:130] > # 	"/etc/cdi",
	I1204 20:43:13.846273   46101 command_runner.go:130] > # 	"/var/run/cdi",
	I1204 20:43:13.846282   46101 command_runner.go:130] > # ]
	I1204 20:43:13.846292   46101 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1204 20:43:13.846305   46101 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1204 20:43:13.846311   46101 command_runner.go:130] > # Defaults to false.
	I1204 20:43:13.846323   46101 command_runner.go:130] > # device_ownership_from_security_context = false
	I1204 20:43:13.846336   46101 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1204 20:43:13.846349   46101 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1204 20:43:13.846359   46101 command_runner.go:130] > # hooks_dir = [
	I1204 20:43:13.846366   46101 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1204 20:43:13.846375   46101 command_runner.go:130] > # ]
	I1204 20:43:13.846385   46101 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1204 20:43:13.846398   46101 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1204 20:43:13.846413   46101 command_runner.go:130] > # its default mounts from the following two files:
	I1204 20:43:13.846418   46101 command_runner.go:130] > #
	I1204 20:43:13.846433   46101 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1204 20:43:13.846446   46101 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1204 20:43:13.846459   46101 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1204 20:43:13.846467   46101 command_runner.go:130] > #
	I1204 20:43:13.846479   46101 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1204 20:43:13.846493   46101 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1204 20:43:13.846507   46101 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1204 20:43:13.846519   46101 command_runner.go:130] > #      only add mounts it finds in this file.
	I1204 20:43:13.846527   46101 command_runner.go:130] > #
	I1204 20:43:13.846535   46101 command_runner.go:130] > # default_mounts_file = ""
	I1204 20:43:13.846546   46101 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1204 20:43:13.846560   46101 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1204 20:43:13.846571   46101 command_runner.go:130] > pids_limit = 1024
	I1204 20:43:13.846582   46101 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1204 20:43:13.846595   46101 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1204 20:43:13.846609   46101 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1204 20:43:13.846626   46101 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1204 20:43:13.846636   46101 command_runner.go:130] > # log_size_max = -1
	I1204 20:43:13.846647   46101 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1204 20:43:13.846658   46101 command_runner.go:130] > # log_to_journald = false
	I1204 20:43:13.846668   46101 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1204 20:43:13.846679   46101 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1204 20:43:13.846695   46101 command_runner.go:130] > # Path to directory for container attach sockets.
	I1204 20:43:13.846708   46101 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1204 20:43:13.846719   46101 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1204 20:43:13.846730   46101 command_runner.go:130] > # bind_mount_prefix = ""
	I1204 20:43:13.846742   46101 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1204 20:43:13.846752   46101 command_runner.go:130] > # read_only = false
	I1204 20:43:13.846765   46101 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1204 20:43:13.846779   46101 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1204 20:43:13.846789   46101 command_runner.go:130] > # live configuration reload.
	I1204 20:43:13.846795   46101 command_runner.go:130] > # log_level = "info"
	I1204 20:43:13.846807   46101 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1204 20:43:13.846818   46101 command_runner.go:130] > # This option supports live configuration reload.
	I1204 20:43:13.846828   46101 command_runner.go:130] > # log_filter = ""
	I1204 20:43:13.846839   46101 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1204 20:43:13.846854   46101 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1204 20:43:13.846864   46101 command_runner.go:130] > # separated by comma.
	I1204 20:43:13.846876   46101 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1204 20:43:13.846886   46101 command_runner.go:130] > # uid_mappings = ""
	I1204 20:43:13.846896   46101 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1204 20:43:13.846908   46101 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1204 20:43:13.846917   46101 command_runner.go:130] > # separated by comma.
	I1204 20:43:13.846932   46101 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1204 20:43:13.846942   46101 command_runner.go:130] > # gid_mappings = ""
	I1204 20:43:13.846952   46101 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1204 20:43:13.846966   46101 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1204 20:43:13.846983   46101 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1204 20:43:13.846999   46101 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1204 20:43:13.847006   46101 command_runner.go:130] > # minimum_mappable_uid = -1
	I1204 20:43:13.847019   46101 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1204 20:43:13.847032   46101 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1204 20:43:13.847045   46101 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1204 20:43:13.847060   46101 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1204 20:43:13.847070   46101 command_runner.go:130] > # minimum_mappable_gid = -1
	I1204 20:43:13.847080   46101 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1204 20:43:13.847092   46101 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1204 20:43:13.847103   46101 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1204 20:43:13.847129   46101 command_runner.go:130] > # ctr_stop_timeout = 30
	I1204 20:43:13.847142   46101 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1204 20:43:13.847151   46101 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1204 20:43:13.847162   46101 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1204 20:43:13.847172   46101 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1204 20:43:13.847181   46101 command_runner.go:130] > drop_infra_ctr = false
	I1204 20:43:13.847190   46101 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1204 20:43:13.847197   46101 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1204 20:43:13.847206   46101 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1204 20:43:13.847213   46101 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1204 20:43:13.847220   46101 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1204 20:43:13.847229   46101 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1204 20:43:13.847237   46101 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1204 20:43:13.847244   46101 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1204 20:43:13.847251   46101 command_runner.go:130] > # shared_cpuset = ""
	I1204 20:43:13.847256   46101 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1204 20:43:13.847264   46101 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1204 20:43:13.847267   46101 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1204 20:43:13.847274   46101 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1204 20:43:13.847281   46101 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1204 20:43:13.847286   46101 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1204 20:43:13.847294   46101 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1204 20:43:13.847300   46101 command_runner.go:130] > # enable_criu_support = false
	I1204 20:43:13.847305   46101 command_runner.go:130] > # Enable/disable the generation of the container,
	I1204 20:43:13.847313   46101 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1204 20:43:13.847318   46101 command_runner.go:130] > # enable_pod_events = false
	I1204 20:43:13.847326   46101 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1204 20:43:13.847334   46101 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1204 20:43:13.847339   46101 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1204 20:43:13.847346   46101 command_runner.go:130] > # default_runtime = "runc"
	I1204 20:43:13.847352   46101 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1204 20:43:13.847361   46101 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1204 20:43:13.847389   46101 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1204 20:43:13.847407   46101 command_runner.go:130] > # creation as a file is not desired either.
	I1204 20:43:13.847417   46101 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1204 20:43:13.847427   46101 command_runner.go:130] > # the hostname is being managed dynamically.
	I1204 20:43:13.847434   46101 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1204 20:43:13.847437   46101 command_runner.go:130] > # ]
	I1204 20:43:13.847444   46101 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1204 20:43:13.847452   46101 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1204 20:43:13.847461   46101 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1204 20:43:13.847468   46101 command_runner.go:130] > # Each entry in the table should follow the format:
	I1204 20:43:13.847471   46101 command_runner.go:130] > #
	I1204 20:43:13.847478   46101 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1204 20:43:13.847482   46101 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1204 20:43:13.847508   46101 command_runner.go:130] > # runtime_type = "oci"
	I1204 20:43:13.847515   46101 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1204 20:43:13.847520   46101 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1204 20:43:13.847526   46101 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1204 20:43:13.847531   46101 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1204 20:43:13.847537   46101 command_runner.go:130] > # monitor_env = []
	I1204 20:43:13.847542   46101 command_runner.go:130] > # privileged_without_host_devices = false
	I1204 20:43:13.847548   46101 command_runner.go:130] > # allowed_annotations = []
	I1204 20:43:13.847553   46101 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1204 20:43:13.847559   46101 command_runner.go:130] > # Where:
	I1204 20:43:13.847565   46101 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1204 20:43:13.847573   46101 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1204 20:43:13.847581   46101 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1204 20:43:13.847587   46101 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1204 20:43:13.847594   46101 command_runner.go:130] > #   in $PATH.
	I1204 20:43:13.847601   46101 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1204 20:43:13.847608   46101 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1204 20:43:13.847614   46101 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1204 20:43:13.847620   46101 command_runner.go:130] > #   state.
	I1204 20:43:13.847628   46101 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1204 20:43:13.847636   46101 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1204 20:43:13.847645   46101 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1204 20:43:13.847652   46101 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1204 20:43:13.847658   46101 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1204 20:43:13.847666   46101 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1204 20:43:13.847673   46101 command_runner.go:130] > #   The currently recognized values are:
	I1204 20:43:13.847679   46101 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1204 20:43:13.847688   46101 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1204 20:43:13.847699   46101 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1204 20:43:13.847707   46101 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1204 20:43:13.847716   46101 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1204 20:43:13.847725   46101 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1204 20:43:13.847734   46101 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1204 20:43:13.847742   46101 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1204 20:43:13.847748   46101 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1204 20:43:13.847756   46101 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1204 20:43:13.847766   46101 command_runner.go:130] > #   deprecated option "conmon".
	I1204 20:43:13.847781   46101 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1204 20:43:13.847792   46101 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1204 20:43:13.847805   46101 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1204 20:43:13.847816   46101 command_runner.go:130] > #   should be moved to the container's cgroup
	I1204 20:43:13.847831   46101 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I1204 20:43:13.847842   46101 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1204 20:43:13.847856   46101 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1204 20:43:13.847867   46101 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1204 20:43:13.847875   46101 command_runner.go:130] > #
	I1204 20:43:13.847882   46101 command_runner.go:130] > # Using the seccomp notifier feature:
	I1204 20:43:13.847888   46101 command_runner.go:130] > #
	I1204 20:43:13.847895   46101 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1204 20:43:13.847904   46101 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1204 20:43:13.847909   46101 command_runner.go:130] > #
	I1204 20:43:13.847915   46101 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1204 20:43:13.847923   46101 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1204 20:43:13.847929   46101 command_runner.go:130] > #
	I1204 20:43:13.847937   46101 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1204 20:43:13.847943   46101 command_runner.go:130] > # feature.
	I1204 20:43:13.847946   46101 command_runner.go:130] > #
	I1204 20:43:13.847955   46101 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1204 20:43:13.847964   46101 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1204 20:43:13.847970   46101 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1204 20:43:13.847979   46101 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1204 20:43:13.847987   46101 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1204 20:43:13.847993   46101 command_runner.go:130] > #
	I1204 20:43:13.847999   46101 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1204 20:43:13.848010   46101 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1204 20:43:13.848017   46101 command_runner.go:130] > #
	I1204 20:43:13.848027   46101 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1204 20:43:13.848035   46101 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1204 20:43:13.848041   46101 command_runner.go:130] > #
	I1204 20:43:13.848049   46101 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1204 20:43:13.848061   46101 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1204 20:43:13.848070   46101 command_runner.go:130] > # limitation.
	I1204 20:43:13.848080   46101 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1204 20:43:13.848086   46101 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1204 20:43:13.848095   46101 command_runner.go:130] > runtime_type = "oci"
	I1204 20:43:13.848105   46101 command_runner.go:130] > runtime_root = "/run/runc"
	I1204 20:43:13.848113   46101 command_runner.go:130] > runtime_config_path = ""
	I1204 20:43:13.848120   46101 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1204 20:43:13.848124   46101 command_runner.go:130] > monitor_cgroup = "pod"
	I1204 20:43:13.848132   46101 command_runner.go:130] > monitor_exec_cgroup = ""
	I1204 20:43:13.848136   46101 command_runner.go:130] > monitor_env = [
	I1204 20:43:13.848142   46101 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1204 20:43:13.848145   46101 command_runner.go:130] > ]
	I1204 20:43:13.848153   46101 command_runner.go:130] > privileged_without_host_devices = false
	I1204 20:43:13.848160   46101 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1204 20:43:13.848168   46101 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1204 20:43:13.848175   46101 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1204 20:43:13.848182   46101 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1204 20:43:13.848194   46101 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1204 20:43:13.848201   46101 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1204 20:43:13.848212   46101 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1204 20:43:13.848222   46101 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1204 20:43:13.848230   46101 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1204 20:43:13.848236   46101 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1204 20:43:13.848240   46101 command_runner.go:130] > # Example:
	I1204 20:43:13.848249   46101 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1204 20:43:13.848254   46101 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1204 20:43:13.848258   46101 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1204 20:43:13.848263   46101 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1204 20:43:13.848267   46101 command_runner.go:130] > # cpuset = 0
	I1204 20:43:13.848270   46101 command_runner.go:130] > # cpushares = "0-1"
	I1204 20:43:13.848274   46101 command_runner.go:130] > # Where:
	I1204 20:43:13.848281   46101 command_runner.go:130] > # The workload name is workload-type.
	I1204 20:43:13.848288   46101 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1204 20:43:13.848293   46101 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1204 20:43:13.848298   46101 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1204 20:43:13.848305   46101 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1204 20:43:13.848310   46101 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1204 20:43:13.848314   46101 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1204 20:43:13.848321   46101 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1204 20:43:13.848327   46101 command_runner.go:130] > # Default value is set to true
	I1204 20:43:13.848332   46101 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1204 20:43:13.848337   46101 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1204 20:43:13.848343   46101 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1204 20:43:13.848347   46101 command_runner.go:130] > # Default value is set to 'false'
	I1204 20:43:13.848354   46101 command_runner.go:130] > # disable_hostport_mapping = false
	I1204 20:43:13.848360   46101 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1204 20:43:13.848365   46101 command_runner.go:130] > #
	I1204 20:43:13.848371   46101 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1204 20:43:13.848379   46101 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1204 20:43:13.848388   46101 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1204 20:43:13.848394   46101 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1204 20:43:13.848406   46101 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1204 20:43:13.848412   46101 command_runner.go:130] > [crio.image]
	I1204 20:43:13.848418   46101 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1204 20:43:13.848425   46101 command_runner.go:130] > # default_transport = "docker://"
	I1204 20:43:13.848431   46101 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1204 20:43:13.848439   46101 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1204 20:43:13.848445   46101 command_runner.go:130] > # global_auth_file = ""
	I1204 20:43:13.848450   46101 command_runner.go:130] > # The image used to instantiate infra containers.
	I1204 20:43:13.848457   46101 command_runner.go:130] > # This option supports live configuration reload.
	I1204 20:43:13.848461   46101 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1204 20:43:13.848470   46101 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1204 20:43:13.848477   46101 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1204 20:43:13.848482   46101 command_runner.go:130] > # This option supports live configuration reload.
	I1204 20:43:13.848489   46101 command_runner.go:130] > # pause_image_auth_file = ""
	I1204 20:43:13.848495   46101 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1204 20:43:13.848503   46101 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1204 20:43:13.848515   46101 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1204 20:43:13.848527   46101 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1204 20:43:13.848537   46101 command_runner.go:130] > # pause_command = "/pause"
	I1204 20:43:13.848548   46101 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1204 20:43:13.848560   46101 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1204 20:43:13.848571   46101 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1204 20:43:13.848583   46101 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1204 20:43:13.848595   46101 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1204 20:43:13.848607   46101 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1204 20:43:13.848617   46101 command_runner.go:130] > # pinned_images = [
	I1204 20:43:13.848625   46101 command_runner.go:130] > # ]
	I1204 20:43:13.848635   46101 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1204 20:43:13.848647   46101 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1204 20:43:13.848660   46101 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1204 20:43:13.848675   46101 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1204 20:43:13.848686   46101 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1204 20:43:13.848697   46101 command_runner.go:130] > # signature_policy = ""
	I1204 20:43:13.848705   46101 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1204 20:43:13.848723   46101 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1204 20:43:13.848738   46101 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1204 20:43:13.848751   46101 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1204 20:43:13.848760   46101 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1204 20:43:13.848768   46101 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1204 20:43:13.848781   46101 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1204 20:43:13.848794   46101 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1204 20:43:13.848804   46101 command_runner.go:130] > # changing them here.
	I1204 20:43:13.848812   46101 command_runner.go:130] > # insecure_registries = [
	I1204 20:43:13.848821   46101 command_runner.go:130] > # ]
	I1204 20:43:13.848831   46101 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1204 20:43:13.848842   46101 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1204 20:43:13.848852   46101 command_runner.go:130] > # image_volumes = "mkdir"
	I1204 20:43:13.848863   46101 command_runner.go:130] > # Temporary directory to use for storing big files
	I1204 20:43:13.848874   46101 command_runner.go:130] > # big_files_temporary_dir = ""
	I1204 20:43:13.848886   46101 command_runner.go:130] > # The crio.network table containers settings pertaining to the management of
	I1204 20:43:13.848894   46101 command_runner.go:130] > # CNI plugins.
	I1204 20:43:13.848905   46101 command_runner.go:130] > [crio.network]
	I1204 20:43:13.848918   46101 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1204 20:43:13.848935   46101 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1204 20:43:13.848945   46101 command_runner.go:130] > # cni_default_network = ""
	I1204 20:43:13.848956   46101 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1204 20:43:13.848966   46101 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1204 20:43:13.848978   46101 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1204 20:43:13.848987   46101 command_runner.go:130] > # plugin_dirs = [
	I1204 20:43:13.848993   46101 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1204 20:43:13.848997   46101 command_runner.go:130] > # ]
	I1204 20:43:13.849005   46101 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1204 20:43:13.849009   46101 command_runner.go:130] > [crio.metrics]
	I1204 20:43:13.849016   46101 command_runner.go:130] > # Globally enable or disable metrics support.
	I1204 20:43:13.849022   46101 command_runner.go:130] > enable_metrics = true
	I1204 20:43:13.849027   46101 command_runner.go:130] > # Specify enabled metrics collectors.
	I1204 20:43:13.849032   46101 command_runner.go:130] > # Per default all metrics are enabled.
	I1204 20:43:13.849038   46101 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1204 20:43:13.849047   46101 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1204 20:43:13.849052   46101 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1204 20:43:13.849059   46101 command_runner.go:130] > # metrics_collectors = [
	I1204 20:43:13.849062   46101 command_runner.go:130] > # 	"operations",
	I1204 20:43:13.849067   46101 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1204 20:43:13.849074   46101 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1204 20:43:13.849080   46101 command_runner.go:130] > # 	"operations_errors",
	I1204 20:43:13.849087   46101 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1204 20:43:13.849091   46101 command_runner.go:130] > # 	"image_pulls_by_name",
	I1204 20:43:13.849097   46101 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1204 20:43:13.849102   46101 command_runner.go:130] > # 	"image_pulls_failures",
	I1204 20:43:13.849108   46101 command_runner.go:130] > # 	"image_pulls_successes",
	I1204 20:43:13.849112   46101 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1204 20:43:13.849118   46101 command_runner.go:130] > # 	"image_layer_reuse",
	I1204 20:43:13.849123   46101 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1204 20:43:13.849129   46101 command_runner.go:130] > # 	"containers_oom_total",
	I1204 20:43:13.849133   46101 command_runner.go:130] > # 	"containers_oom",
	I1204 20:43:13.849139   46101 command_runner.go:130] > # 	"processes_defunct",
	I1204 20:43:13.849143   46101 command_runner.go:130] > # 	"operations_total",
	I1204 20:43:13.849147   46101 command_runner.go:130] > # 	"operations_latency_seconds",
	I1204 20:43:13.849155   46101 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1204 20:43:13.849162   46101 command_runner.go:130] > # 	"operations_errors_total",
	I1204 20:43:13.849166   46101 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1204 20:43:13.849173   46101 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1204 20:43:13.849177   46101 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1204 20:43:13.849184   46101 command_runner.go:130] > # 	"image_pulls_success_total",
	I1204 20:43:13.849188   46101 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1204 20:43:13.849194   46101 command_runner.go:130] > # 	"containers_oom_count_total",
	I1204 20:43:13.849200   46101 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1204 20:43:13.849207   46101 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1204 20:43:13.849213   46101 command_runner.go:130] > # ]
	I1204 20:43:13.849220   46101 command_runner.go:130] > # The port on which the metrics server will listen.
	I1204 20:43:13.849224   46101 command_runner.go:130] > # metrics_port = 9090
	I1204 20:43:13.849231   46101 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1204 20:43:13.849235   46101 command_runner.go:130] > # metrics_socket = ""
	I1204 20:43:13.849242   46101 command_runner.go:130] > # The certificate for the secure metrics server.
	I1204 20:43:13.849247   46101 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1204 20:43:13.849255   46101 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1204 20:43:13.849261   46101 command_runner.go:130] > # certificate on any modification event.
	I1204 20:43:13.849265   46101 command_runner.go:130] > # metrics_cert = ""
	I1204 20:43:13.849272   46101 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1204 20:43:13.849277   46101 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1204 20:43:13.849282   46101 command_runner.go:130] > # metrics_key = ""
	I1204 20:43:13.849288   46101 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1204 20:43:13.849294   46101 command_runner.go:130] > [crio.tracing]
	I1204 20:43:13.849300   46101 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1204 20:43:13.849306   46101 command_runner.go:130] > # enable_tracing = false
	I1204 20:43:13.849312   46101 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1204 20:43:13.849319   46101 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1204 20:43:13.849325   46101 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1204 20:43:13.849334   46101 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1204 20:43:13.849340   46101 command_runner.go:130] > # CRI-O NRI configuration.
	I1204 20:43:13.849344   46101 command_runner.go:130] > [crio.nri]
	I1204 20:43:13.849350   46101 command_runner.go:130] > # Globally enable or disable NRI.
	I1204 20:43:13.849354   46101 command_runner.go:130] > # enable_nri = false
	I1204 20:43:13.849361   46101 command_runner.go:130] > # NRI socket to listen on.
	I1204 20:43:13.849365   46101 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1204 20:43:13.849372   46101 command_runner.go:130] > # NRI plugin directory to use.
	I1204 20:43:13.849377   46101 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1204 20:43:13.849384   46101 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1204 20:43:13.849388   46101 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1204 20:43:13.849395   46101 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1204 20:43:13.849407   46101 command_runner.go:130] > # nri_disable_connections = false
	I1204 20:43:13.849414   46101 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1204 20:43:13.849420   46101 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1204 20:43:13.849425   46101 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1204 20:43:13.849432   46101 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1204 20:43:13.849439   46101 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1204 20:43:13.849445   46101 command_runner.go:130] > [crio.stats]
	I1204 20:43:13.849450   46101 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1204 20:43:13.849458   46101 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1204 20:43:13.849462   46101 command_runner.go:130] > # stats_collection_period = 0
	I1204 20:43:13.849484   46101 command_runner.go:130] ! time="2024-12-04 20:43:13.798105321Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1204 20:43:13.849500   46101 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1204 20:43:13.849600   46101 cni.go:84] Creating CNI manager for ""
	I1204 20:43:13.849613   46101 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1204 20:43:13.849621   46101 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 20:43:13.849641   46101 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.127 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-980367 NodeName:multinode-980367 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.127"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.127 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 20:43:13.849751   46101 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.127
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-980367"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.127"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.127"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1204 20:43:13.849824   46101 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 20:43:13.859635   46101 command_runner.go:130] > kubeadm
	I1204 20:43:13.859650   46101 command_runner.go:130] > kubectl
	I1204 20:43:13.859654   46101 command_runner.go:130] > kubelet
	I1204 20:43:13.859670   46101 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 20:43:13.859722   46101 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 20:43:13.868835   46101 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1204 20:43:13.885116   46101 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 20:43:13.900825   46101 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2296 bytes)
	I1204 20:43:13.916894   46101 ssh_runner.go:195] Run: grep 192.168.39.127	control-plane.minikube.internal$ /etc/hosts
	I1204 20:43:13.920665   46101 command_runner.go:130] > 192.168.39.127	control-plane.minikube.internal
	I1204 20:43:13.920729   46101 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 20:43:14.058246   46101 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 20:43:14.073104   46101 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/multinode-980367 for IP: 192.168.39.127
	I1204 20:43:14.073132   46101 certs.go:194] generating shared ca certs ...
	I1204 20:43:14.073152   46101 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:43:14.073337   46101 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 20:43:14.073399   46101 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 20:43:14.073413   46101 certs.go:256] generating profile certs ...
	I1204 20:43:14.073507   46101 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/multinode-980367/client.key
	I1204 20:43:14.073590   46101 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/multinode-980367/apiserver.key.dd041cb4
	I1204 20:43:14.073647   46101 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/multinode-980367/proxy-client.key
	I1204 20:43:14.073660   46101 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1204 20:43:14.073680   46101 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1204 20:43:14.073700   46101 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1204 20:43:14.073723   46101 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1204 20:43:14.073742   46101 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/multinode-980367/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1204 20:43:14.073762   46101 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/multinode-980367/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1204 20:43:14.073782   46101 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/multinode-980367/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1204 20:43:14.073813   46101 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/multinode-980367/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1204 20:43:14.073882   46101 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 20:43:14.073923   46101 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 20:43:14.073940   46101 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 20:43:14.073974   46101 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 20:43:14.074007   46101 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 20:43:14.074039   46101 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 20:43:14.074095   46101 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 20:43:14.074134   46101 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> /usr/share/ca-certificates/177432.pem
	I1204 20:43:14.074158   46101 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:43:14.074184   46101 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem -> /usr/share/ca-certificates/17743.pem
	I1204 20:43:14.074782   46101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 20:43:14.101173   46101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 20:43:14.132530   46101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 20:43:14.155556   46101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 20:43:14.178369   46101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/multinode-980367/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1204 20:43:14.200837   46101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/multinode-980367/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 20:43:14.223065   46101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/multinode-980367/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 20:43:14.245554   46101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/multinode-980367/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 20:43:14.266657   46101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 20:43:14.288083   46101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 20:43:14.310581   46101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 20:43:14.331475   46101 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 20:43:14.346690   46101 ssh_runner.go:195] Run: openssl version
	I1204 20:43:14.352034   46101 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1204 20:43:14.352119   46101 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 20:43:14.361484   46101 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 20:43:14.365420   46101 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 20:43:14.365443   46101 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 20:43:14.365473   46101 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 20:43:14.370475   46101 command_runner.go:130] > 51391683
	I1204 20:43:14.370517   46101 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 20:43:14.378666   46101 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 20:43:14.388081   46101 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 20:43:14.391950   46101 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 20:43:14.392007   46101 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 20:43:14.392044   46101 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 20:43:14.396863   46101 command_runner.go:130] > 3ec20f2e
	I1204 20:43:14.397064   46101 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 20:43:14.405264   46101 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 20:43:14.414656   46101 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:43:14.418585   46101 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:43:14.418648   46101 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:43:14.418687   46101 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:43:14.423871   46101 command_runner.go:130] > b5213941
	I1204 20:43:14.423923   46101 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 20:43:14.432360   46101 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 20:43:14.436314   46101 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 20:43:14.436330   46101 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1204 20:43:14.436336   46101 command_runner.go:130] > Device: 253,1	Inode: 8385582     Links: 1
	I1204 20:43:14.436342   46101 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1204 20:43:14.436351   46101 command_runner.go:130] > Access: 2024-12-04 20:36:21.503818560 +0000
	I1204 20:43:14.436355   46101 command_runner.go:130] > Modify: 2024-12-04 20:36:21.503818560 +0000
	I1204 20:43:14.436360   46101 command_runner.go:130] > Change: 2024-12-04 20:36:21.503818560 +0000
	I1204 20:43:14.436367   46101 command_runner.go:130] >  Birth: 2024-12-04 20:36:21.503818560 +0000
	I1204 20:43:14.436503   46101 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 20:43:14.441876   46101 command_runner.go:130] > Certificate will not expire
	I1204 20:43:14.441922   46101 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 20:43:14.446947   46101 command_runner.go:130] > Certificate will not expire
	I1204 20:43:14.447244   46101 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 20:43:14.452238   46101 command_runner.go:130] > Certificate will not expire
	I1204 20:43:14.452285   46101 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 20:43:14.457190   46101 command_runner.go:130] > Certificate will not expire
	I1204 20:43:14.457243   46101 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 20:43:14.462221   46101 command_runner.go:130] > Certificate will not expire
	I1204 20:43:14.462276   46101 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1204 20:43:14.467275   46101 command_runner.go:130] > Certificate will not expire
	I1204 20:43:14.467323   46101 kubeadm.go:392] StartCluster: {Name:multinode-980367 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
2 ClusterName:multinode-980367 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.127 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.76 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.210 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-
dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 20:43:14.467469   46101 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 20:43:14.467528   46101 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 20:43:14.503140   46101 command_runner.go:130] > 077fccc1f632ca852f24db7b5953f09de1c43112bb437fa5bdd91ac2daa9bee0
	I1204 20:43:14.503166   46101 command_runner.go:130] > 5d654f9cdac10f47aeef6df7485cd8c7f1f6d5a8c76ccbe1e687dd980c39491d
	I1204 20:43:14.503177   46101 command_runner.go:130] > efa8b788446b5387a9837090d5a65fd1bc71871437e3db37b5bc2dd2d5922f87
	I1204 20:43:14.503193   46101 command_runner.go:130] > a82c6aaac37b0760734674325ad3191b0c69fafe3d652d39ecdec503e8f0dc99
	I1204 20:43:14.503204   46101 command_runner.go:130] > f14af9f1a148e7fd69cd047e47b95ff063322bec1bb9165e0e459475e160ed15
	I1204 20:43:14.503214   46101 command_runner.go:130] > 84f732075a321ebc600dd924ae96154ee708dbe3e7cdfc210086ac47b367cac4
	I1204 20:43:14.503224   46101 command_runner.go:130] > 711a96b7c814bede7e66aff6b57ea4b2aa827e45996ec59e5e9eae96fad83860
	I1204 20:43:14.503240   46101 command_runner.go:130] > 70c10cf60e07a7c5402ef2f6d04b1a921902d8c8b070391a06d3fc3c14ce1a69
	I1204 20:43:14.503250   46101 command_runner.go:130] > 74ed511efd3429f00bdb97c64fcbb18681ed16d20976ffd0ec07c5c9f0406611
	I1204 20:43:14.503274   46101 cri.go:89] found id: "077fccc1f632ca852f24db7b5953f09de1c43112bb437fa5bdd91ac2daa9bee0"
	I1204 20:43:14.503287   46101 cri.go:89] found id: "5d654f9cdac10f47aeef6df7485cd8c7f1f6d5a8c76ccbe1e687dd980c39491d"
	I1204 20:43:14.503295   46101 cri.go:89] found id: "efa8b788446b5387a9837090d5a65fd1bc71871437e3db37b5bc2dd2d5922f87"
	I1204 20:43:14.503301   46101 cri.go:89] found id: "a82c6aaac37b0760734674325ad3191b0c69fafe3d652d39ecdec503e8f0dc99"
	I1204 20:43:14.503308   46101 cri.go:89] found id: "f14af9f1a148e7fd69cd047e47b95ff063322bec1bb9165e0e459475e160ed15"
	I1204 20:43:14.503313   46101 cri.go:89] found id: "84f732075a321ebc600dd924ae96154ee708dbe3e7cdfc210086ac47b367cac4"
	I1204 20:43:14.503320   46101 cri.go:89] found id: "711a96b7c814bede7e66aff6b57ea4b2aa827e45996ec59e5e9eae96fad83860"
	I1204 20:43:14.503325   46101 cri.go:89] found id: "70c10cf60e07a7c5402ef2f6d04b1a921902d8c8b070391a06d3fc3c14ce1a69"
	I1204 20:43:14.503332   46101 cri.go:89] found id: "74ed511efd3429f00bdb97c64fcbb18681ed16d20976ffd0ec07c5c9f0406611"
	I1204 20:43:14.503341   46101 cri.go:89] found id: ""
	I1204 20:43:14.503405   46101 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-980367 -n multinode-980367
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-980367 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (144.99s)
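For reference, the certificate and container checks that appear near the end of the log above can be repeated by hand against the node. This is a minimal sketch, not part of the test run: it assumes the multinode-980367 profile still exists and uses the default /var/lib/minikube/certs layout shown in the log.

	# open a shell on the control-plane node of this profile
	out/minikube-linux-amd64 ssh -p multinode-980367
	# inside the node: exits non-zero if the cert expires within 24h (86400s), matching the -checkend calls in the log
	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	# inside the node: list kube-system container IDs known to CRI-O, the same query StartCluster issues above
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system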

                                                
                                    
x
+
TestPreload (168.51s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-464116 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E1204 20:52:26.277979   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/functional-763517/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-464116 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m37.885914818s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-464116 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-464116 image pull gcr.io/k8s-minikube/busybox: (2.530023962s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-464116
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-464116: (6.551078669s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-464116 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-464116 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (58.616804955s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-464116 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:629: *** TestPreload FAILED at 2024-12-04 20:54:02.340656964 +0000 UTC m=+3691.440385393
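The failing sequence is short enough to replay by hand. Below is a condensed sketch of the same steps the test drives (profile name and flags taken from the runs above, with the logging flags dropped), ending with the check that failed in this run:

	# start a non-preloaded v1.24.4 cluster, pull an extra image, then stop and restart it
	out/minikube-linux-amd64 start -p test-preload-464116 --memory=2200 --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.24.4
	out/minikube-linux-amd64 -p test-preload-464116 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p test-preload-464116
	out/minikube-linux-amd64 start -p test-preload-464116 --memory=2200 --driver=kvm2 --container-runtime=crio
	# the manually pulled image should survive the restart; in this run it was missing from the list
	out/minikube-linux-amd64 -p test-preload-464116 image list | grep k8s-minikube/busybox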
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-464116 -n test-preload-464116
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-464116 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-464116 logs -n 25: (1.004750099s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-980367 ssh -n                                                                 | multinode-980367     | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:38 UTC |
	|         | multinode-980367-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-980367 ssh -n multinode-980367 sudo cat                                       | multinode-980367     | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:38 UTC |
	|         | /home/docker/cp-test_multinode-980367-m03_multinode-980367.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-980367 cp multinode-980367-m03:/home/docker/cp-test.txt                       | multinode-980367     | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:38 UTC |
	|         | multinode-980367-m02:/home/docker/cp-test_multinode-980367-m03_multinode-980367-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-980367 ssh -n                                                                 | multinode-980367     | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:38 UTC |
	|         | multinode-980367-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-980367 ssh -n multinode-980367-m02 sudo cat                                   | multinode-980367     | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:38 UTC |
	|         | /home/docker/cp-test_multinode-980367-m03_multinode-980367-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-980367 node stop m03                                                          | multinode-980367     | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:38 UTC |
	| node    | multinode-980367 node start                                                             | multinode-980367     | jenkins | v1.34.0 | 04 Dec 24 20:38 UTC | 04 Dec 24 20:39 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-980367                                                                | multinode-980367     | jenkins | v1.34.0 | 04 Dec 24 20:39 UTC |                     |
	| stop    | -p multinode-980367                                                                     | multinode-980367     | jenkins | v1.34.0 | 04 Dec 24 20:39 UTC |                     |
	| start   | -p multinode-980367                                                                     | multinode-980367     | jenkins | v1.34.0 | 04 Dec 24 20:41 UTC | 04 Dec 24 20:44 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-980367                                                                | multinode-980367     | jenkins | v1.34.0 | 04 Dec 24 20:44 UTC |                     |
	| node    | multinode-980367 node delete                                                            | multinode-980367     | jenkins | v1.34.0 | 04 Dec 24 20:45 UTC | 04 Dec 24 20:45 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-980367 stop                                                                   | multinode-980367     | jenkins | v1.34.0 | 04 Dec 24 20:45 UTC |                     |
	| start   | -p multinode-980367                                                                     | multinode-980367     | jenkins | v1.34.0 | 04 Dec 24 20:47 UTC | 04 Dec 24 20:50 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-980367                                                                | multinode-980367     | jenkins | v1.34.0 | 04 Dec 24 20:50 UTC |                     |
	| start   | -p multinode-980367-m02                                                                 | multinode-980367-m02 | jenkins | v1.34.0 | 04 Dec 24 20:50 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-980367-m03                                                                 | multinode-980367-m03 | jenkins | v1.34.0 | 04 Dec 24 20:50 UTC | 04 Dec 24 20:51 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-980367                                                                 | multinode-980367     | jenkins | v1.34.0 | 04 Dec 24 20:51 UTC |                     |
	| delete  | -p multinode-980367-m03                                                                 | multinode-980367-m03 | jenkins | v1.34.0 | 04 Dec 24 20:51 UTC | 04 Dec 24 20:51 UTC |
	| delete  | -p multinode-980367                                                                     | multinode-980367     | jenkins | v1.34.0 | 04 Dec 24 20:51 UTC | 04 Dec 24 20:51 UTC |
	| start   | -p test-preload-464116                                                                  | test-preload-464116  | jenkins | v1.34.0 | 04 Dec 24 20:51 UTC | 04 Dec 24 20:52 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-464116 image pull                                                          | test-preload-464116  | jenkins | v1.34.0 | 04 Dec 24 20:52 UTC | 04 Dec 24 20:52 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-464116                                                                  | test-preload-464116  | jenkins | v1.34.0 | 04 Dec 24 20:52 UTC | 04 Dec 24 20:53 UTC |
	| start   | -p test-preload-464116                                                                  | test-preload-464116  | jenkins | v1.34.0 | 04 Dec 24 20:53 UTC | 04 Dec 24 20:54 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-464116 image list                                                          | test-preload-464116  | jenkins | v1.34.0 | 04 Dec 24 20:54 UTC | 04 Dec 24 20:54 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 20:53:03
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 20:53:03.553376   50426 out.go:345] Setting OutFile to fd 1 ...
	I1204 20:53:03.553478   50426 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 20:53:03.553487   50426 out.go:358] Setting ErrFile to fd 2...
	I1204 20:53:03.553491   50426 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 20:53:03.553645   50426 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19985-10581/.minikube/bin
	I1204 20:53:03.554172   50426 out.go:352] Setting JSON to false
	I1204 20:53:03.555016   50426 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5734,"bootTime":1733339850,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 20:53:03.555122   50426 start.go:139] virtualization: kvm guest
	I1204 20:53:03.557274   50426 out.go:177] * [test-preload-464116] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1204 20:53:03.558541   50426 notify.go:220] Checking for updates...
	I1204 20:53:03.558558   50426 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 20:53:03.559789   50426 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 20:53:03.560951   50426 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 20:53:03.562074   50426 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 20:53:03.563092   50426 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1204 20:53:03.564298   50426 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 20:53:03.565796   50426 config.go:182] Loaded profile config "test-preload-464116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1204 20:53:03.566174   50426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:53:03.566232   50426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:53:03.581112   50426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46775
	I1204 20:53:03.581640   50426 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:53:03.582248   50426 main.go:141] libmachine: Using API Version  1
	I1204 20:53:03.582271   50426 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:53:03.582576   50426 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:53:03.582785   50426 main.go:141] libmachine: (test-preload-464116) Calling .DriverName
	I1204 20:53:03.584411   50426 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1204 20:53:03.585498   50426 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 20:53:03.585814   50426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:53:03.585847   50426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:53:03.600213   50426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46327
	I1204 20:53:03.600562   50426 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:53:03.601034   50426 main.go:141] libmachine: Using API Version  1
	I1204 20:53:03.601060   50426 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:53:03.601346   50426 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:53:03.601514   50426 main.go:141] libmachine: (test-preload-464116) Calling .DriverName
	I1204 20:53:03.635875   50426 out.go:177] * Using the kvm2 driver based on existing profile
	I1204 20:53:03.637268   50426 start.go:297] selected driver: kvm2
	I1204 20:53:03.637279   50426 start.go:901] validating driver "kvm2" against &{Name:test-preload-464116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.24.4 ClusterName:test-preload-464116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mo
untGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 20:53:03.637438   50426 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 20:53:03.638123   50426 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 20:53:03.638195   50426 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19985-10581/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1204 20:53:03.652774   50426 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1204 20:53:03.653130   50426 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 20:53:03.653160   50426 cni.go:84] Creating CNI manager for ""
	I1204 20:53:03.653202   50426 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 20:53:03.653263   50426 start.go:340] cluster config:
	{Name:test-preload-464116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-464116 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 20:53:03.653354   50426 iso.go:125] acquiring lock: {Name:mk5fb0f3f6da76e6cd812291a551e1592ef2c232 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 20:53:03.654846   50426 out.go:177] * Starting "test-preload-464116" primary control-plane node in "test-preload-464116" cluster
	I1204 20:53:03.655911   50426 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1204 20:53:03.685904   50426 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1204 20:53:03.685938   50426 cache.go:56] Caching tarball of preloaded images
	I1204 20:53:03.686083   50426 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1204 20:53:03.687781   50426 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I1204 20:53:03.689004   50426 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1204 20:53:03.718338   50426 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1204 20:53:07.101355   50426 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1204 20:53:07.101450   50426 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1204 20:53:07.938834   50426 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
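
The preload tarball is downloaded with its md5 checksum carried in the URL query string and verified before the cache is considered good. A minimal sketch of the same fetch-and-verify done by hand, using the URL and checksum from the log above (the local filename and the use of curl/md5sum are illustrative, not what minikube itself runs):

# Fetch the v1.24.4 cri-o preload and check it against the md5 from the log.
PRELOAD_URL="https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4"
EXPECTED_MD5="b2ee0ab83ed99f9e7ff71cb0cf27e8f9"
curl -fL -o preloaded-images.tar.lz4 "$PRELOAD_URL"
echo "$EXPECTED_MD5  preloaded-images.tar.lz4" | md5sum -c -
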
	I1204 20:53:07.938954   50426 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/test-preload-464116/config.json ...
	I1204 20:53:07.939201   50426 start.go:360] acquireMachinesLock for test-preload-464116: {Name:mkf124e8b45170ae95981b24944344de6899c5b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 20:53:07.939270   50426 start.go:364] duration metric: took 47.968µs to acquireMachinesLock for "test-preload-464116"
	I1204 20:53:07.939292   50426 start.go:96] Skipping create...Using existing machine configuration
	I1204 20:53:07.939299   50426 fix.go:54] fixHost starting: 
	I1204 20:53:07.939597   50426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:53:07.939638   50426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:53:07.954394   50426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39643
	I1204 20:53:07.954865   50426 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:53:07.955400   50426 main.go:141] libmachine: Using API Version  1
	I1204 20:53:07.955431   50426 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:53:07.955780   50426 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:53:07.955970   50426 main.go:141] libmachine: (test-preload-464116) Calling .DriverName
	I1204 20:53:07.956134   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetState
	I1204 20:53:07.957650   50426 fix.go:112] recreateIfNeeded on test-preload-464116: state=Stopped err=<nil>
	I1204 20:53:07.957675   50426 main.go:141] libmachine: (test-preload-464116) Calling .DriverName
	W1204 20:53:07.957802   50426 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 20:53:07.959663   50426 out.go:177] * Restarting existing kvm2 VM for "test-preload-464116" ...
	I1204 20:53:07.960795   50426 main.go:141] libmachine: (test-preload-464116) Calling .Start
	I1204 20:53:07.960970   50426 main.go:141] libmachine: (test-preload-464116) Ensuring networks are active...
	I1204 20:53:07.961782   50426 main.go:141] libmachine: (test-preload-464116) Ensuring network default is active
	I1204 20:53:07.962036   50426 main.go:141] libmachine: (test-preload-464116) Ensuring network mk-test-preload-464116 is active
	I1204 20:53:07.962354   50426 main.go:141] libmachine: (test-preload-464116) Getting domain xml...
	I1204 20:53:07.963392   50426 main.go:141] libmachine: (test-preload-464116) Creating domain...
	I1204 20:53:09.151531   50426 main.go:141] libmachine: (test-preload-464116) Waiting to get IP...
	I1204 20:53:09.152422   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:09.152797   50426 main.go:141] libmachine: (test-preload-464116) DBG | unable to find current IP address of domain test-preload-464116 in network mk-test-preload-464116
	I1204 20:53:09.152865   50426 main.go:141] libmachine: (test-preload-464116) DBG | I1204 20:53:09.152791   50479 retry.go:31] will retry after 209.53632ms: waiting for machine to come up
	I1204 20:53:09.364316   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:09.364763   50426 main.go:141] libmachine: (test-preload-464116) DBG | unable to find current IP address of domain test-preload-464116 in network mk-test-preload-464116
	I1204 20:53:09.364792   50426 main.go:141] libmachine: (test-preload-464116) DBG | I1204 20:53:09.364722   50479 retry.go:31] will retry after 386.615195ms: waiting for machine to come up
	I1204 20:53:09.753381   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:09.753784   50426 main.go:141] libmachine: (test-preload-464116) DBG | unable to find current IP address of domain test-preload-464116 in network mk-test-preload-464116
	I1204 20:53:09.753815   50426 main.go:141] libmachine: (test-preload-464116) DBG | I1204 20:53:09.753738   50479 retry.go:31] will retry after 481.067044ms: waiting for machine to come up
	I1204 20:53:10.236310   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:10.236713   50426 main.go:141] libmachine: (test-preload-464116) DBG | unable to find current IP address of domain test-preload-464116 in network mk-test-preload-464116
	I1204 20:53:10.236736   50426 main.go:141] libmachine: (test-preload-464116) DBG | I1204 20:53:10.236675   50479 retry.go:31] will retry after 593.013547ms: waiting for machine to come up
	I1204 20:53:10.831482   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:10.832041   50426 main.go:141] libmachine: (test-preload-464116) DBG | unable to find current IP address of domain test-preload-464116 in network mk-test-preload-464116
	I1204 20:53:10.832078   50426 main.go:141] libmachine: (test-preload-464116) DBG | I1204 20:53:10.831969   50479 retry.go:31] will retry after 603.324681ms: waiting for machine to come up
	I1204 20:53:11.436714   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:11.437124   50426 main.go:141] libmachine: (test-preload-464116) DBG | unable to find current IP address of domain test-preload-464116 in network mk-test-preload-464116
	I1204 20:53:11.437147   50426 main.go:141] libmachine: (test-preload-464116) DBG | I1204 20:53:11.437084   50479 retry.go:31] will retry after 754.705102ms: waiting for machine to come up
	I1204 20:53:12.193040   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:12.193408   50426 main.go:141] libmachine: (test-preload-464116) DBG | unable to find current IP address of domain test-preload-464116 in network mk-test-preload-464116
	I1204 20:53:12.193440   50426 main.go:141] libmachine: (test-preload-464116) DBG | I1204 20:53:12.193363   50479 retry.go:31] will retry after 860.064025ms: waiting for machine to come up
	I1204 20:53:13.055037   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:13.055513   50426 main.go:141] libmachine: (test-preload-464116) DBG | unable to find current IP address of domain test-preload-464116 in network mk-test-preload-464116
	I1204 20:53:13.055566   50426 main.go:141] libmachine: (test-preload-464116) DBG | I1204 20:53:13.055487   50479 retry.go:31] will retry after 946.295306ms: waiting for machine to come up
	I1204 20:53:14.002908   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:14.003339   50426 main.go:141] libmachine: (test-preload-464116) DBG | unable to find current IP address of domain test-preload-464116 in network mk-test-preload-464116
	I1204 20:53:14.003387   50426 main.go:141] libmachine: (test-preload-464116) DBG | I1204 20:53:14.003279   50479 retry.go:31] will retry after 1.591961001s: waiting for machine to come up
	I1204 20:53:15.596968   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:15.597429   50426 main.go:141] libmachine: (test-preload-464116) DBG | unable to find current IP address of domain test-preload-464116 in network mk-test-preload-464116
	I1204 20:53:15.597454   50426 main.go:141] libmachine: (test-preload-464116) DBG | I1204 20:53:15.597400   50479 retry.go:31] will retry after 2.03077494s: waiting for machine to come up
	I1204 20:53:17.629424   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:17.629831   50426 main.go:141] libmachine: (test-preload-464116) DBG | unable to find current IP address of domain test-preload-464116 in network mk-test-preload-464116
	I1204 20:53:17.629858   50426 main.go:141] libmachine: (test-preload-464116) DBG | I1204 20:53:17.629797   50479 retry.go:31] will retry after 2.424988601s: waiting for machine to come up
	I1204 20:53:20.057804   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:20.058265   50426 main.go:141] libmachine: (test-preload-464116) DBG | unable to find current IP address of domain test-preload-464116 in network mk-test-preload-464116
	I1204 20:53:20.058287   50426 main.go:141] libmachine: (test-preload-464116) DBG | I1204 20:53:20.058224   50479 retry.go:31] will retry after 2.257251045s: waiting for machine to come up
	I1204 20:53:22.318566   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:22.318920   50426 main.go:141] libmachine: (test-preload-464116) DBG | unable to find current IP address of domain test-preload-464116 in network mk-test-preload-464116
	I1204 20:53:22.318969   50426 main.go:141] libmachine: (test-preload-464116) DBG | I1204 20:53:22.318892   50479 retry.go:31] will retry after 2.964437996s: waiting for machine to come up
	I1204 20:53:25.286322   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:25.286702   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has current primary IP address 192.168.39.6 and MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:25.286722   50426 main.go:141] libmachine: (test-preload-464116) Found IP for machine: 192.168.39.6
	I1204 20:53:25.286736   50426 main.go:141] libmachine: (test-preload-464116) Reserving static IP address...
	I1204 20:53:25.287126   50426 main.go:141] libmachine: (test-preload-464116) DBG | found host DHCP lease matching {name: "test-preload-464116", mac: "52:54:00:5d:b1:3e", ip: "192.168.39.6"} in network mk-test-preload-464116: {Iface:virbr1 ExpiryTime:2024-12-04 21:53:18 +0000 UTC Type:0 Mac:52:54:00:5d:b1:3e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:test-preload-464116 Clientid:01:52:54:00:5d:b1:3e}
	I1204 20:53:25.287159   50426 main.go:141] libmachine: (test-preload-464116) DBG | skip adding static IP to network mk-test-preload-464116 - found existing host DHCP lease matching {name: "test-preload-464116", mac: "52:54:00:5d:b1:3e", ip: "192.168.39.6"}
	I1204 20:53:25.287186   50426 main.go:141] libmachine: (test-preload-464116) Reserved static IP address: 192.168.39.6
	I1204 20:53:25.287200   50426 main.go:141] libmachine: (test-preload-464116) DBG | Getting to WaitForSSH function...
	I1204 20:53:25.287216   50426 main.go:141] libmachine: (test-preload-464116) Waiting for SSH to be available...
	I1204 20:53:25.289078   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:25.289379   50426 main.go:141] libmachine: (test-preload-464116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:b1:3e", ip: ""} in network mk-test-preload-464116: {Iface:virbr1 ExpiryTime:2024-12-04 21:53:18 +0000 UTC Type:0 Mac:52:54:00:5d:b1:3e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:test-preload-464116 Clientid:01:52:54:00:5d:b1:3e}
	I1204 20:53:25.289406   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined IP address 192.168.39.6 and MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:25.289527   50426 main.go:141] libmachine: (test-preload-464116) DBG | Using SSH client type: external
	I1204 20:53:25.289548   50426 main.go:141] libmachine: (test-preload-464116) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/test-preload-464116/id_rsa (-rw-------)
	I1204 20:53:25.289580   50426 main.go:141] libmachine: (test-preload-464116) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.6 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/test-preload-464116/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 20:53:25.289593   50426 main.go:141] libmachine: (test-preload-464116) DBG | About to run SSH command:
	I1204 20:53:25.289606   50426 main.go:141] libmachine: (test-preload-464116) DBG | exit 0
	I1204 20:53:25.411134   50426 main.go:141] libmachine: (test-preload-464116) DBG | SSH cmd err, output: <nil>: 
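
WaitForSSH above shells out to the system ssh binary with a throwaway known-hosts file and runs `exit 0` until the guest answers. The same probe written out as one standalone command, with the options and key path copied from the log:

# SSH liveness probe against the restarted VM (options as logged above).
ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no \
    -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no \
    -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
    -o IdentitiesOnly=yes \
    -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/test-preload-464116/id_rsa \
    -p 22 docker@192.168.39.6 'exit 0'
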
	I1204 20:53:25.411664   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetConfigRaw
	I1204 20:53:25.412389   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetIP
	I1204 20:53:25.415111   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:25.415510   50426 main.go:141] libmachine: (test-preload-464116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:b1:3e", ip: ""} in network mk-test-preload-464116: {Iface:virbr1 ExpiryTime:2024-12-04 21:53:18 +0000 UTC Type:0 Mac:52:54:00:5d:b1:3e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:test-preload-464116 Clientid:01:52:54:00:5d:b1:3e}
	I1204 20:53:25.415544   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined IP address 192.168.39.6 and MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:25.415839   50426 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/test-preload-464116/config.json ...
	I1204 20:53:25.416104   50426 machine.go:93] provisionDockerMachine start ...
	I1204 20:53:25.416132   50426 main.go:141] libmachine: (test-preload-464116) Calling .DriverName
	I1204 20:53:25.416349   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHHostname
	I1204 20:53:25.418678   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:25.418989   50426 main.go:141] libmachine: (test-preload-464116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:b1:3e", ip: ""} in network mk-test-preload-464116: {Iface:virbr1 ExpiryTime:2024-12-04 21:53:18 +0000 UTC Type:0 Mac:52:54:00:5d:b1:3e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:test-preload-464116 Clientid:01:52:54:00:5d:b1:3e}
	I1204 20:53:25.419015   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined IP address 192.168.39.6 and MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:25.419176   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHPort
	I1204 20:53:25.419348   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHKeyPath
	I1204 20:53:25.419535   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHKeyPath
	I1204 20:53:25.419686   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHUsername
	I1204 20:53:25.419828   50426 main.go:141] libmachine: Using SSH client type: native
	I1204 20:53:25.420010   50426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I1204 20:53:25.420024   50426 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 20:53:25.515607   50426 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 20:53:25.515637   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetMachineName
	I1204 20:53:25.515880   50426 buildroot.go:166] provisioning hostname "test-preload-464116"
	I1204 20:53:25.515915   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetMachineName
	I1204 20:53:25.516090   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHHostname
	I1204 20:53:25.518703   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:25.519099   50426 main.go:141] libmachine: (test-preload-464116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:b1:3e", ip: ""} in network mk-test-preload-464116: {Iface:virbr1 ExpiryTime:2024-12-04 21:53:18 +0000 UTC Type:0 Mac:52:54:00:5d:b1:3e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:test-preload-464116 Clientid:01:52:54:00:5d:b1:3e}
	I1204 20:53:25.519130   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined IP address 192.168.39.6 and MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:25.519314   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHPort
	I1204 20:53:25.519520   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHKeyPath
	I1204 20:53:25.519693   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHKeyPath
	I1204 20:53:25.519830   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHUsername
	I1204 20:53:25.519967   50426 main.go:141] libmachine: Using SSH client type: native
	I1204 20:53:25.520190   50426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I1204 20:53:25.520208   50426 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-464116 && echo "test-preload-464116" | sudo tee /etc/hostname
	I1204 20:53:25.632862   50426 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-464116
	
	I1204 20:53:25.632889   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHHostname
	I1204 20:53:25.635657   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:25.636043   50426 main.go:141] libmachine: (test-preload-464116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:b1:3e", ip: ""} in network mk-test-preload-464116: {Iface:virbr1 ExpiryTime:2024-12-04 21:53:18 +0000 UTC Type:0 Mac:52:54:00:5d:b1:3e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:test-preload-464116 Clientid:01:52:54:00:5d:b1:3e}
	I1204 20:53:25.636078   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined IP address 192.168.39.6 and MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:25.636220   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHPort
	I1204 20:53:25.636405   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHKeyPath
	I1204 20:53:25.636532   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHKeyPath
	I1204 20:53:25.636669   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHUsername
	I1204 20:53:25.636785   50426 main.go:141] libmachine: Using SSH client type: native
	I1204 20:53:25.636994   50426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I1204 20:53:25.637011   50426 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-464116' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-464116/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-464116' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 20:53:25.747881   50426 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 20:53:25.747910   50426 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 20:53:25.747928   50426 buildroot.go:174] setting up certificates
	I1204 20:53:25.747936   50426 provision.go:84] configureAuth start
	I1204 20:53:25.747944   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetMachineName
	I1204 20:53:25.748242   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetIP
	I1204 20:53:25.750754   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:25.751126   50426 main.go:141] libmachine: (test-preload-464116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:b1:3e", ip: ""} in network mk-test-preload-464116: {Iface:virbr1 ExpiryTime:2024-12-04 21:53:18 +0000 UTC Type:0 Mac:52:54:00:5d:b1:3e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:test-preload-464116 Clientid:01:52:54:00:5d:b1:3e}
	I1204 20:53:25.751167   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined IP address 192.168.39.6 and MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:25.751239   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHHostname
	I1204 20:53:25.753402   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:25.753716   50426 main.go:141] libmachine: (test-preload-464116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:b1:3e", ip: ""} in network mk-test-preload-464116: {Iface:virbr1 ExpiryTime:2024-12-04 21:53:18 +0000 UTC Type:0 Mac:52:54:00:5d:b1:3e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:test-preload-464116 Clientid:01:52:54:00:5d:b1:3e}
	I1204 20:53:25.753745   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined IP address 192.168.39.6 and MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:25.753857   50426 provision.go:143] copyHostCerts
	I1204 20:53:25.753936   50426 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 20:53:25.753951   50426 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 20:53:25.754033   50426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 20:53:25.754156   50426 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 20:53:25.754175   50426 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 20:53:25.754218   50426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 20:53:25.754365   50426 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 20:53:25.754377   50426 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 20:53:25.754425   50426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 20:53:25.754504   50426 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.test-preload-464116 san=[127.0.0.1 192.168.39.6 localhost minikube test-preload-464116]
	I1204 20:53:25.891201   50426 provision.go:177] copyRemoteCerts
	I1204 20:53:25.891264   50426 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 20:53:25.891292   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHHostname
	I1204 20:53:25.893975   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:25.894348   50426 main.go:141] libmachine: (test-preload-464116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:b1:3e", ip: ""} in network mk-test-preload-464116: {Iface:virbr1 ExpiryTime:2024-12-04 21:53:18 +0000 UTC Type:0 Mac:52:54:00:5d:b1:3e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:test-preload-464116 Clientid:01:52:54:00:5d:b1:3e}
	I1204 20:53:25.894378   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined IP address 192.168.39.6 and MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:25.894662   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHPort
	I1204 20:53:25.894890   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHKeyPath
	I1204 20:53:25.895039   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHUsername
	I1204 20:53:25.895242   50426 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/test-preload-464116/id_rsa Username:docker}
	I1204 20:53:25.973117   50426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 20:53:25.997592   50426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1204 20:53:26.020601   50426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 20:53:26.042127   50426 provision.go:87] duration metric: took 294.179054ms to configureAuth
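
configureAuth regenerates the machine's server certificate with the SANs listed above (127.0.0.1, 192.168.39.6, localhost, minikube, test-preload-464116) and copies it to /etc/docker on the guest. One way to confirm those SANs on the generated certificate, assuming openssl is available on the Jenkins host (the cert path comes from the log):

# Print the Subject Alternative Name extension of the freshly generated server cert.
openssl x509 -noout -text \
  -in /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem \
  | grep -A1 'Subject Alternative Name'
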
	I1204 20:53:26.042153   50426 buildroot.go:189] setting minikube options for container-runtime
	I1204 20:53:26.042312   50426 config.go:182] Loaded profile config "test-preload-464116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1204 20:53:26.042378   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHHostname
	I1204 20:53:26.044649   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:26.044935   50426 main.go:141] libmachine: (test-preload-464116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:b1:3e", ip: ""} in network mk-test-preload-464116: {Iface:virbr1 ExpiryTime:2024-12-04 21:53:18 +0000 UTC Type:0 Mac:52:54:00:5d:b1:3e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:test-preload-464116 Clientid:01:52:54:00:5d:b1:3e}
	I1204 20:53:26.044955   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined IP address 192.168.39.6 and MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:26.045099   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHPort
	I1204 20:53:26.045287   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHKeyPath
	I1204 20:53:26.045461   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHKeyPath
	I1204 20:53:26.045618   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHUsername
	I1204 20:53:26.045761   50426 main.go:141] libmachine: Using SSH client type: native
	I1204 20:53:26.045904   50426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I1204 20:53:26.045918   50426 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 20:53:26.256005   50426 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 20:53:26.256032   50426 machine.go:96] duration metric: took 839.909977ms to provisionDockerMachine
	I1204 20:53:26.256048   50426 start.go:293] postStartSetup for "test-preload-464116" (driver="kvm2")
	I1204 20:53:26.256061   50426 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 20:53:26.256081   50426 main.go:141] libmachine: (test-preload-464116) Calling .DriverName
	I1204 20:53:26.256443   50426 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 20:53:26.256471   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHHostname
	I1204 20:53:26.259357   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:26.259741   50426 main.go:141] libmachine: (test-preload-464116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:b1:3e", ip: ""} in network mk-test-preload-464116: {Iface:virbr1 ExpiryTime:2024-12-04 21:53:18 +0000 UTC Type:0 Mac:52:54:00:5d:b1:3e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:test-preload-464116 Clientid:01:52:54:00:5d:b1:3e}
	I1204 20:53:26.259769   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined IP address 192.168.39.6 and MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:26.259986   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHPort
	I1204 20:53:26.260167   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHKeyPath
	I1204 20:53:26.260315   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHUsername
	I1204 20:53:26.260421   50426 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/test-preload-464116/id_rsa Username:docker}
	I1204 20:53:26.337339   50426 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 20:53:26.341434   50426 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 20:53:26.341460   50426 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 20:53:26.341521   50426 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 20:53:26.341593   50426 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 20:53:26.341680   50426 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 20:53:26.350362   50426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 20:53:26.372415   50426 start.go:296] duration metric: took 116.351548ms for postStartSetup
	I1204 20:53:26.372464   50426 fix.go:56] duration metric: took 18.433164803s for fixHost
	I1204 20:53:26.372495   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHHostname
	I1204 20:53:26.375255   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:26.375585   50426 main.go:141] libmachine: (test-preload-464116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:b1:3e", ip: ""} in network mk-test-preload-464116: {Iface:virbr1 ExpiryTime:2024-12-04 21:53:18 +0000 UTC Type:0 Mac:52:54:00:5d:b1:3e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:test-preload-464116 Clientid:01:52:54:00:5d:b1:3e}
	I1204 20:53:26.375615   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined IP address 192.168.39.6 and MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:26.375723   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHPort
	I1204 20:53:26.375929   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHKeyPath
	I1204 20:53:26.376086   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHKeyPath
	I1204 20:53:26.376200   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHUsername
	I1204 20:53:26.376344   50426 main.go:141] libmachine: Using SSH client type: native
	I1204 20:53:26.376543   50426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I1204 20:53:26.376555   50426 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 20:53:26.475864   50426 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733345606.449060313
	
	I1204 20:53:26.475902   50426 fix.go:216] guest clock: 1733345606.449060313
	I1204 20:53:26.475909   50426 fix.go:229] Guest: 2024-12-04 20:53:26.449060313 +0000 UTC Remote: 2024-12-04 20:53:26.37247604 +0000 UTC m=+22.855950100 (delta=76.584273ms)
	I1204 20:53:26.475935   50426 fix.go:200] guest clock delta is within tolerance: 76.584273ms
	I1204 20:53:26.475942   50426 start.go:83] releasing machines lock for "test-preload-464116", held for 18.536658756s
	I1204 20:53:26.475966   50426 main.go:141] libmachine: (test-preload-464116) Calling .DriverName
	I1204 20:53:26.476227   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetIP
	I1204 20:53:26.479171   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:26.479530   50426 main.go:141] libmachine: (test-preload-464116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:b1:3e", ip: ""} in network mk-test-preload-464116: {Iface:virbr1 ExpiryTime:2024-12-04 21:53:18 +0000 UTC Type:0 Mac:52:54:00:5d:b1:3e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:test-preload-464116 Clientid:01:52:54:00:5d:b1:3e}
	I1204 20:53:26.479551   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined IP address 192.168.39.6 and MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:26.479719   50426 main.go:141] libmachine: (test-preload-464116) Calling .DriverName
	I1204 20:53:26.480155   50426 main.go:141] libmachine: (test-preload-464116) Calling .DriverName
	I1204 20:53:26.480348   50426 main.go:141] libmachine: (test-preload-464116) Calling .DriverName
	I1204 20:53:26.480446   50426 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 20:53:26.480484   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHHostname
	I1204 20:53:26.480578   50426 ssh_runner.go:195] Run: cat /version.json
	I1204 20:53:26.480602   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHHostname
	I1204 20:53:26.483338   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:26.483362   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:26.483722   50426 main.go:141] libmachine: (test-preload-464116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:b1:3e", ip: ""} in network mk-test-preload-464116: {Iface:virbr1 ExpiryTime:2024-12-04 21:53:18 +0000 UTC Type:0 Mac:52:54:00:5d:b1:3e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:test-preload-464116 Clientid:01:52:54:00:5d:b1:3e}
	I1204 20:53:26.483780   50426 main.go:141] libmachine: (test-preload-464116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:b1:3e", ip: ""} in network mk-test-preload-464116: {Iface:virbr1 ExpiryTime:2024-12-04 21:53:18 +0000 UTC Type:0 Mac:52:54:00:5d:b1:3e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:test-preload-464116 Clientid:01:52:54:00:5d:b1:3e}
	I1204 20:53:26.483811   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined IP address 192.168.39.6 and MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:26.483835   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined IP address 192.168.39.6 and MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:26.484105   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHPort
	I1204 20:53:26.484124   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHPort
	I1204 20:53:26.484306   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHKeyPath
	I1204 20:53:26.484341   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHKeyPath
	I1204 20:53:26.484457   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHUsername
	I1204 20:53:26.484474   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHUsername
	I1204 20:53:26.484610   50426 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/test-preload-464116/id_rsa Username:docker}
	I1204 20:53:26.484621   50426 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/test-preload-464116/id_rsa Username:docker}
	I1204 20:53:26.581809   50426 ssh_runner.go:195] Run: systemctl --version
	I1204 20:53:26.587731   50426 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 20:53:26.735415   50426 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 20:53:26.741061   50426 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 20:53:26.741124   50426 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 20:53:26.757122   50426 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 20:53:26.757144   50426 start.go:495] detecting cgroup driver to use...
	I1204 20:53:26.757208   50426 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 20:53:26.773847   50426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 20:53:26.787085   50426 docker.go:217] disabling cri-docker service (if available) ...
	I1204 20:53:26.787152   50426 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 20:53:26.799743   50426 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 20:53:26.812041   50426 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 20:53:26.918596   50426 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 20:53:27.064324   50426 docker.go:233] disabling docker service ...
	I1204 20:53:27.064405   50426 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 20:53:27.078072   50426 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 20:53:27.090446   50426 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 20:53:27.202863   50426 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 20:53:27.314697   50426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 20:53:27.328022   50426 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 20:53:27.344973   50426 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I1204 20:53:27.345039   50426 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:53:27.354332   50426 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 20:53:27.354399   50426 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:53:27.363831   50426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:53:27.372923   50426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:53:27.382011   50426 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 20:53:27.391342   50426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:53:27.400647   50426 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:53:27.416256   50426 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:53:27.425717   50426 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 20:53:27.434609   50426 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 20:53:27.434651   50426 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 20:53:27.446976   50426 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 20:53:27.455454   50426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 20:53:27.566001   50426 ssh_runner.go:195] Run: sudo systemctl restart crio
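
The sed commands above patch CRI-O's 02-crio.conf drop-in in place: the pause image is pinned to registry.k8s.io/pause:3.7, the cgroup manager is set to cgroupfs with conmon placed in the pod cgroup, and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls before the daemon is restarted. A rough sketch of the resulting drop-in, written out as a whole file rather than edited in place (key names are taken from the commands above; the TOML section headers are assumed from CRI-O's stock layout):

# Approximate end state of the drop-in produced by the in-place edits above.
sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.7"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
EOF
sudo systemctl daemon-reload && sudo systemctl restart crio
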
	I1204 20:53:27.650472   50426 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 20:53:27.650560   50426 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 20:53:27.654955   50426 start.go:563] Will wait 60s for crictl version
	I1204 20:53:27.655030   50426 ssh_runner.go:195] Run: which crictl
	I1204 20:53:27.658378   50426 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 20:53:27.696205   50426 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 20:53:27.696274   50426 ssh_runner.go:195] Run: crio --version
	I1204 20:53:27.725333   50426 ssh_runner.go:195] Run: crio --version
	I1204 20:53:27.756467   50426 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I1204 20:53:27.757709   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetIP
	I1204 20:53:27.760129   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:27.760455   50426 main.go:141] libmachine: (test-preload-464116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:b1:3e", ip: ""} in network mk-test-preload-464116: {Iface:virbr1 ExpiryTime:2024-12-04 21:53:18 +0000 UTC Type:0 Mac:52:54:00:5d:b1:3e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:test-preload-464116 Clientid:01:52:54:00:5d:b1:3e}
	I1204 20:53:27.760480   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined IP address 192.168.39.6 and MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:27.760693   50426 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1204 20:53:27.764628   50426 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
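
The one-liner above rewrites /etc/hosts so that host.minikube.internal resolves to the host-side gateway 192.168.39.1. Unrolled for readability (same effect; the temporary filename is illustrative, the log uses /tmp/h.$$):

# Drop any stale host.minikube.internal entry, append the current gateway mapping,
# and copy the rebuilt file back over /etc/hosts.
{
  grep -v $'\thost.minikube.internal$' /etc/hosts
  echo $'192.168.39.1\thost.minikube.internal'
} > /tmp/hosts.new
sudo cp /tmp/hosts.new /etc/hosts
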
	I1204 20:53:27.776080   50426 kubeadm.go:883] updating cluster {Name:test-preload-464116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-464116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 20:53:27.776195   50426 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1204 20:53:27.776238   50426 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 20:53:27.810210   50426 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1204 20:53:27.810280   50426 ssh_runner.go:195] Run: which lz4
	I1204 20:53:27.814089   50426 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1204 20:53:27.817933   50426 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1204 20:53:27.817970   50426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I1204 20:53:29.179888   50426 crio.go:462] duration metric: took 1.365835238s to copy over tarball
	I1204 20:53:29.179966   50426 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1204 20:53:31.478037   50426 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.298041303s)
	I1204 20:53:31.478072   50426 crio.go:469] duration metric: took 2.298152783s to extract the tarball
	I1204 20:53:31.478082   50426 ssh_runner.go:146] rm: /preloaded.tar.lz4
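
The preload is applied by copying the lz4 tarball into the guest and unpacking it directly into /var so the container image store is pre-populated. The same extraction written out as standalone commands, with the flags and paths taken from the log:

# Unpack the preload into /var preserving file capabilities, drop the tarball,
# then have crictl re-scan the image store.
sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
sudo rm -f /preloaded.tar.lz4
sudo crictl images --output json
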
	I1204 20:53:31.518364   50426 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 20:53:31.556990   50426 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1204 20:53:31.557013   50426 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1204 20:53:31.557060   50426 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 20:53:31.557087   50426 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1204 20:53:31.557105   50426 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1204 20:53:31.557128   50426 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1204 20:53:31.557194   50426 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1204 20:53:31.557197   50426 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1204 20:53:31.557139   50426 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1204 20:53:31.557146   50426 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1204 20:53:31.558604   50426 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1204 20:53:31.558621   50426 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 20:53:31.558630   50426 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1204 20:53:31.558636   50426 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1204 20:53:31.558640   50426 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1204 20:53:31.558660   50426 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1204 20:53:31.558610   50426 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1204 20:53:31.558694   50426 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1204 20:53:31.695863   50426 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1204 20:53:31.703722   50426 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1204 20:53:31.706737   50426 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I1204 20:53:31.709390   50426 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I1204 20:53:31.723161   50426 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1204 20:53:31.731775   50426 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I1204 20:53:31.784787   50426 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I1204 20:53:31.802436   50426 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I1204 20:53:31.802494   50426 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1204 20:53:31.802543   50426 ssh_runner.go:195] Run: which crictl
	I1204 20:53:31.806503   50426 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I1204 20:53:31.806540   50426 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I1204 20:53:31.806583   50426 ssh_runner.go:195] Run: which crictl
	I1204 20:53:31.847807   50426 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I1204 20:53:31.847847   50426 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1204 20:53:31.847895   50426 ssh_runner.go:195] Run: which crictl
	I1204 20:53:31.847944   50426 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I1204 20:53:31.847983   50426 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I1204 20:53:31.848028   50426 ssh_runner.go:195] Run: which crictl
	I1204 20:53:31.872006   50426 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I1204 20:53:31.872070   50426 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1204 20:53:31.872108   50426 ssh_runner.go:195] Run: which crictl
	I1204 20:53:31.880278   50426 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I1204 20:53:31.880309   50426 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I1204 20:53:31.880318   50426 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I1204 20:53:31.880344   50426 ssh_runner.go:195] Run: which crictl
	I1204 20:53:31.880346   50426 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I1204 20:53:31.880361   50426 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1204 20:53:31.880378   50426 ssh_runner.go:195] Run: which crictl
	I1204 20:53:31.880462   50426 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1204 20:53:31.880513   50426 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1204 20:53:31.880551   50426 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1204 20:53:31.880574   50426 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1204 20:53:31.884058   50426 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1204 20:53:31.981232   50426 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1204 20:53:31.981278   50426 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1204 20:53:31.993749   50426 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1204 20:53:32.013666   50426 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1204 20:53:32.013759   50426 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1204 20:53:32.013788   50426 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1204 20:53:32.013839   50426 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1204 20:53:32.113593   50426 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1204 20:53:32.113644   50426 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1204 20:53:32.113687   50426 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1204 20:53:32.180673   50426 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1204 20:53:32.180697   50426 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1204 20:53:32.180780   50426 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1204 20:53:32.180817   50426 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1204 20:53:32.231033   50426 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1204 20:53:32.253950   50426 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I1204 20:53:32.254048   50426 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1204 20:53:32.254068   50426 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1204 20:53:32.254119   50426 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1204 20:53:32.332026   50426 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I1204 20:53:32.332070   50426 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I1204 20:53:32.332091   50426 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I1204 20:53:32.332155   50426 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1204 20:53:32.332155   50426 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1204 20:53:32.332157   50426 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1204 20:53:32.332223   50426 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I1204 20:53:32.332159   50426 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I1204 20:53:32.332280   50426 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I1204 20:53:32.332292   50426 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1204 20:53:32.332297   50426 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I1204 20:53:32.332323   50426 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I1204 20:53:32.332327   50426 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1204 20:53:32.332321   50426 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I1204 20:53:32.343264   50426 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I1204 20:53:32.343455   50426 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I1204 20:53:32.344818   50426 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I1204 20:53:32.344866   50426 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I1204 20:53:32.345088   50426 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I1204 20:53:32.484984   50426 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 20:53:35.191186   50426 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6: (2.858839005s)
	I1204 20:53:35.191226   50426 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1204 20:53:35.191251   50426 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1204 20:53:35.191255   50426 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.706237812s)
	I1204 20:53:35.191289   50426 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1204 20:53:35.934104   50426 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I1204 20:53:35.934146   50426 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1204 20:53:35.934194   50426 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1204 20:53:36.675454   50426 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I1204 20:53:36.675506   50426 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1204 20:53:36.675563   50426 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I1204 20:53:38.921887   50426 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.246301829s)
	I1204 20:53:38.921916   50426 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1204 20:53:38.921950   50426 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I1204 20:53:38.922026   50426 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I1204 20:53:39.065472   50426 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I1204 20:53:39.065524   50426 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1204 20:53:39.065580   50426 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1204 20:53:39.507260   50426 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I1204 20:53:39.507315   50426 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I1204 20:53:39.507369   50426 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I1204 20:53:40.349298   50426 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I1204 20:53:40.349356   50426 cache_images.go:123] Successfully loaded all cached images
	I1204 20:53:40.349365   50426 cache_images.go:92] duration metric: took 8.792339134s to LoadCachedImages
	I1204 20:53:40.349380   50426 kubeadm.go:934] updating node { 192.168.39.6 8443 v1.24.4 crio true true} ...
	I1204 20:53:40.349514   50426 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-464116 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-464116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 20:53:40.349590   50426 ssh_runner.go:195] Run: crio config
	I1204 20:53:40.395240   50426 cni.go:84] Creating CNI manager for ""
	I1204 20:53:40.395261   50426 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 20:53:40.395270   50426 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 20:53:40.395292   50426 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.6 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-464116 NodeName:test-preload-464116 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 20:53:40.395477   50426 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-464116"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1204 20:53:40.395557   50426 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I1204 20:53:40.406628   50426 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 20:53:40.406699   50426 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 20:53:40.416258   50426 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1204 20:53:40.431944   50426 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 20:53:40.447389   50426 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
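The 2100-byte kubeadm.yaml.new staged here is the three-document config printed above (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). Later in the log it is promoted to /var/tmp/minikube/kubeadm.yaml and replayed through individual kubeadm init phases rather than a full kubeadm init. A sketch of that phase-by-phase replay, using the same paths and phases that appear further down in the log:

  CFG=/var/tmp/minikube/kubeadm.yaml
  BIN=/var/lib/minikube/binaries/v1.24.4
  # regenerate certs, kubeconfigs, kubelet bootstrap, static pods and local etcd from one config
  for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
    sudo env PATH="$BIN:$PATH" kubeadm init phase $phase --config "$CFG"
  done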
	I1204 20:53:40.464085   50426 ssh_runner.go:195] Run: grep 192.168.39.6	control-plane.minikube.internal$ /etc/hosts
	I1204 20:53:40.467927   50426 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 20:53:40.479968   50426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 20:53:40.582117   50426 ssh_runner.go:195] Run: sudo systemctl start kubelet
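The scp lines above push the kubelet drop-in (10-kubeadm.conf, carrying the ExecStart override logged earlier) and the kubelet.service unit before systemd is reloaded and the kubelet started. Done by hand on the node, with a local 10-kubeadm.conf containing that same override (the local file name is just an example), the equivalent would be roughly:

  sudo mkdir -p /etc/systemd/system/kubelet.service.d
  sudo cp 10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf   # ExecStart override
  sudo systemctl daemon-reload          # pick up the new unit and drop-in
  sudo systemctl start kubelet
  systemctl status kubelet --no-pager   # confirm the overridden ExecStart is in effect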
	I1204 20:53:40.598263   50426 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/test-preload-464116 for IP: 192.168.39.6
	I1204 20:53:40.598290   50426 certs.go:194] generating shared ca certs ...
	I1204 20:53:40.598313   50426 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:53:40.598459   50426 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 20:53:40.598504   50426 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 20:53:40.598514   50426 certs.go:256] generating profile certs ...
	I1204 20:53:40.598593   50426 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/test-preload-464116/client.key
	I1204 20:53:40.598651   50426 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/test-preload-464116/apiserver.key.d32fa410
	I1204 20:53:40.598703   50426 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/test-preload-464116/proxy-client.key
	I1204 20:53:40.598856   50426 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 20:53:40.598891   50426 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 20:53:40.598901   50426 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 20:53:40.598938   50426 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 20:53:40.598964   50426 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 20:53:40.598985   50426 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 20:53:40.599022   50426 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 20:53:40.599698   50426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 20:53:40.643920   50426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 20:53:40.677772   50426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 20:53:40.716855   50426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 20:53:40.746785   50426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/test-preload-464116/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1204 20:53:40.773845   50426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/test-preload-464116/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1204 20:53:40.799192   50426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/test-preload-464116/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 20:53:40.829742   50426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/test-preload-464116/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1204 20:53:40.851691   50426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 20:53:40.873456   50426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 20:53:40.895349   50426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 20:53:40.916943   50426 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 20:53:40.932446   50426 ssh_runner.go:195] Run: openssl version
	I1204 20:53:40.937979   50426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 20:53:40.947922   50426 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:53:40.951941   50426 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:53:40.951987   50426 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:53:40.957364   50426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 20:53:40.967437   50426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 20:53:40.977162   50426 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 20:53:40.981336   50426 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 20:53:40.981380   50426 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 20:53:40.986724   50426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 20:53:40.996651   50426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 20:53:41.006628   50426 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 20:53:41.010617   50426 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 20:53:41.010672   50426 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 20:53:41.015987   50426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
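The ln -fs commands above build OpenSSL subject-hash links: every CA certificate copied into /usr/share/ca-certificates is exposed in /etc/ssl/certs under the name <subject-hash>.0, which is how the b5213941.0, 51391683.0 and 3ec20f2e.0 targets were derived. A minimal sketch of computing that link name for one certificate:

  CERT=/usr/share/ca-certificates/minikubeCA.pem     # example: the minikube cluster CA
  HASH=$(openssl x509 -hash -noout -in "$CERT")      # prints the subject hash, e.g. b5213941
  sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"     # name expected by the OpenSSL trust store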
	I1204 20:53:41.025700   50426 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 20:53:41.029699   50426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 20:53:41.035300   50426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 20:53:41.040705   50426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 20:53:41.046371   50426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 20:53:41.052117   50426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 20:53:41.057735   50426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
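These -checkend 86400 probes assert that none of the control-plane client and serving certificates expire within the next 24 hours; a failing probe would force regeneration before the restart continues. The same sweep over the certificates named in the log can be written as:

  for c in apiserver-etcd-client apiserver-kubelet-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
    # openssl exits non-zero when the cert expires within 86400 seconds (24h)
    sudo openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400 \
      || echo "WARNING: $c.crt expires within 24h"
  done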
	I1204 20:53:41.063417   50426 kubeadm.go:392] StartCluster: {Name:test-preload-464116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-464116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 20:53:41.063500   50426 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 20:53:41.063556   50426 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 20:53:41.103837   50426 cri.go:89] found id: ""
	I1204 20:53:41.103900   50426 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 20:53:41.113675   50426 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1204 20:53:41.113694   50426 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1204 20:53:41.113732   50426 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1204 20:53:41.123074   50426 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1204 20:53:41.123636   50426 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-464116" does not appear in /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 20:53:41.123815   50426 kubeconfig.go:62] /home/jenkins/minikube-integration/19985-10581/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-464116" cluster setting kubeconfig missing "test-preload-464116" context setting]
	I1204 20:53:41.124193   50426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/kubeconfig: {Name:mk338cb7deb77a607d0c199d94a556bdfd19bef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:53:41.125031   50426 kapi.go:59] client config for test-preload-464116: &rest.Config{Host:"https://192.168.39.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19985-10581/.minikube/profiles/test-preload-464116/client.crt", KeyFile:"/home/jenkins/minikube-integration/19985-10581/.minikube/profiles/test-preload-464116/client.key", CAFile:"/home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1204 20:53:41.125799   50426 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1204 20:53:41.134963   50426 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.6
	I1204 20:53:41.134988   50426 kubeadm.go:1160] stopping kube-system containers ...
	I1204 20:53:41.134999   50426 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1204 20:53:41.135039   50426 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 20:53:41.169678   50426 cri.go:89] found id: ""
	I1204 20:53:41.169749   50426 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1204 20:53:41.185205   50426 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 20:53:41.194399   50426 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 20:53:41.194422   50426 kubeadm.go:157] found existing configuration files:
	
	I1204 20:53:41.194466   50426 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 20:53:41.203067   50426 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 20:53:41.203175   50426 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 20:53:41.211895   50426 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 20:53:41.220090   50426 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 20:53:41.220157   50426 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 20:53:41.228633   50426 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 20:53:41.236770   50426 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 20:53:41.236824   50426 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 20:53:41.245183   50426 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 20:53:41.253289   50426 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 20:53:41.253323   50426 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 20:53:41.261630   50426 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
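The grep/rm pairs above are the stale-kubeconfig cleanup: each file under /etc/kubernetes must reference https://control-plane.minikube.internal:8443, and any file that does not (here they simply do not exist yet, hence the status 2 results) is removed before the freshly generated kubeadm.yaml.new is promoted. A compact version of the same loop:

  EP=https://control-plane.minikube.internal:8443
  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    # remove any kubeconfig that does not point at the expected control-plane endpoint
    sudo grep -q "$EP" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
  done
  sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml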
	I1204 20:53:41.270330   50426 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 20:53:41.371330   50426 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 20:53:42.248847   50426 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1204 20:53:42.485818   50426 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 20:53:42.556241   50426 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1204 20:53:42.628552   50426 api_server.go:52] waiting for apiserver process to appear ...
	I1204 20:53:42.628645   50426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 20:53:43.129618   50426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 20:53:43.628852   50426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 20:53:43.661376   50426 api_server.go:72] duration metric: took 1.032821706s to wait for apiserver process to appear ...
	I1204 20:53:43.661407   50426 api_server.go:88] waiting for apiserver healthz status ...
	I1204 20:53:43.661431   50426 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I1204 20:53:43.661873   50426 api_server.go:269] stopped: https://192.168.39.6:8443/healthz: Get "https://192.168.39.6:8443/healthz": dial tcp 192.168.39.6:8443: connect: connection refused
	I1204 20:53:44.161663   50426 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I1204 20:53:47.177661   50426 api_server.go:279] https://192.168.39.6:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1204 20:53:47.177693   50426 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1204 20:53:47.177721   50426 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I1204 20:53:47.245340   50426 api_server.go:279] https://192.168.39.6:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1204 20:53:47.245365   50426 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1204 20:53:47.661817   50426 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I1204 20:53:47.668545   50426 api_server.go:279] https://192.168.39.6:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 20:53:47.668567   50426 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 20:53:48.161578   50426 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I1204 20:53:48.168421   50426 api_server.go:279] https://192.168.39.6:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 20:53:48.168450   50426 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 20:53:48.662064   50426 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I1204 20:53:48.668560   50426 api_server.go:279] https://192.168.39.6:8443/healthz returned 200:
	ok
	I1204 20:53:48.677546   50426 api_server.go:141] control plane version: v1.24.4
	I1204 20:53:48.677569   50426 api_server.go:131] duration metric: took 5.016156226s to wait for apiserver health ...
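The health wait above polls /healthz roughly every 500ms and tolerates the expected bootstrap sequence: connection refused while the apiserver binds, 403 for the anonymous probe until the RBAC bootstrap roles exist, 500 while the rbac/bootstrap-roles and priority-class post-start hooks finish, and finally 200 with body "ok". A rough shell equivalent of that probe (TLS verification skipped for brevity; the verbose form reproduces the per-check listing seen in the 500 responses):

  APISERVER=https://192.168.39.6:8443
  until [ "$(curl -ks "$APISERVER/healthz")" = "ok" ]; do
    sleep 0.5                                  # retry until the post-start hooks complete
  done
  curl -ks "$APISERVER/healthz?verbose"        # per-check [+]/[-] breakdown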
	I1204 20:53:48.677578   50426 cni.go:84] Creating CNI manager for ""
	I1204 20:53:48.677585   50426 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 20:53:48.678974   50426 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 20:53:48.680352   50426 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 20:53:48.692358   50426 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
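With the kvm2 driver and CRI-O, minikube selects the bridge CNI and writes a single conflist (496 bytes here) to /etc/cni/net.d/1-k8s.conflist. The exact payload is not printed in the log; the snippet below writes an illustrative bridge + host-local configuration for the same 10.244.0.0/16 pod CIDR, not claimed to be byte-for-byte what minikube generates:

  sudo mkdir -p /etc/cni/net.d
  # illustrative bridge CNI config; the real 1-k8s.conflist may differ in detail
  printf '%s\n' \
    '{ "cniVersion": "0.3.1", "name": "bridge",' \
    '  "plugins": [ { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,' \
    '    "hairpinMode": true, "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } } ] }' \
    | sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null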
	I1204 20:53:48.728899   50426 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 20:53:48.728993   50426 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1204 20:53:48.729014   50426 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1204 20:53:48.739888   50426 system_pods.go:59] 7 kube-system pods found
	I1204 20:53:48.739928   50426 system_pods.go:61] "coredns-6d4b75cb6d-lcxdx" [953860de-8aa4-41d2-8f9f-768cb9c04979] Running
	I1204 20:53:48.739936   50426 system_pods.go:61] "etcd-test-preload-464116" [e2a67822-2239-47ae-80ae-78c5dfbd2306] Running
	I1204 20:53:48.739942   50426 system_pods.go:61] "kube-apiserver-test-preload-464116" [854cdd95-aed4-44b4-a608-eb73f3a63cc4] Running
	I1204 20:53:48.739953   50426 system_pods.go:61] "kube-controller-manager-test-preload-464116" [a3dc8304-7809-4127-8b21-9793a488cbe9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1204 20:53:48.739962   50426 system_pods.go:61] "kube-proxy-qvlzn" [e200091c-b939-43e1-953e-9fea52c6bc48] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1204 20:53:48.739968   50426 system_pods.go:61] "kube-scheduler-test-preload-464116" [202f6786-3d4c-4015-b122-ab5cb715a14e] Running
	I1204 20:53:48.739989   50426 system_pods.go:61] "storage-provisioner" [d36b4de4-3c6d-4edb-af63-7b6149e2cae1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1204 20:53:48.740002   50426 system_pods.go:74] duration metric: took 11.078534ms to wait for pod list to return data ...
	I1204 20:53:48.740014   50426 node_conditions.go:102] verifying NodePressure condition ...
	I1204 20:53:48.744603   50426 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 20:53:48.744635   50426 node_conditions.go:123] node cpu capacity is 2
	I1204 20:53:48.744649   50426 node_conditions.go:105] duration metric: took 4.626731ms to run NodePressure ...
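The NodePressure step reads the node object to confirm capacity (17734596Ki ephemeral storage, 2 CPUs) and the absence of pressure conditions. The same view is available with kubectl once the kubeconfig has been repaired, for example:

  # node conditions (Ready, MemoryPressure, DiskPressure, PIDPressure) and allocatable resources
  kubectl --context test-preload-464116 get node test-preload-464116 \
    -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
  kubectl --context test-preload-464116 get node test-preload-464116 \
    -o jsonpath='{.status.allocatable}{"\n"}'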
	I1204 20:53:48.744674   50426 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 20:53:49.013552   50426 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1204 20:53:49.017985   50426 kubeadm.go:739] kubelet initialised
	I1204 20:53:49.018016   50426 kubeadm.go:740] duration metric: took 4.437001ms waiting for restarted kubelet to initialise ...
	I1204 20:53:49.018027   50426 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 20:53:49.022757   50426 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-lcxdx" in "kube-system" namespace to be "Ready" ...
	I1204 20:53:49.030530   50426 pod_ready.go:98] node "test-preload-464116" hosting pod "coredns-6d4b75cb6d-lcxdx" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-464116" has status "Ready":"False"
	I1204 20:53:49.030558   50426 pod_ready.go:82] duration metric: took 7.77767ms for pod "coredns-6d4b75cb6d-lcxdx" in "kube-system" namespace to be "Ready" ...
	E1204 20:53:49.030570   50426 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-464116" hosting pod "coredns-6d4b75cb6d-lcxdx" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-464116" has status "Ready":"False"
	I1204 20:53:49.030579   50426 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-464116" in "kube-system" namespace to be "Ready" ...
	I1204 20:53:49.035785   50426 pod_ready.go:98] node "test-preload-464116" hosting pod "etcd-test-preload-464116" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-464116" has status "Ready":"False"
	I1204 20:53:49.035811   50426 pod_ready.go:82] duration metric: took 5.220402ms for pod "etcd-test-preload-464116" in "kube-system" namespace to be "Ready" ...
	E1204 20:53:49.035824   50426 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-464116" hosting pod "etcd-test-preload-464116" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-464116" has status "Ready":"False"
	I1204 20:53:49.035834   50426 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-464116" in "kube-system" namespace to be "Ready" ...
	I1204 20:53:49.041263   50426 pod_ready.go:98] node "test-preload-464116" hosting pod "kube-apiserver-test-preload-464116" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-464116" has status "Ready":"False"
	I1204 20:53:49.041286   50426 pod_ready.go:82] duration metric: took 5.439362ms for pod "kube-apiserver-test-preload-464116" in "kube-system" namespace to be "Ready" ...
	E1204 20:53:49.041298   50426 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-464116" hosting pod "kube-apiserver-test-preload-464116" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-464116" has status "Ready":"False"
	I1204 20:53:49.041306   50426 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-464116" in "kube-system" namespace to be "Ready" ...
	I1204 20:53:49.133185   50426 pod_ready.go:98] node "test-preload-464116" hosting pod "kube-controller-manager-test-preload-464116" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-464116" has status "Ready":"False"
	I1204 20:53:49.133224   50426 pod_ready.go:82] duration metric: took 91.905386ms for pod "kube-controller-manager-test-preload-464116" in "kube-system" namespace to be "Ready" ...
	E1204 20:53:49.133237   50426 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-464116" hosting pod "kube-controller-manager-test-preload-464116" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-464116" has status "Ready":"False"
	I1204 20:53:49.133246   50426 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-qvlzn" in "kube-system" namespace to be "Ready" ...
	I1204 20:53:49.532689   50426 pod_ready.go:98] node "test-preload-464116" hosting pod "kube-proxy-qvlzn" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-464116" has status "Ready":"False"
	I1204 20:53:49.532719   50426 pod_ready.go:82] duration metric: took 399.462516ms for pod "kube-proxy-qvlzn" in "kube-system" namespace to be "Ready" ...
	E1204 20:53:49.532731   50426 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-464116" hosting pod "kube-proxy-qvlzn" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-464116" has status "Ready":"False"
	I1204 20:53:49.532740   50426 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-464116" in "kube-system" namespace to be "Ready" ...
	I1204 20:53:49.932762   50426 pod_ready.go:98] node "test-preload-464116" hosting pod "kube-scheduler-test-preload-464116" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-464116" has status "Ready":"False"
	I1204 20:53:49.932791   50426 pod_ready.go:82] duration metric: took 400.043262ms for pod "kube-scheduler-test-preload-464116" in "kube-system" namespace to be "Ready" ...
	E1204 20:53:49.932802   50426 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-464116" hosting pod "kube-scheduler-test-preload-464116" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-464116" has status "Ready":"False"
	I1204 20:53:49.932812   50426 pod_ready.go:39] duration metric: took 914.774533ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
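None of the pods above is actually waited on: pod_ready.go:98 short-circuits as soon as it sees the hosting node still reporting Ready=False right after the kubelet restart, so every per-pod wait ends in the "skipping!" error and the whole extra wait returns in under a second. A manual equivalent that only returns once the node and the system-critical pods really are Ready (4m mirrors the per-pod budget in the log):

  kubectl --context test-preload-464116 wait node/test-preload-464116 \
    --for=condition=Ready --timeout=4m
  # system-critical pods can only pass readiness once the node itself is Ready
  kubectl --context test-preload-464116 -n kube-system wait pod -l k8s-app=kube-dns \
    --for=condition=Ready --timeout=4m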
	I1204 20:53:49.932833   50426 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 20:53:49.944435   50426 ops.go:34] apiserver oom_adj: -16
	I1204 20:53:49.944455   50426 kubeadm.go:597] duration metric: took 8.830756045s to restartPrimaryControlPlane
	I1204 20:53:49.944466   50426 kubeadm.go:394] duration metric: took 8.881054222s to StartCluster
	I1204 20:53:49.944503   50426 settings.go:142] acquiring lock: {Name:mk51df5708ef0b8fe125ead566b8d3e857234e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:53:49.944589   50426 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 20:53:49.945492   50426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/kubeconfig: {Name:mk338cb7deb77a607d0c199d94a556bdfd19bef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:53:49.945777   50426 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 20:53:49.945895   50426 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 20:53:49.945998   50426 addons.go:69] Setting storage-provisioner=true in profile "test-preload-464116"
	I1204 20:53:49.946006   50426 config.go:182] Loaded profile config "test-preload-464116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1204 20:53:49.946020   50426 addons.go:234] Setting addon storage-provisioner=true in "test-preload-464116"
	W1204 20:53:49.946034   50426 addons.go:243] addon storage-provisioner should already be in state true
	I1204 20:53:49.946066   50426 host.go:66] Checking if "test-preload-464116" exists ...
	I1204 20:53:49.946017   50426 addons.go:69] Setting default-storageclass=true in profile "test-preload-464116"
	I1204 20:53:49.946114   50426 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-464116"
	I1204 20:53:49.946532   50426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:53:49.946556   50426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:53:49.946579   50426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:53:49.946601   50426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:53:49.947690   50426 out.go:177] * Verifying Kubernetes components...
	I1204 20:53:49.949340   50426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 20:53:49.961235   50426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37579
	I1204 20:53:49.961766   50426 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:53:49.962090   50426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39385
	I1204 20:53:49.962336   50426 main.go:141] libmachine: Using API Version  1
	I1204 20:53:49.962359   50426 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:53:49.962467   50426 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:53:49.962715   50426 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:53:49.963011   50426 main.go:141] libmachine: Using API Version  1
	I1204 20:53:49.963032   50426 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:53:49.963327   50426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:53:49.963335   50426 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:53:49.963405   50426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:53:49.963536   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetState
	I1204 20:53:49.965737   50426 kapi.go:59] client config for test-preload-464116: &rest.Config{Host:"https://192.168.39.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19985-10581/.minikube/profiles/test-preload-464116/client.crt", KeyFile:"/home/jenkins/minikube-integration/19985-10581/.minikube/profiles/test-preload-464116/client.key", CAFile:"/home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1204 20:53:49.966012   50426 addons.go:234] Setting addon default-storageclass=true in "test-preload-464116"
	W1204 20:53:49.966029   50426 addons.go:243] addon default-storageclass should already be in state true
	I1204 20:53:49.966049   50426 host.go:66] Checking if "test-preload-464116" exists ...
	I1204 20:53:49.966379   50426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:53:49.966421   50426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:53:49.978140   50426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39873
	I1204 20:53:49.978614   50426 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:53:49.979120   50426 main.go:141] libmachine: Using API Version  1
	I1204 20:53:49.979145   50426 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:53:49.979502   50426 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:53:49.979704   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetState
	I1204 20:53:49.980524   50426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39593
	I1204 20:53:49.980855   50426 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:53:49.981344   50426 main.go:141] libmachine: Using API Version  1
	I1204 20:53:49.981374   50426 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:53:49.981407   50426 main.go:141] libmachine: (test-preload-464116) Calling .DriverName
	I1204 20:53:49.981642   50426 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:53:49.982086   50426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:53:49.982119   50426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:53:49.983245   50426 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 20:53:49.984582   50426 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 20:53:49.984607   50426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 20:53:49.984624   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHHostname
	I1204 20:53:49.987402   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:49.987888   50426 main.go:141] libmachine: (test-preload-464116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:b1:3e", ip: ""} in network mk-test-preload-464116: {Iface:virbr1 ExpiryTime:2024-12-04 21:53:18 +0000 UTC Type:0 Mac:52:54:00:5d:b1:3e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:test-preload-464116 Clientid:01:52:54:00:5d:b1:3e}
	I1204 20:53:49.987925   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined IP address 192.168.39.6 and MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:49.988068   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHPort
	I1204 20:53:49.988220   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHKeyPath
	I1204 20:53:49.988391   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHUsername
	I1204 20:53:49.988533   50426 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/test-preload-464116/id_rsa Username:docker}
	I1204 20:53:50.018677   50426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37603
	I1204 20:53:50.019210   50426 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:53:50.019833   50426 main.go:141] libmachine: Using API Version  1
	I1204 20:53:50.019860   50426 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:53:50.020173   50426 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:53:50.020335   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetState
	I1204 20:53:50.021852   50426 main.go:141] libmachine: (test-preload-464116) Calling .DriverName
	I1204 20:53:50.022080   50426 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 20:53:50.022097   50426 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 20:53:50.022115   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHHostname
	I1204 20:53:50.024876   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:50.025371   50426 main.go:141] libmachine: (test-preload-464116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:b1:3e", ip: ""} in network mk-test-preload-464116: {Iface:virbr1 ExpiryTime:2024-12-04 21:53:18 +0000 UTC Type:0 Mac:52:54:00:5d:b1:3e Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:test-preload-464116 Clientid:01:52:54:00:5d:b1:3e}
	I1204 20:53:50.025399   50426 main.go:141] libmachine: (test-preload-464116) DBG | domain test-preload-464116 has defined IP address 192.168.39.6 and MAC address 52:54:00:5d:b1:3e in network mk-test-preload-464116
	I1204 20:53:50.025557   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHPort
	I1204 20:53:50.025753   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHKeyPath
	I1204 20:53:50.025903   50426 main.go:141] libmachine: (test-preload-464116) Calling .GetSSHUsername
	I1204 20:53:50.026022   50426 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/test-preload-464116/id_rsa Username:docker}
	I1204 20:53:50.128368   50426 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 20:53:50.146032   50426 node_ready.go:35] waiting up to 6m0s for node "test-preload-464116" to be "Ready" ...
	I1204 20:53:50.201638   50426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 20:53:50.296662   50426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 20:53:51.197087   50426 main.go:141] libmachine: Making call to close driver server
	I1204 20:53:51.197115   50426 main.go:141] libmachine: (test-preload-464116) Calling .Close
	I1204 20:53:51.197423   50426 main.go:141] libmachine: (test-preload-464116) DBG | Closing plugin on server side
	I1204 20:53:51.197429   50426 main.go:141] libmachine: Successfully made call to close driver server
	I1204 20:53:51.197451   50426 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 20:53:51.197466   50426 main.go:141] libmachine: Making call to close driver server
	I1204 20:53:51.197478   50426 main.go:141] libmachine: (test-preload-464116) Calling .Close
	I1204 20:53:51.197704   50426 main.go:141] libmachine: (test-preload-464116) DBG | Closing plugin on server side
	I1204 20:53:51.197753   50426 main.go:141] libmachine: Successfully made call to close driver server
	I1204 20:53:51.197765   50426 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 20:53:51.203316   50426 main.go:141] libmachine: Making call to close driver server
	I1204 20:53:51.203342   50426 main.go:141] libmachine: (test-preload-464116) Calling .Close
	I1204 20:53:51.203623   50426 main.go:141] libmachine: Successfully made call to close driver server
	I1204 20:53:51.203644   50426 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 20:53:51.203648   50426 main.go:141] libmachine: (test-preload-464116) DBG | Closing plugin on server side
	I1204 20:53:51.211148   50426 main.go:141] libmachine: Making call to close driver server
	I1204 20:53:51.211167   50426 main.go:141] libmachine: (test-preload-464116) Calling .Close
	I1204 20:53:51.211428   50426 main.go:141] libmachine: Successfully made call to close driver server
	I1204 20:53:51.211443   50426 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 20:53:51.211458   50426 main.go:141] libmachine: Making call to close driver server
	I1204 20:53:51.211455   50426 main.go:141] libmachine: (test-preload-464116) DBG | Closing plugin on server side
	I1204 20:53:51.211466   50426 main.go:141] libmachine: (test-preload-464116) Calling .Close
	I1204 20:53:51.211701   50426 main.go:141] libmachine: Successfully made call to close driver server
	I1204 20:53:51.211718   50426 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 20:53:51.214236   50426 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1204 20:53:51.215302   50426 addons.go:510] duration metric: took 1.269419569s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1204 20:53:52.150313   50426 node_ready.go:53] node "test-preload-464116" has status "Ready":"False"
	I1204 20:53:54.650996   50426 node_ready.go:53] node "test-preload-464116" has status "Ready":"False"
	I1204 20:53:57.149928   50426 node_ready.go:53] node "test-preload-464116" has status "Ready":"False"
	I1204 20:53:57.649358   50426 node_ready.go:49] node "test-preload-464116" has status "Ready":"True"
	I1204 20:53:57.649403   50426 node_ready.go:38] duration metric: took 7.503342033s for node "test-preload-464116" to be "Ready" ...
	I1204 20:53:57.649416   50426 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 20:53:57.654278   50426 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-lcxdx" in "kube-system" namespace to be "Ready" ...
	I1204 20:53:57.660825   50426 pod_ready.go:93] pod "coredns-6d4b75cb6d-lcxdx" in "kube-system" namespace has status "Ready":"True"
	I1204 20:53:57.660843   50426 pod_ready.go:82] duration metric: took 6.539726ms for pod "coredns-6d4b75cb6d-lcxdx" in "kube-system" namespace to be "Ready" ...
	I1204 20:53:57.660851   50426 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-464116" in "kube-system" namespace to be "Ready" ...
	I1204 20:53:58.667043   50426 pod_ready.go:93] pod "etcd-test-preload-464116" in "kube-system" namespace has status "Ready":"True"
	I1204 20:53:58.667067   50426 pod_ready.go:82] duration metric: took 1.006208793s for pod "etcd-test-preload-464116" in "kube-system" namespace to be "Ready" ...
	I1204 20:53:58.667082   50426 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-464116" in "kube-system" namespace to be "Ready" ...
	I1204 20:53:58.672138   50426 pod_ready.go:93] pod "kube-apiserver-test-preload-464116" in "kube-system" namespace has status "Ready":"True"
	I1204 20:53:58.672157   50426 pod_ready.go:82] duration metric: took 5.068537ms for pod "kube-apiserver-test-preload-464116" in "kube-system" namespace to be "Ready" ...
	I1204 20:53:58.672166   50426 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-464116" in "kube-system" namespace to be "Ready" ...
	I1204 20:54:00.677741   50426 pod_ready.go:103] pod "kube-controller-manager-test-preload-464116" in "kube-system" namespace has status "Ready":"False"
	I1204 20:54:01.179727   50426 pod_ready.go:93] pod "kube-controller-manager-test-preload-464116" in "kube-system" namespace has status "Ready":"True"
	I1204 20:54:01.179750   50426 pod_ready.go:82] duration metric: took 2.507577725s for pod "kube-controller-manager-test-preload-464116" in "kube-system" namespace to be "Ready" ...
	I1204 20:54:01.179760   50426 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qvlzn" in "kube-system" namespace to be "Ready" ...
	I1204 20:54:01.184651   50426 pod_ready.go:93] pod "kube-proxy-qvlzn" in "kube-system" namespace has status "Ready":"True"
	I1204 20:54:01.184669   50426 pod_ready.go:82] duration metric: took 4.903488ms for pod "kube-proxy-qvlzn" in "kube-system" namespace to be "Ready" ...
	I1204 20:54:01.184676   50426 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-464116" in "kube-system" namespace to be "Ready" ...
	I1204 20:54:01.249465   50426 pod_ready.go:93] pod "kube-scheduler-test-preload-464116" in "kube-system" namespace has status "Ready":"True"
	I1204 20:54:01.249487   50426 pod_ready.go:82] duration metric: took 64.805954ms for pod "kube-scheduler-test-preload-464116" in "kube-system" namespace to be "Ready" ...
	I1204 20:54:01.249498   50426 pod_ready.go:39] duration metric: took 3.600070457s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 20:54:01.249511   50426 api_server.go:52] waiting for apiserver process to appear ...
	I1204 20:54:01.249580   50426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 20:54:01.264037   50426 api_server.go:72] duration metric: took 11.318216853s to wait for apiserver process to appear ...
	I1204 20:54:01.264064   50426 api_server.go:88] waiting for apiserver healthz status ...
	I1204 20:54:01.264082   50426 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I1204 20:54:01.268994   50426 api_server.go:279] https://192.168.39.6:8443/healthz returned 200:
	ok
	I1204 20:54:01.269833   50426 api_server.go:141] control plane version: v1.24.4
	I1204 20:54:01.269852   50426 api_server.go:131] duration metric: took 5.78174ms to wait for apiserver health ...
	I1204 20:54:01.269859   50426 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 20:54:01.452363   50426 system_pods.go:59] 7 kube-system pods found
	I1204 20:54:01.452390   50426 system_pods.go:61] "coredns-6d4b75cb6d-lcxdx" [953860de-8aa4-41d2-8f9f-768cb9c04979] Running
	I1204 20:54:01.452395   50426 system_pods.go:61] "etcd-test-preload-464116" [e2a67822-2239-47ae-80ae-78c5dfbd2306] Running
	I1204 20:54:01.452403   50426 system_pods.go:61] "kube-apiserver-test-preload-464116" [854cdd95-aed4-44b4-a608-eb73f3a63cc4] Running
	I1204 20:54:01.452406   50426 system_pods.go:61] "kube-controller-manager-test-preload-464116" [a3dc8304-7809-4127-8b21-9793a488cbe9] Running
	I1204 20:54:01.452409   50426 system_pods.go:61] "kube-proxy-qvlzn" [e200091c-b939-43e1-953e-9fea52c6bc48] Running
	I1204 20:54:01.452412   50426 system_pods.go:61] "kube-scheduler-test-preload-464116" [202f6786-3d4c-4015-b122-ab5cb715a14e] Running
	I1204 20:54:01.452415   50426 system_pods.go:61] "storage-provisioner" [d36b4de4-3c6d-4edb-af63-7b6149e2cae1] Running
	I1204 20:54:01.452421   50426 system_pods.go:74] duration metric: took 182.557271ms to wait for pod list to return data ...
	I1204 20:54:01.452430   50426 default_sa.go:34] waiting for default service account to be created ...
	I1204 20:54:01.649531   50426 default_sa.go:45] found service account: "default"
	I1204 20:54:01.649554   50426 default_sa.go:55] duration metric: took 197.118834ms for default service account to be created ...
	I1204 20:54:01.649562   50426 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 20:54:01.851781   50426 system_pods.go:86] 7 kube-system pods found
	I1204 20:54:01.851808   50426 system_pods.go:89] "coredns-6d4b75cb6d-lcxdx" [953860de-8aa4-41d2-8f9f-768cb9c04979] Running
	I1204 20:54:01.851813   50426 system_pods.go:89] "etcd-test-preload-464116" [e2a67822-2239-47ae-80ae-78c5dfbd2306] Running
	I1204 20:54:01.851816   50426 system_pods.go:89] "kube-apiserver-test-preload-464116" [854cdd95-aed4-44b4-a608-eb73f3a63cc4] Running
	I1204 20:54:01.851820   50426 system_pods.go:89] "kube-controller-manager-test-preload-464116" [a3dc8304-7809-4127-8b21-9793a488cbe9] Running
	I1204 20:54:01.851823   50426 system_pods.go:89] "kube-proxy-qvlzn" [e200091c-b939-43e1-953e-9fea52c6bc48] Running
	I1204 20:54:01.851826   50426 system_pods.go:89] "kube-scheduler-test-preload-464116" [202f6786-3d4c-4015-b122-ab5cb715a14e] Running
	I1204 20:54:01.851828   50426 system_pods.go:89] "storage-provisioner" [d36b4de4-3c6d-4edb-af63-7b6149e2cae1] Running
	I1204 20:54:01.851834   50426 system_pods.go:126] duration metric: took 202.267179ms to wait for k8s-apps to be running ...
	I1204 20:54:01.851841   50426 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 20:54:01.851882   50426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 20:54:01.866101   50426 system_svc.go:56] duration metric: took 14.253371ms WaitForService to wait for kubelet
	I1204 20:54:01.866125   50426 kubeadm.go:582] duration metric: took 11.920310786s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 20:54:01.866142   50426 node_conditions.go:102] verifying NodePressure condition ...
	I1204 20:54:02.049438   50426 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 20:54:02.049463   50426 node_conditions.go:123] node cpu capacity is 2
	I1204 20:54:02.049472   50426 node_conditions.go:105] duration metric: took 183.324665ms to run NodePressure ...
	I1204 20:54:02.049489   50426 start.go:241] waiting for startup goroutines ...
	I1204 20:54:02.049497   50426 start.go:246] waiting for cluster config update ...
	I1204 20:54:02.049509   50426 start.go:255] writing updated cluster config ...
	I1204 20:54:02.049793   50426 ssh_runner.go:195] Run: rm -f paused
	I1204 20:54:02.096039   50426 start.go:600] kubectl: 1.31.3, cluster: 1.24.4 (minor skew: 7)
	I1204 20:54:02.098065   50426 out.go:201] 
	W1204 20:54:02.099899   50426 out.go:270] ! /usr/local/bin/kubectl is version 1.31.3, which may have incompatibilities with Kubernetes 1.24.4.
	I1204 20:54:02.101269   50426 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I1204 20:54:02.102403   50426 out.go:177] * Done! kubectl is now configured to use "test-preload-464116" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 04 20:54:02 test-preload-464116 crio[667]: time="2024-12-04 20:54:02.951748938Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733345642951727115,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dc8a832a-965f-4059-a4a2-d613cb85c47c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 20:54:02 test-preload-464116 crio[667]: time="2024-12-04 20:54:02.952398193Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=534178bb-b390-4ebd-9de3-5b0227d70304 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:54:02 test-preload-464116 crio[667]: time="2024-12-04 20:54:02.952492186Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=534178bb-b390-4ebd-9de3-5b0227d70304 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:54:02 test-preload-464116 crio[667]: time="2024-12-04 20:54:02.952675504Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2b27917fe0fd411231e81d2a3862bbe3374cfb456da72931aa84fa3b4f8985d0,PodSandboxId:ddd39a25c604d660c2795c4c909dd3ec4703ccb56e7d0ce382bfde08ed85c169,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1733345635719603041,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-lcxdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 953860de-8aa4-41d2-8f9f-768cb9c04979,},Annotations:map[string]string{io.kubernetes.container.hash: 125f6f2b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:974daaf87b10fe9cc0a1fd3aea6c8e64fb7c8b479b55f39872687d7fc4b83472,PodSandboxId:316e621cdfce96b17ebc9bf9b948adec97833a52f234e8757d43998fda0057f3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1733345628618685488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qvlzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: e200091c-b939-43e1-953e-9fea52c6bc48,},Annotations:map[string]string{io.kubernetes.container.hash: 283fc85f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59d4b2587f9dfa13075dd96d28a5910d4db3d39cddefeccdd9696dc94f485942,PodSandboxId:9494300f05fee98001bda21835a0e665b628028978c07af34304c699622539cf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733345628368184645,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3
6b4de4-3c6d-4edb-af63-7b6149e2cae1,},Annotations:map[string]string{io.kubernetes.container.hash: c2c5aaa3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5ff69749a0b5942c9f55961d3e77d02514daf9b92283d0e3c944eb66a139a24,PodSandboxId:9d5e91f644ac08fa442ffc3844fe579913af61ef305addf418719b01b2e264f0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1733345623412348633,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-464116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f9617599f43c4ffd15618c854bb3eb0,},Anno
tations:map[string]string{io.kubernetes.container.hash: 3d70d633,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d074999e0a78382fa55da068ca955948fec6e36b255eab80935ec9a99b8ba255,PodSandboxId:e017ae70b60e8e6b9c6b77b02f34069906c1f254a9f4fac168b10846555a06cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1733345623368917885,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-464116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a76f0a47ff2dc101b33a8e8
c1aaa3a0b,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14eccc47841b4dbfb57e3cc9a47f7f23f6bc2f1883dd272350deb61f019479f5,PodSandboxId:1cbc536c3908ff0a5e8cd503e11511c4041489390ba04c986c4415bf61021792,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1733345623364598665,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-464116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33e99c67cb84217a1db1514bd9457987,}
,Annotations:map[string]string{io.kubernetes.container.hash: 33eda82f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b9b9431640117672ac37a388147731bba88f457f9668bfb55997690433cc2ba,PodSandboxId:89b63615576eb05f2a5baa0f44dea2befd83f3562f3747bed7247522062dffa3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1733345623286999388,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-464116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196675999f267da92d855950038eeb13,},Annotation
s:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=534178bb-b390-4ebd-9de3-5b0227d70304 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:54:02 test-preload-464116 crio[667]: time="2024-12-04 20:54:02.987027806Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=93ad97d9-c2fc-4e15-8bfa-23984bcc30b7 name=/runtime.v1.RuntimeService/Version
	Dec 04 20:54:02 test-preload-464116 crio[667]: time="2024-12-04 20:54:02.987576708Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=93ad97d9-c2fc-4e15-8bfa-23984bcc30b7 name=/runtime.v1.RuntimeService/Version
	Dec 04 20:54:02 test-preload-464116 crio[667]: time="2024-12-04 20:54:02.988559089Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=93c12cbc-2dec-4537-8a3d-9f74641e4f30 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 20:54:02 test-preload-464116 crio[667]: time="2024-12-04 20:54:02.988984431Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733345642988961352,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=93c12cbc-2dec-4537-8a3d-9f74641e4f30 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 20:54:02 test-preload-464116 crio[667]: time="2024-12-04 20:54:02.989460490Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cd3fe5a5-5a52-4407-9acd-387c6ac049f7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:54:02 test-preload-464116 crio[667]: time="2024-12-04 20:54:02.989513226Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cd3fe5a5-5a52-4407-9acd-387c6ac049f7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:54:02 test-preload-464116 crio[667]: time="2024-12-04 20:54:02.989696665Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2b27917fe0fd411231e81d2a3862bbe3374cfb456da72931aa84fa3b4f8985d0,PodSandboxId:ddd39a25c604d660c2795c4c909dd3ec4703ccb56e7d0ce382bfde08ed85c169,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1733345635719603041,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-lcxdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 953860de-8aa4-41d2-8f9f-768cb9c04979,},Annotations:map[string]string{io.kubernetes.container.hash: 125f6f2b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:974daaf87b10fe9cc0a1fd3aea6c8e64fb7c8b479b55f39872687d7fc4b83472,PodSandboxId:316e621cdfce96b17ebc9bf9b948adec97833a52f234e8757d43998fda0057f3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1733345628618685488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qvlzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: e200091c-b939-43e1-953e-9fea52c6bc48,},Annotations:map[string]string{io.kubernetes.container.hash: 283fc85f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59d4b2587f9dfa13075dd96d28a5910d4db3d39cddefeccdd9696dc94f485942,PodSandboxId:9494300f05fee98001bda21835a0e665b628028978c07af34304c699622539cf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733345628368184645,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3
6b4de4-3c6d-4edb-af63-7b6149e2cae1,},Annotations:map[string]string{io.kubernetes.container.hash: c2c5aaa3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5ff69749a0b5942c9f55961d3e77d02514daf9b92283d0e3c944eb66a139a24,PodSandboxId:9d5e91f644ac08fa442ffc3844fe579913af61ef305addf418719b01b2e264f0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1733345623412348633,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-464116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f9617599f43c4ffd15618c854bb3eb0,},Anno
tations:map[string]string{io.kubernetes.container.hash: 3d70d633,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d074999e0a78382fa55da068ca955948fec6e36b255eab80935ec9a99b8ba255,PodSandboxId:e017ae70b60e8e6b9c6b77b02f34069906c1f254a9f4fac168b10846555a06cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1733345623368917885,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-464116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a76f0a47ff2dc101b33a8e8
c1aaa3a0b,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14eccc47841b4dbfb57e3cc9a47f7f23f6bc2f1883dd272350deb61f019479f5,PodSandboxId:1cbc536c3908ff0a5e8cd503e11511c4041489390ba04c986c4415bf61021792,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1733345623364598665,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-464116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33e99c67cb84217a1db1514bd9457987,}
,Annotations:map[string]string{io.kubernetes.container.hash: 33eda82f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b9b9431640117672ac37a388147731bba88f457f9668bfb55997690433cc2ba,PodSandboxId:89b63615576eb05f2a5baa0f44dea2befd83f3562f3747bed7247522062dffa3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1733345623286999388,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-464116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196675999f267da92d855950038eeb13,},Annotation
s:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cd3fe5a5-5a52-4407-9acd-387c6ac049f7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:54:03 test-preload-464116 crio[667]: time="2024-12-04 20:54:03.023859155Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a705db89-0c15-4f70-a575-bdef597e1866 name=/runtime.v1.RuntimeService/Version
	Dec 04 20:54:03 test-preload-464116 crio[667]: time="2024-12-04 20:54:03.023932064Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a705db89-0c15-4f70-a575-bdef597e1866 name=/runtime.v1.RuntimeService/Version
	Dec 04 20:54:03 test-preload-464116 crio[667]: time="2024-12-04 20:54:03.024824534Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0a940a0e-cee5-4bad-9190-c69e1a272535 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 20:54:03 test-preload-464116 crio[667]: time="2024-12-04 20:54:03.025357601Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733345643025269583,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0a940a0e-cee5-4bad-9190-c69e1a272535 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 20:54:03 test-preload-464116 crio[667]: time="2024-12-04 20:54:03.025974717Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=894b957d-c5e3-43cb-aa4c-a81bc15eef06 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:54:03 test-preload-464116 crio[667]: time="2024-12-04 20:54:03.026038873Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=894b957d-c5e3-43cb-aa4c-a81bc15eef06 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:54:03 test-preload-464116 crio[667]: time="2024-12-04 20:54:03.026226490Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2b27917fe0fd411231e81d2a3862bbe3374cfb456da72931aa84fa3b4f8985d0,PodSandboxId:ddd39a25c604d660c2795c4c909dd3ec4703ccb56e7d0ce382bfde08ed85c169,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1733345635719603041,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-lcxdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 953860de-8aa4-41d2-8f9f-768cb9c04979,},Annotations:map[string]string{io.kubernetes.container.hash: 125f6f2b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:974daaf87b10fe9cc0a1fd3aea6c8e64fb7c8b479b55f39872687d7fc4b83472,PodSandboxId:316e621cdfce96b17ebc9bf9b948adec97833a52f234e8757d43998fda0057f3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1733345628618685488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qvlzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: e200091c-b939-43e1-953e-9fea52c6bc48,},Annotations:map[string]string{io.kubernetes.container.hash: 283fc85f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59d4b2587f9dfa13075dd96d28a5910d4db3d39cddefeccdd9696dc94f485942,PodSandboxId:9494300f05fee98001bda21835a0e665b628028978c07af34304c699622539cf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733345628368184645,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3
6b4de4-3c6d-4edb-af63-7b6149e2cae1,},Annotations:map[string]string{io.kubernetes.container.hash: c2c5aaa3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5ff69749a0b5942c9f55961d3e77d02514daf9b92283d0e3c944eb66a139a24,PodSandboxId:9d5e91f644ac08fa442ffc3844fe579913af61ef305addf418719b01b2e264f0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1733345623412348633,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-464116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f9617599f43c4ffd15618c854bb3eb0,},Anno
tations:map[string]string{io.kubernetes.container.hash: 3d70d633,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d074999e0a78382fa55da068ca955948fec6e36b255eab80935ec9a99b8ba255,PodSandboxId:e017ae70b60e8e6b9c6b77b02f34069906c1f254a9f4fac168b10846555a06cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1733345623368917885,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-464116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a76f0a47ff2dc101b33a8e8
c1aaa3a0b,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14eccc47841b4dbfb57e3cc9a47f7f23f6bc2f1883dd272350deb61f019479f5,PodSandboxId:1cbc536c3908ff0a5e8cd503e11511c4041489390ba04c986c4415bf61021792,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1733345623364598665,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-464116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33e99c67cb84217a1db1514bd9457987,}
,Annotations:map[string]string{io.kubernetes.container.hash: 33eda82f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b9b9431640117672ac37a388147731bba88f457f9668bfb55997690433cc2ba,PodSandboxId:89b63615576eb05f2a5baa0f44dea2befd83f3562f3747bed7247522062dffa3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1733345623286999388,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-464116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196675999f267da92d855950038eeb13,},Annotation
s:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=894b957d-c5e3-43cb-aa4c-a81bc15eef06 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:54:03 test-preload-464116 crio[667]: time="2024-12-04 20:54:03.057531201Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a2d62bdc-532a-4b74-aa63-5a6669335335 name=/runtime.v1.RuntimeService/Version
	Dec 04 20:54:03 test-preload-464116 crio[667]: time="2024-12-04 20:54:03.057603392Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a2d62bdc-532a-4b74-aa63-5a6669335335 name=/runtime.v1.RuntimeService/Version
	Dec 04 20:54:03 test-preload-464116 crio[667]: time="2024-12-04 20:54:03.058485123Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=45c218a4-1651-4ca0-a56c-48a48096e11c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 20:54:03 test-preload-464116 crio[667]: time="2024-12-04 20:54:03.058938736Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733345643058916356,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=45c218a4-1651-4ca0-a56c-48a48096e11c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 20:54:03 test-preload-464116 crio[667]: time="2024-12-04 20:54:03.059395688Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0dbb8e32-d213-4d34-a851-1bbd293a3af2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:54:03 test-preload-464116 crio[667]: time="2024-12-04 20:54:03.059443843Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0dbb8e32-d213-4d34-a851-1bbd293a3af2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 20:54:03 test-preload-464116 crio[667]: time="2024-12-04 20:54:03.059603570Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2b27917fe0fd411231e81d2a3862bbe3374cfb456da72931aa84fa3b4f8985d0,PodSandboxId:ddd39a25c604d660c2795c4c909dd3ec4703ccb56e7d0ce382bfde08ed85c169,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1733345635719603041,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-lcxdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 953860de-8aa4-41d2-8f9f-768cb9c04979,},Annotations:map[string]string{io.kubernetes.container.hash: 125f6f2b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:974daaf87b10fe9cc0a1fd3aea6c8e64fb7c8b479b55f39872687d7fc4b83472,PodSandboxId:316e621cdfce96b17ebc9bf9b948adec97833a52f234e8757d43998fda0057f3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1733345628618685488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qvlzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: e200091c-b939-43e1-953e-9fea52c6bc48,},Annotations:map[string]string{io.kubernetes.container.hash: 283fc85f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59d4b2587f9dfa13075dd96d28a5910d4db3d39cddefeccdd9696dc94f485942,PodSandboxId:9494300f05fee98001bda21835a0e665b628028978c07af34304c699622539cf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733345628368184645,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3
6b4de4-3c6d-4edb-af63-7b6149e2cae1,},Annotations:map[string]string{io.kubernetes.container.hash: c2c5aaa3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5ff69749a0b5942c9f55961d3e77d02514daf9b92283d0e3c944eb66a139a24,PodSandboxId:9d5e91f644ac08fa442ffc3844fe579913af61ef305addf418719b01b2e264f0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1733345623412348633,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-464116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f9617599f43c4ffd15618c854bb3eb0,},Anno
tations:map[string]string{io.kubernetes.container.hash: 3d70d633,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d074999e0a78382fa55da068ca955948fec6e36b255eab80935ec9a99b8ba255,PodSandboxId:e017ae70b60e8e6b9c6b77b02f34069906c1f254a9f4fac168b10846555a06cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1733345623368917885,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-464116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a76f0a47ff2dc101b33a8e8
c1aaa3a0b,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14eccc47841b4dbfb57e3cc9a47f7f23f6bc2f1883dd272350deb61f019479f5,PodSandboxId:1cbc536c3908ff0a5e8cd503e11511c4041489390ba04c986c4415bf61021792,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1733345623364598665,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-464116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33e99c67cb84217a1db1514bd9457987,}
,Annotations:map[string]string{io.kubernetes.container.hash: 33eda82f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b9b9431640117672ac37a388147731bba88f457f9668bfb55997690433cc2ba,PodSandboxId:89b63615576eb05f2a5baa0f44dea2befd83f3562f3747bed7247522062dffa3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1733345623286999388,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-464116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196675999f267da92d855950038eeb13,},Annotation
s:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0dbb8e32-d213-4d34-a851-1bbd293a3af2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2b27917fe0fd4       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   7 seconds ago       Running             coredns                   1                   ddd39a25c604d       coredns-6d4b75cb6d-lcxdx
	974daaf87b10f       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   14 seconds ago      Running             kube-proxy                1                   316e621cdfce9       kube-proxy-qvlzn
	59d4b2587f9df       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       1                   9494300f05fee       storage-provisioner
	b5ff69749a0b5       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   19 seconds ago      Running             etcd                      1                   9d5e91f644ac0       etcd-test-preload-464116
	d074999e0a783       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   19 seconds ago      Running             kube-controller-manager   1                   e017ae70b60e8       kube-controller-manager-test-preload-464116
	14eccc47841b4       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   19 seconds ago      Running             kube-apiserver            1                   1cbc536c3908f       kube-apiserver-test-preload-464116
	3b9b943164011       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   19 seconds ago      Running             kube-scheduler            1                   89b63615576eb       kube-scheduler-test-preload-464116
	
	
	==> coredns [2b27917fe0fd411231e81d2a3862bbe3374cfb456da72931aa84fa3b4f8985d0] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:45540 - 52549 "HINFO IN 5369157419880331076.7427348210941578776. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.070488681s
	
	
	==> describe nodes <==
	Name:               test-preload-464116
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-464116
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59
	                    minikube.k8s.io/name=test-preload-464116
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_04T20_52_27_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 20:52:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-464116
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 20:53:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 20:53:57 +0000   Wed, 04 Dec 2024 20:52:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 20:53:57 +0000   Wed, 04 Dec 2024 20:52:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 20:53:57 +0000   Wed, 04 Dec 2024 20:52:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 20:53:57 +0000   Wed, 04 Dec 2024 20:53:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.6
	  Hostname:    test-preload-464116
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 15f0df8c997e4049a3a6a77f40eb938d
	  System UUID:                15f0df8c-997e-4049-a3a6-a77f40eb938d
	  Boot ID:                    f971e392-5c73-4ccb-839e-075fae2536b3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-lcxdx                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     84s
	  kube-system                 etcd-test-preload-464116                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         96s
	  kube-system                 kube-apiserver-test-preload-464116             250m (12%)    0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-controller-manager-test-preload-464116    200m (10%)    0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-proxy-qvlzn                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-scheduler-test-preload-464116             100m (5%)     0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14s                kube-proxy       
	  Normal  Starting                 81s                kube-proxy       
	  Normal  Starting                 96s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  96s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  96s                kubelet          Node test-preload-464116 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    96s                kubelet          Node test-preload-464116 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     96s                kubelet          Node test-preload-464116 status is now: NodeHasSufficientPID
	  Normal  NodeReady                86s                kubelet          Node test-preload-464116 status is now: NodeReady
	  Normal  RegisteredNode           85s                node-controller  Node test-preload-464116 event: Registered Node test-preload-464116 in Controller
	  Normal  Starting                 21s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x8 over 21s)  kubelet          Node test-preload-464116 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 21s)  kubelet          Node test-preload-464116 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 21s)  kubelet          Node test-preload-464116 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3s                 node-controller  Node test-preload-464116 event: Registered Node test-preload-464116 in Controller
	
	
	==> dmesg <==
	[Dec 4 20:53] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052445] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037310] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.823017] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.908425] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.586232] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.222805] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.058604] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.047821] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.182080] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.110052] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.252643] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[ +13.012707] systemd-fstab-generator[980]: Ignoring "noauto" option for root device
	[  +0.058035] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.830980] systemd-fstab-generator[1110]: Ignoring "noauto" option for root device
	[  +5.897663] kauditd_printk_skb: 105 callbacks suppressed
	[  +1.706724] systemd-fstab-generator[1734]: Ignoring "noauto" option for root device
	[  +5.532247] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [b5ff69749a0b5942c9f55961d3e77d02514daf9b92283d0e3c944eb66a139a24] <==
	{"level":"info","ts":"2024-12-04T20:53:43.729Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"6f26d2d338759d80","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-12-04T20:53:43.736Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-12-04T20:53:43.736Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 switched to configuration voters=(8009320791952170368)"}
	{"level":"info","ts":"2024-12-04T20:53:43.736Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1a1020f766a5ac01","local-member-id":"6f26d2d338759d80","added-peer-id":"6f26d2d338759d80","added-peer-peer-urls":["https://192.168.39.6:2380"]}
	{"level":"info","ts":"2024-12-04T20:53:43.738Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1a1020f766a5ac01","local-member-id":"6f26d2d338759d80","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-04T20:53:43.738Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-04T20:53:43.741Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-04T20:53:43.742Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"6f26d2d338759d80","initial-advertise-peer-urls":["https://192.168.39.6:2380"],"listen-peer-urls":["https://192.168.39.6:2380"],"advertise-client-urls":["https://192.168.39.6:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.6:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-04T20:53:43.742Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-04T20:53:43.743Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.6:2380"}
	{"level":"info","ts":"2024-12-04T20:53:43.743Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.6:2380"}
	{"level":"info","ts":"2024-12-04T20:53:44.798Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 is starting a new election at term 2"}
	{"level":"info","ts":"2024-12-04T20:53:44.798Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-12-04T20:53:44.798Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 received MsgPreVoteResp from 6f26d2d338759d80 at term 2"}
	{"level":"info","ts":"2024-12-04T20:53:44.798Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 became candidate at term 3"}
	{"level":"info","ts":"2024-12-04T20:53:44.798Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 received MsgVoteResp from 6f26d2d338759d80 at term 3"}
	{"level":"info","ts":"2024-12-04T20:53:44.798Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 became leader at term 3"}
	{"level":"info","ts":"2024-12-04T20:53:44.798Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6f26d2d338759d80 elected leader 6f26d2d338759d80 at term 3"}
	{"level":"info","ts":"2024-12-04T20:53:44.803Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"6f26d2d338759d80","local-member-attributes":"{Name:test-preload-464116 ClientURLs:[https://192.168.39.6:2379]}","request-path":"/0/members/6f26d2d338759d80/attributes","cluster-id":"1a1020f766a5ac01","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-04T20:53:44.803Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-04T20:53:44.803Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-04T20:53:44.805Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.6:2379"}
	{"level":"info","ts":"2024-12-04T20:53:44.805Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-04T20:53:44.805Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-04T20:53:44.807Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 20:54:03 up 0 min,  0 users,  load average: 0.60, 0.17, 0.06
	Linux test-preload-464116 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [14eccc47841b4dbfb57e3cc9a47f7f23f6bc2f1883dd272350deb61f019479f5] <==
	I1204 20:53:47.151021       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I1204 20:53:47.195566       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I1204 20:53:47.151043       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I1204 20:53:47.151376       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I1204 20:53:47.151446       1 available_controller.go:491] Starting AvailableConditionController
	I1204 20:53:47.199743       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	E1204 20:53:47.258330       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I1204 20:53:47.263365       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1204 20:53:47.269725       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1204 20:53:47.269809       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1204 20:53:47.269860       1 cache.go:39] Caches are synced for autoregister controller
	I1204 20:53:47.295591       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1204 20:53:47.321018       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1204 20:53:47.339758       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1204 20:53:47.339979       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1204 20:53:47.836081       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1204 20:53:48.157688       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1204 20:53:48.892894       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1204 20:53:48.907166       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1204 20:53:48.949490       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1204 20:53:48.972022       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1204 20:53:48.972865       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I1204 20:53:48.985528       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1204 20:54:00.315665       1 controller.go:611] quota admission added evaluator for: endpoints
	I1204 20:54:00.372463       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [d074999e0a78382fa55da068ca955948fec6e36b255eab80935ec9a99b8ba255] <==
	W1204 20:54:00.315034       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="test-preload-464116" does not exist
	I1204 20:54:00.338471       1 shared_informer.go:262] Caches are synced for persistent volume
	I1204 20:54:00.358067       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1204 20:54:00.359892       1 shared_informer.go:262] Caches are synced for cronjob
	I1204 20:54:00.360953       1 shared_informer.go:262] Caches are synced for taint
	I1204 20:54:00.361139       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I1204 20:54:00.361235       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W1204 20:54:00.361423       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-464116. Assuming now as a timestamp.
	I1204 20:54:00.361469       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I1204 20:54:00.361650       1 event.go:294] "Event occurred" object="test-preload-464116" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-464116 event: Registered Node test-preload-464116 in Controller"
	I1204 20:54:00.387232       1 shared_informer.go:262] Caches are synced for TTL
	I1204 20:54:00.387354       1 shared_informer.go:262] Caches are synced for attach detach
	I1204 20:54:00.388537       1 shared_informer.go:262] Caches are synced for daemon sets
	I1204 20:54:00.402828       1 shared_informer.go:262] Caches are synced for GC
	I1204 20:54:00.408110       1 shared_informer.go:262] Caches are synced for node
	I1204 20:54:00.408190       1 range_allocator.go:173] Starting range CIDR allocator
	I1204 20:54:00.408216       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I1204 20:54:00.408279       1 shared_informer.go:262] Caches are synced for cidrallocator
	I1204 20:54:00.435099       1 shared_informer.go:262] Caches are synced for resource quota
	I1204 20:54:00.486935       1 shared_informer.go:262] Caches are synced for service account
	I1204 20:54:00.512578       1 shared_informer.go:262] Caches are synced for resource quota
	I1204 20:54:00.546429       1 shared_informer.go:262] Caches are synced for namespace
	I1204 20:54:00.937389       1 shared_informer.go:262] Caches are synced for garbage collector
	I1204 20:54:00.937473       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1204 20:54:00.950743       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [974daaf87b10fe9cc0a1fd3aea6c8e64fb7c8b479b55f39872687d7fc4b83472] <==
	I1204 20:53:48.863900       1 node.go:163] Successfully retrieved node IP: 192.168.39.6
	I1204 20:53:48.864080       1 server_others.go:138] "Detected node IP" address="192.168.39.6"
	I1204 20:53:48.864179       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1204 20:53:48.958407       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1204 20:53:48.958436       1 server_others.go:206] "Using iptables Proxier"
	I1204 20:53:48.959624       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1204 20:53:48.960704       1 server.go:661] "Version info" version="v1.24.4"
	I1204 20:53:48.960732       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 20:53:48.964990       1 config.go:317] "Starting service config controller"
	I1204 20:53:48.965061       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1204 20:53:48.965088       1 config.go:226] "Starting endpoint slice config controller"
	I1204 20:53:48.965102       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1204 20:53:48.967466       1 config.go:444] "Starting node config controller"
	I1204 20:53:48.967486       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1204 20:53:49.065256       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1204 20:53:49.065388       1 shared_informer.go:262] Caches are synced for service config
	I1204 20:53:49.067593       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [3b9b9431640117672ac37a388147731bba88f457f9668bfb55997690433cc2ba] <==
	I1204 20:53:44.317562       1 serving.go:348] Generated self-signed cert in-memory
	W1204 20:53:47.178640       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1204 20:53:47.178783       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1204 20:53:47.178858       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1204 20:53:47.178884       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1204 20:53:47.251733       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I1204 20:53:47.251811       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 20:53:47.261851       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1204 20:53:47.262951       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1204 20:53:47.263048       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1204 20:53:47.263225       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1204 20:53:47.363488       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 04 20:53:47 test-preload-464116 kubelet[1117]: I1204 20:53:47.627995    1117 apiserver.go:52] "Watching apiserver"
	Dec 04 20:53:47 test-preload-464116 kubelet[1117]: I1204 20:53:47.631841    1117 topology_manager.go:200] "Topology Admit Handler"
	Dec 04 20:53:47 test-preload-464116 kubelet[1117]: I1204 20:53:47.632079    1117 topology_manager.go:200] "Topology Admit Handler"
	Dec 04 20:53:47 test-preload-464116 kubelet[1117]: I1204 20:53:47.632177    1117 topology_manager.go:200] "Topology Admit Handler"
	Dec 04 20:53:47 test-preload-464116 kubelet[1117]: E1204 20:53:47.634120    1117 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-lcxdx" podUID=953860de-8aa4-41d2-8f9f-768cb9c04979
	Dec 04 20:53:47 test-preload-464116 kubelet[1117]: E1204 20:53:47.676798    1117 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Dec 04 20:53:47 test-preload-464116 kubelet[1117]: I1204 20:53:47.689719    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hdh8\" (UniqueName: \"kubernetes.io/projected/e200091c-b939-43e1-953e-9fea52c6bc48-kube-api-access-9hdh8\") pod \"kube-proxy-qvlzn\" (UID: \"e200091c-b939-43e1-953e-9fea52c6bc48\") " pod="kube-system/kube-proxy-qvlzn"
	Dec 04 20:53:47 test-preload-464116 kubelet[1117]: I1204 20:53:47.689779    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e200091c-b939-43e1-953e-9fea52c6bc48-xtables-lock\") pod \"kube-proxy-qvlzn\" (UID: \"e200091c-b939-43e1-953e-9fea52c6bc48\") " pod="kube-system/kube-proxy-qvlzn"
	Dec 04 20:53:47 test-preload-464116 kubelet[1117]: I1204 20:53:47.689806    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d36b4de4-3c6d-4edb-af63-7b6149e2cae1-tmp\") pod \"storage-provisioner\" (UID: \"d36b4de4-3c6d-4edb-af63-7b6149e2cae1\") " pod="kube-system/storage-provisioner"
	Dec 04 20:53:47 test-preload-464116 kubelet[1117]: I1204 20:53:47.689832    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e200091c-b939-43e1-953e-9fea52c6bc48-lib-modules\") pod \"kube-proxy-qvlzn\" (UID: \"e200091c-b939-43e1-953e-9fea52c6bc48\") " pod="kube-system/kube-proxy-qvlzn"
	Dec 04 20:53:47 test-preload-464116 kubelet[1117]: I1204 20:53:47.689856    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tlt8\" (UniqueName: \"kubernetes.io/projected/d36b4de4-3c6d-4edb-af63-7b6149e2cae1-kube-api-access-2tlt8\") pod \"storage-provisioner\" (UID: \"d36b4de4-3c6d-4edb-af63-7b6149e2cae1\") " pod="kube-system/storage-provisioner"
	Dec 04 20:53:47 test-preload-464116 kubelet[1117]: I1204 20:53:47.689880    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e200091c-b939-43e1-953e-9fea52c6bc48-kube-proxy\") pod \"kube-proxy-qvlzn\" (UID: \"e200091c-b939-43e1-953e-9fea52c6bc48\") " pod="kube-system/kube-proxy-qvlzn"
	Dec 04 20:53:47 test-preload-464116 kubelet[1117]: I1204 20:53:47.689898    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/953860de-8aa4-41d2-8f9f-768cb9c04979-config-volume\") pod \"coredns-6d4b75cb6d-lcxdx\" (UID: \"953860de-8aa4-41d2-8f9f-768cb9c04979\") " pod="kube-system/coredns-6d4b75cb6d-lcxdx"
	Dec 04 20:53:47 test-preload-464116 kubelet[1117]: I1204 20:53:47.689923    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h7cb\" (UniqueName: \"kubernetes.io/projected/953860de-8aa4-41d2-8f9f-768cb9c04979-kube-api-access-9h7cb\") pod \"coredns-6d4b75cb6d-lcxdx\" (UID: \"953860de-8aa4-41d2-8f9f-768cb9c04979\") " pod="kube-system/coredns-6d4b75cb6d-lcxdx"
	Dec 04 20:53:47 test-preload-464116 kubelet[1117]: I1204 20:53:47.689937    1117 reconciler.go:159] "Reconciler: start to sync state"
	Dec 04 20:53:47 test-preload-464116 kubelet[1117]: E1204 20:53:47.795807    1117 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 04 20:53:47 test-preload-464116 kubelet[1117]: E1204 20:53:47.795910    1117 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/953860de-8aa4-41d2-8f9f-768cb9c04979-config-volume podName:953860de-8aa4-41d2-8f9f-768cb9c04979 nodeName:}" failed. No retries permitted until 2024-12-04 20:53:48.295876258 +0000 UTC m=+5.818635262 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/953860de-8aa4-41d2-8f9f-768cb9c04979-config-volume") pod "coredns-6d4b75cb6d-lcxdx" (UID: "953860de-8aa4-41d2-8f9f-768cb9c04979") : object "kube-system"/"coredns" not registered
	Dec 04 20:53:48 test-preload-464116 kubelet[1117]: E1204 20:53:48.298188    1117 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 04 20:53:48 test-preload-464116 kubelet[1117]: E1204 20:53:48.298265    1117 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/953860de-8aa4-41d2-8f9f-768cb9c04979-config-volume podName:953860de-8aa4-41d2-8f9f-768cb9c04979 nodeName:}" failed. No retries permitted until 2024-12-04 20:53:49.298249956 +0000 UTC m=+6.821008944 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/953860de-8aa4-41d2-8f9f-768cb9c04979-config-volume") pod "coredns-6d4b75cb6d-lcxdx" (UID: "953860de-8aa4-41d2-8f9f-768cb9c04979") : object "kube-system"/"coredns" not registered
	Dec 04 20:53:49 test-preload-464116 kubelet[1117]: E1204 20:53:49.303230    1117 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 04 20:53:49 test-preload-464116 kubelet[1117]: E1204 20:53:49.303345    1117 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/953860de-8aa4-41d2-8f9f-768cb9c04979-config-volume podName:953860de-8aa4-41d2-8f9f-768cb9c04979 nodeName:}" failed. No retries permitted until 2024-12-04 20:53:51.303329391 +0000 UTC m=+8.826088384 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/953860de-8aa4-41d2-8f9f-768cb9c04979-config-volume") pod "coredns-6d4b75cb6d-lcxdx" (UID: "953860de-8aa4-41d2-8f9f-768cb9c04979") : object "kube-system"/"coredns" not registered
	Dec 04 20:53:49 test-preload-464116 kubelet[1117]: E1204 20:53:49.711768    1117 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-lcxdx" podUID=953860de-8aa4-41d2-8f9f-768cb9c04979
	Dec 04 20:53:51 test-preload-464116 kubelet[1117]: E1204 20:53:51.318375    1117 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 04 20:53:51 test-preload-464116 kubelet[1117]: E1204 20:53:51.318520    1117 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/953860de-8aa4-41d2-8f9f-768cb9c04979-config-volume podName:953860de-8aa4-41d2-8f9f-768cb9c04979 nodeName:}" failed. No retries permitted until 2024-12-04 20:53:55.318484709 +0000 UTC m=+12.841243713 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/953860de-8aa4-41d2-8f9f-768cb9c04979-config-volume") pod "coredns-6d4b75cb6d-lcxdx" (UID: "953860de-8aa4-41d2-8f9f-768cb9c04979") : object "kube-system"/"coredns" not registered
	Dec 04 20:53:51 test-preload-464116 kubelet[1117]: E1204 20:53:51.712266    1117 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-lcxdx" podUID=953860de-8aa4-41d2-8f9f-768cb9c04979
	
	
	==> storage-provisioner [59d4b2587f9dfa13075dd96d28a5910d4db3d39cddefeccdd9696dc94f485942] <==
	I1204 20:53:48.443544       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-464116 -n test-preload-464116
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-464116 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-464116" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-464116
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-464116: (1.090542191s)
--- FAIL: TestPreload (168.51s)

x
+
TestKubernetesUpgrade (388.16s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade


=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-697588 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-697588 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m37.27979059s)

-- stdout --
	* [kubernetes-upgrade-697588] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19985
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19985-10581/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19985-10581/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-697588" primary control-plane node in "kubernetes-upgrade-697588" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I1204 20:58:18.675067   54052 out.go:345] Setting OutFile to fd 1 ...
	I1204 20:58:18.675187   54052 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 20:58:18.675197   54052 out.go:358] Setting ErrFile to fd 2...
	I1204 20:58:18.675201   54052 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 20:58:18.675383   54052 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19985-10581/.minikube/bin
	I1204 20:58:18.675952   54052 out.go:352] Setting JSON to false
	I1204 20:58:18.676812   54052 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6049,"bootTime":1733339850,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 20:58:18.676897   54052 start.go:139] virtualization: kvm guest
	I1204 20:58:18.678779   54052 out.go:177] * [kubernetes-upgrade-697588] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1204 20:58:18.679918   54052 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 20:58:18.679925   54052 notify.go:220] Checking for updates...
	I1204 20:58:18.682104   54052 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 20:58:18.683154   54052 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 20:58:18.684210   54052 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 20:58:18.685361   54052 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1204 20:58:18.686436   54052 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 20:58:18.687897   54052 config.go:182] Loaded profile config "NoKubernetes-863313": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1204 20:58:18.688053   54052 config.go:182] Loaded profile config "cert-expiration-994058": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:58:18.688149   54052 config.go:182] Loaded profile config "running-upgrade-033002": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1204 20:58:18.688218   54052 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 20:58:18.727417   54052 out.go:177] * Using the kvm2 driver based on user configuration
	I1204 20:58:18.728699   54052 start.go:297] selected driver: kvm2
	I1204 20:58:18.728717   54052 start.go:901] validating driver "kvm2" against <nil>
	I1204 20:58:18.728731   54052 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 20:58:18.729751   54052 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 20:58:18.729846   54052 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19985-10581/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1204 20:58:18.744888   54052 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1204 20:58:18.744926   54052 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 20:58:18.745170   54052 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1204 20:58:18.745196   54052 cni.go:84] Creating CNI manager for ""
	I1204 20:58:18.745236   54052 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 20:58:18.745244   54052 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1204 20:58:18.745286   54052 start.go:340] cluster config:
	{Name:kubernetes-upgrade-697588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-697588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 20:58:18.745373   54052 iso.go:125] acquiring lock: {Name:mk5fb0f3f6da76e6cd812291a551e1592ef2c232 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 20:58:18.746819   54052 out.go:177] * Starting "kubernetes-upgrade-697588" primary control-plane node in "kubernetes-upgrade-697588" cluster
	I1204 20:58:18.748055   54052 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1204 20:58:18.748088   54052 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1204 20:58:18.748105   54052 cache.go:56] Caching tarball of preloaded images
	I1204 20:58:18.748183   54052 preload.go:172] Found /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 20:58:18.748193   54052 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1204 20:58:18.748268   54052 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kubernetes-upgrade-697588/config.json ...
	I1204 20:58:18.748284   54052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kubernetes-upgrade-697588/config.json: {Name:mk0bb342c81e9dfc1b5a89aff0c18e03f7286a2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:58:18.748403   54052 start.go:360] acquireMachinesLock for kubernetes-upgrade-697588: {Name:mkf124e8b45170ae95981b24944344de6899c5b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 20:58:26.388859   54052 start.go:364] duration metric: took 7.640415511s to acquireMachinesLock for "kubernetes-upgrade-697588"
	I1204 20:58:26.388933   54052 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-697588 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-697588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 20:58:26.389049   54052 start.go:125] createHost starting for "" (driver="kvm2")
	I1204 20:58:26.391332   54052 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 20:58:26.391563   54052 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:58:26.391621   54052 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:58:26.410405   54052 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43943
	I1204 20:58:26.411761   54052 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:58:26.412353   54052 main.go:141] libmachine: Using API Version  1
	I1204 20:58:26.412377   54052 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:58:26.412745   54052 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:58:26.412910   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetMachineName
	I1204 20:58:26.413028   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .DriverName
	I1204 20:58:26.413177   54052 start.go:159] libmachine.API.Create for "kubernetes-upgrade-697588" (driver="kvm2")
	I1204 20:58:26.413216   54052 client.go:168] LocalClient.Create starting
	I1204 20:58:26.413257   54052 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem
	I1204 20:58:26.413313   54052 main.go:141] libmachine: Decoding PEM data...
	I1204 20:58:26.413337   54052 main.go:141] libmachine: Parsing certificate...
	I1204 20:58:26.413410   54052 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem
	I1204 20:58:26.413441   54052 main.go:141] libmachine: Decoding PEM data...
	I1204 20:58:26.413457   54052 main.go:141] libmachine: Parsing certificate...
	I1204 20:58:26.413497   54052 main.go:141] libmachine: Running pre-create checks...
	I1204 20:58:26.413517   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .PreCreateCheck
	I1204 20:58:26.413906   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetConfigRaw
	I1204 20:58:26.414381   54052 main.go:141] libmachine: Creating machine...
	I1204 20:58:26.414401   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .Create
	I1204 20:58:26.414580   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Creating KVM machine...
	I1204 20:58:26.416053   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | found existing default KVM network
	I1204 20:58:26.417630   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | I1204 20:58:26.417469   54151 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:7c:22:d1} reservation:<nil>}
	I1204 20:58:26.418517   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | I1204 20:58:26.418335   54151 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:76:a9:51} reservation:<nil>}
	I1204 20:58:26.419575   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | I1204 20:58:26.419467   54151 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:a9:b8:5f} reservation:<nil>}
	I1204 20:58:26.420892   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | I1204 20:58:26.420798   54151 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003811c0}
	I1204 20:58:26.420926   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | created network xml: 
	I1204 20:58:26.420940   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | <network>
	I1204 20:58:26.420954   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG |   <name>mk-kubernetes-upgrade-697588</name>
	I1204 20:58:26.420967   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG |   <dns enable='no'/>
	I1204 20:58:26.420977   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG |   
	I1204 20:58:26.420989   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I1204 20:58:26.421010   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG |     <dhcp>
	I1204 20:58:26.421021   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I1204 20:58:26.421030   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG |     </dhcp>
	I1204 20:58:26.421039   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG |   </ip>
	I1204 20:58:26.421047   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG |   
	I1204 20:58:26.421059   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | </network>
	I1204 20:58:26.421069   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | 
	I1204 20:58:26.426422   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | trying to create private KVM network mk-kubernetes-upgrade-697588 192.168.72.0/24...
	I1204 20:58:26.509937   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | private KVM network mk-kubernetes-upgrade-697588 192.168.72.0/24 created
	I1204 20:58:26.510012   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | I1204 20:58:26.509901   54151 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 20:58:26.510037   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Setting up store path in /home/jenkins/minikube-integration/19985-10581/.minikube/machines/kubernetes-upgrade-697588 ...
	I1204 20:58:26.510052   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Building disk image from file:///home/jenkins/minikube-integration/19985-10581/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1204 20:58:26.510069   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Downloading /home/jenkins/minikube-integration/19985-10581/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19985-10581/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1204 20:58:26.801264   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | I1204 20:58:26.801042   54151 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/kubernetes-upgrade-697588/id_rsa...
	I1204 20:58:27.001012   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | I1204 20:58:27.000869   54151 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/kubernetes-upgrade-697588/kubernetes-upgrade-697588.rawdisk...
	I1204 20:58:27.001047   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | Writing magic tar header
	I1204 20:58:27.001145   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | Writing SSH key tar header
	I1204 20:58:27.001184   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | I1204 20:58:27.001040   54151 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19985-10581/.minikube/machines/kubernetes-upgrade-697588 ...
	I1204 20:58:27.001215   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube/machines/kubernetes-upgrade-697588 (perms=drwx------)
	I1204 20:58:27.001246   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube/machines (perms=drwxr-xr-x)
	I1204 20:58:27.001268   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/kubernetes-upgrade-697588
	I1204 20:58:27.001286   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube/machines
	I1204 20:58:27.001300   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 20:58:27.001310   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube (perms=drwxr-xr-x)
	I1204 20:58:27.001335   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581 (perms=drwxrwxr-x)
	I1204 20:58:27.001359   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1204 20:58:27.001374   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581
	I1204 20:58:27.001386   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1204 20:58:27.001413   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | Checking permissions on dir: /home/jenkins
	I1204 20:58:27.001427   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1204 20:58:27.001441   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Creating domain...
	I1204 20:58:27.001452   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | Checking permissions on dir: /home
	I1204 20:58:27.001463   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | Skipping /home - not owner
	I1204 20:58:27.002460   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) define libvirt domain using xml: 
	I1204 20:58:27.002478   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) <domain type='kvm'>
	I1204 20:58:27.002494   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)   <name>kubernetes-upgrade-697588</name>
	I1204 20:58:27.002502   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)   <memory unit='MiB'>2200</memory>
	I1204 20:58:27.002512   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)   <vcpu>2</vcpu>
	I1204 20:58:27.002523   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)   <features>
	I1204 20:58:27.002534   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)     <acpi/>
	I1204 20:58:27.002545   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)     <apic/>
	I1204 20:58:27.002556   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)     <pae/>
	I1204 20:58:27.002566   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)     
	I1204 20:58:27.002574   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)   </features>
	I1204 20:58:27.002583   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)   <cpu mode='host-passthrough'>
	I1204 20:58:27.002593   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)   
	I1204 20:58:27.002600   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)   </cpu>
	I1204 20:58:27.002609   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)   <os>
	I1204 20:58:27.002620   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)     <type>hvm</type>
	I1204 20:58:27.002632   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)     <boot dev='cdrom'/>
	I1204 20:58:27.002643   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)     <boot dev='hd'/>
	I1204 20:58:27.002662   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)     <bootmenu enable='no'/>
	I1204 20:58:27.002672   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)   </os>
	I1204 20:58:27.002682   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)   <devices>
	I1204 20:58:27.002693   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)     <disk type='file' device='cdrom'>
	I1204 20:58:27.002710   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)       <source file='/home/jenkins/minikube-integration/19985-10581/.minikube/machines/kubernetes-upgrade-697588/boot2docker.iso'/>
	I1204 20:58:27.002721   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)       <target dev='hdc' bus='scsi'/>
	I1204 20:58:27.002735   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)       <readonly/>
	I1204 20:58:27.002745   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)     </disk>
	I1204 20:58:27.002759   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)     <disk type='file' device='disk'>
	I1204 20:58:27.002773   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1204 20:58:27.002791   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)       <source file='/home/jenkins/minikube-integration/19985-10581/.minikube/machines/kubernetes-upgrade-697588/kubernetes-upgrade-697588.rawdisk'/>
	I1204 20:58:27.002802   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)       <target dev='hda' bus='virtio'/>
	I1204 20:58:27.002811   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)     </disk>
	I1204 20:58:27.002834   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)     <interface type='network'>
	I1204 20:58:27.002855   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)       <source network='mk-kubernetes-upgrade-697588'/>
	I1204 20:58:27.002867   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)       <model type='virtio'/>
	I1204 20:58:27.002879   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)     </interface>
	I1204 20:58:27.002891   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)     <interface type='network'>
	I1204 20:58:27.002903   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)       <source network='default'/>
	I1204 20:58:27.002911   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)       <model type='virtio'/>
	I1204 20:58:27.002924   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)     </interface>
	I1204 20:58:27.002935   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)     <serial type='pty'>
	I1204 20:58:27.002948   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)       <target port='0'/>
	I1204 20:58:27.002958   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)     </serial>
	I1204 20:58:27.002972   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)     <console type='pty'>
	I1204 20:58:27.002983   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)       <target type='serial' port='0'/>
	I1204 20:58:27.002996   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)     </console>
	I1204 20:58:27.003007   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)     <rng model='virtio'>
	I1204 20:58:27.003021   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)       <backend model='random'>/dev/random</backend>
	I1204 20:58:27.003031   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)     </rng>
	I1204 20:58:27.003041   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)     
	I1204 20:58:27.003051   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)     
	I1204 20:58:27.003061   54052 main.go:141] libmachine: (kubernetes-upgrade-697588)   </devices>
	I1204 20:58:27.003072   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) </domain>
	I1204 20:58:27.003085   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) 
	I1204 20:58:27.007458   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined MAC address 52:54:00:1b:3f:68 in network default
	I1204 20:58:27.008098   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Ensuring networks are active...
	I1204 20:58:27.008123   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:27.008839   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Ensuring network default is active
	I1204 20:58:27.009222   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Ensuring network mk-kubernetes-upgrade-697588 is active
	I1204 20:58:27.009791   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Getting domain xml...
	I1204 20:58:27.010519   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Creating domain...
	I1204 20:58:28.494118   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Waiting to get IP...
	I1204 20:58:28.495014   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:28.495506   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | unable to find current IP address of domain kubernetes-upgrade-697588 in network mk-kubernetes-upgrade-697588
	I1204 20:58:28.495536   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | I1204 20:58:28.495481   54151 retry.go:31] will retry after 296.117107ms: waiting for machine to come up
	I1204 20:58:28.793120   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:28.793807   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | unable to find current IP address of domain kubernetes-upgrade-697588 in network mk-kubernetes-upgrade-697588
	I1204 20:58:28.793827   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | I1204 20:58:28.793782   54151 retry.go:31] will retry after 310.191947ms: waiting for machine to come up
	I1204 20:58:29.105232   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:29.105788   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | unable to find current IP address of domain kubernetes-upgrade-697588 in network mk-kubernetes-upgrade-697588
	I1204 20:58:29.105821   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | I1204 20:58:29.105709   54151 retry.go:31] will retry after 381.65928ms: waiting for machine to come up
	I1204 20:58:29.870918   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:29.871548   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | unable to find current IP address of domain kubernetes-upgrade-697588 in network mk-kubernetes-upgrade-697588
	I1204 20:58:29.871576   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | I1204 20:58:29.871490   54151 retry.go:31] will retry after 609.427874ms: waiting for machine to come up
	I1204 20:58:30.482346   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:30.482911   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | unable to find current IP address of domain kubernetes-upgrade-697588 in network mk-kubernetes-upgrade-697588
	I1204 20:58:30.482944   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | I1204 20:58:30.482849   54151 retry.go:31] will retry after 616.56525ms: waiting for machine to come up
	I1204 20:58:31.101734   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:31.102295   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | unable to find current IP address of domain kubernetes-upgrade-697588 in network mk-kubernetes-upgrade-697588
	I1204 20:58:31.102337   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | I1204 20:58:31.102233   54151 retry.go:31] will retry after 901.409294ms: waiting for machine to come up
	I1204 20:58:32.005568   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:32.006013   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | unable to find current IP address of domain kubernetes-upgrade-697588 in network mk-kubernetes-upgrade-697588
	I1204 20:58:32.006045   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | I1204 20:58:32.005945   54151 retry.go:31] will retry after 839.592295ms: waiting for machine to come up
	I1204 20:58:32.846836   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:32.847269   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | unable to find current IP address of domain kubernetes-upgrade-697588 in network mk-kubernetes-upgrade-697588
	I1204 20:58:32.847298   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | I1204 20:58:32.847204   54151 retry.go:31] will retry after 1.349299222s: waiting for machine to come up
	I1204 20:58:34.197999   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:34.198451   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | unable to find current IP address of domain kubernetes-upgrade-697588 in network mk-kubernetes-upgrade-697588
	I1204 20:58:34.198483   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | I1204 20:58:34.198402   54151 retry.go:31] will retry after 1.421267891s: waiting for machine to come up
	I1204 20:58:35.620980   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:35.621475   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | unable to find current IP address of domain kubernetes-upgrade-697588 in network mk-kubernetes-upgrade-697588
	I1204 20:58:35.621505   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | I1204 20:58:35.621413   54151 retry.go:31] will retry after 2.005053847s: waiting for machine to come up
	I1204 20:58:37.628202   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:37.628714   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | unable to find current IP address of domain kubernetes-upgrade-697588 in network mk-kubernetes-upgrade-697588
	I1204 20:58:37.628745   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | I1204 20:58:37.628658   54151 retry.go:31] will retry after 1.986035571s: waiting for machine to come up
	I1204 20:58:39.616049   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:39.616553   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | unable to find current IP address of domain kubernetes-upgrade-697588 in network mk-kubernetes-upgrade-697588
	I1204 20:58:39.616584   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | I1204 20:58:39.616491   54151 retry.go:31] will retry after 2.225256782s: waiting for machine to come up
	I1204 20:58:41.843111   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:41.843648   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | unable to find current IP address of domain kubernetes-upgrade-697588 in network mk-kubernetes-upgrade-697588
	I1204 20:58:41.843674   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | I1204 20:58:41.843598   54151 retry.go:31] will retry after 3.039737689s: waiting for machine to come up
	I1204 20:58:44.885220   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:44.885709   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | unable to find current IP address of domain kubernetes-upgrade-697588 in network mk-kubernetes-upgrade-697588
	I1204 20:58:44.885734   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | I1204 20:58:44.885662   54151 retry.go:31] will retry after 4.833089646s: waiting for machine to come up
	I1204 20:58:49.720593   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:49.721188   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has current primary IP address 192.168.72.33 and MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:49.721232   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Found IP for machine: 192.168.72.33
	I1204 20:58:49.721258   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Reserving static IP address...
	I1204 20:58:49.721717   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-697588", mac: "52:54:00:71:82:16", ip: "192.168.72.33"} in network mk-kubernetes-upgrade-697588
	I1204 20:58:49.798762   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Reserved static IP address: 192.168.72.33
	I1204 20:58:49.798798   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | Getting to WaitForSSH function...
	I1204 20:58:49.798808   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Waiting for SSH to be available...
	I1204 20:58:49.801395   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:49.801746   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:82:16", ip: ""} in network mk-kubernetes-upgrade-697588: {Iface:virbr3 ExpiryTime:2024-12-04 21:58:41 +0000 UTC Type:0 Mac:52:54:00:71:82:16 Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:minikube Clientid:01:52:54:00:71:82:16}
	I1204 20:58:49.801781   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined IP address 192.168.72.33 and MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:49.801856   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | Using SSH client type: external
	I1204 20:58:49.801884   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/kubernetes-upgrade-697588/id_rsa (-rw-------)
	I1204 20:58:49.801934   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.33 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/kubernetes-upgrade-697588/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 20:58:49.801947   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | About to run SSH command:
	I1204 20:58:49.801956   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | exit 0
	I1204 20:58:49.931674   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | SSH cmd err, output: <nil>: 
	I1204 20:58:49.931937   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) KVM machine creation complete!
	I1204 20:58:49.932269   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetConfigRaw
	I1204 20:58:49.932795   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .DriverName
	I1204 20:58:49.932983   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .DriverName
	I1204 20:58:49.933103   54052 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1204 20:58:49.933118   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetState
	I1204 20:58:49.934554   54052 main.go:141] libmachine: Detecting operating system of created instance...
	I1204 20:58:49.934569   54052 main.go:141] libmachine: Waiting for SSH to be available...
	I1204 20:58:49.934576   54052 main.go:141] libmachine: Getting to WaitForSSH function...
	I1204 20:58:49.934584   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHHostname
	I1204 20:58:49.937109   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:49.937445   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:82:16", ip: ""} in network mk-kubernetes-upgrade-697588: {Iface:virbr3 ExpiryTime:2024-12-04 21:58:41 +0000 UTC Type:0 Mac:52:54:00:71:82:16 Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:kubernetes-upgrade-697588 Clientid:01:52:54:00:71:82:16}
	I1204 20:58:49.937482   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined IP address 192.168.72.33 and MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:49.937608   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHPort
	I1204 20:58:49.937784   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHKeyPath
	I1204 20:58:49.937922   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHKeyPath
	I1204 20:58:49.938032   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHUsername
	I1204 20:58:49.938181   54052 main.go:141] libmachine: Using SSH client type: native
	I1204 20:58:49.938394   54052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.33 22 <nil> <nil>}
	I1204 20:58:49.938407   54052 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1204 20:58:50.046638   54052 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 20:58:50.046670   54052 main.go:141] libmachine: Detecting the provisioner...
	I1204 20:58:50.046690   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHHostname
	I1204 20:58:50.049717   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:50.050162   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:82:16", ip: ""} in network mk-kubernetes-upgrade-697588: {Iface:virbr3 ExpiryTime:2024-12-04 21:58:41 +0000 UTC Type:0 Mac:52:54:00:71:82:16 Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:kubernetes-upgrade-697588 Clientid:01:52:54:00:71:82:16}
	I1204 20:58:50.050192   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined IP address 192.168.72.33 and MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:50.050348   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHPort
	I1204 20:58:50.050529   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHKeyPath
	I1204 20:58:50.050672   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHKeyPath
	I1204 20:58:50.050785   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHUsername
	I1204 20:58:50.050906   54052 main.go:141] libmachine: Using SSH client type: native
	I1204 20:58:50.051105   54052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.33 22 <nil> <nil>}
	I1204 20:58:50.051120   54052 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1204 20:58:50.159923   54052 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1204 20:58:50.159988   54052 main.go:141] libmachine: found compatible host: buildroot
	I1204 20:58:50.159994   54052 main.go:141] libmachine: Provisioning with buildroot...
	I1204 20:58:50.160001   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetMachineName
	I1204 20:58:50.160215   54052 buildroot.go:166] provisioning hostname "kubernetes-upgrade-697588"
	I1204 20:58:50.160249   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetMachineName
	I1204 20:58:50.160423   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHHostname
	I1204 20:58:50.163039   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:50.163312   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:82:16", ip: ""} in network mk-kubernetes-upgrade-697588: {Iface:virbr3 ExpiryTime:2024-12-04 21:58:41 +0000 UTC Type:0 Mac:52:54:00:71:82:16 Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:kubernetes-upgrade-697588 Clientid:01:52:54:00:71:82:16}
	I1204 20:58:50.163343   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined IP address 192.168.72.33 and MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:50.163467   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHPort
	I1204 20:58:50.163639   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHKeyPath
	I1204 20:58:50.163838   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHKeyPath
	I1204 20:58:50.163980   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHUsername
	I1204 20:58:50.164161   54052 main.go:141] libmachine: Using SSH client type: native
	I1204 20:58:50.164333   54052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.33 22 <nil> <nil>}
	I1204 20:58:50.164348   54052 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-697588 && echo "kubernetes-upgrade-697588" | sudo tee /etc/hostname
	I1204 20:58:50.291946   54052 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-697588
	
	I1204 20:58:50.291980   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHHostname
	I1204 20:58:50.294912   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:50.295367   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:82:16", ip: ""} in network mk-kubernetes-upgrade-697588: {Iface:virbr3 ExpiryTime:2024-12-04 21:58:41 +0000 UTC Type:0 Mac:52:54:00:71:82:16 Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:kubernetes-upgrade-697588 Clientid:01:52:54:00:71:82:16}
	I1204 20:58:50.295425   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined IP address 192.168.72.33 and MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:50.295608   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHPort
	I1204 20:58:50.295787   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHKeyPath
	I1204 20:58:50.295942   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHKeyPath
	I1204 20:58:50.296057   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHUsername
	I1204 20:58:50.296219   54052 main.go:141] libmachine: Using SSH client type: native
	I1204 20:58:50.296393   54052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.33 22 <nil> <nil>}
	I1204 20:58:50.296410   54052 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-697588' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-697588/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-697588' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 20:58:50.412968   54052 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 20:58:50.412997   54052 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 20:58:50.413016   54052 buildroot.go:174] setting up certificates
	I1204 20:58:50.413026   54052 provision.go:84] configureAuth start
	I1204 20:58:50.413039   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetMachineName
	I1204 20:58:50.413324   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetIP
	I1204 20:58:50.416078   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:50.416526   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:82:16", ip: ""} in network mk-kubernetes-upgrade-697588: {Iface:virbr3 ExpiryTime:2024-12-04 21:58:41 +0000 UTC Type:0 Mac:52:54:00:71:82:16 Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:kubernetes-upgrade-697588 Clientid:01:52:54:00:71:82:16}
	I1204 20:58:50.416554   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined IP address 192.168.72.33 and MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:50.416727   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHHostname
	I1204 20:58:50.419254   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:50.419696   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:82:16", ip: ""} in network mk-kubernetes-upgrade-697588: {Iface:virbr3 ExpiryTime:2024-12-04 21:58:41 +0000 UTC Type:0 Mac:52:54:00:71:82:16 Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:kubernetes-upgrade-697588 Clientid:01:52:54:00:71:82:16}
	I1204 20:58:50.419740   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined IP address 192.168.72.33 and MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:50.419894   54052 provision.go:143] copyHostCerts
	I1204 20:58:50.419964   54052 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 20:58:50.419978   54052 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 20:58:50.420039   54052 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 20:58:50.420156   54052 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 20:58:50.420169   54052 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 20:58:50.420201   54052 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 20:58:50.420282   54052 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 20:58:50.420296   54052 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 20:58:50.420326   54052 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 20:58:50.420392   54052 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-697588 san=[127.0.0.1 192.168.72.33 kubernetes-upgrade-697588 localhost minikube]
	I1204 20:58:50.713403   54052 provision.go:177] copyRemoteCerts
	I1204 20:58:50.713473   54052 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 20:58:50.713498   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHHostname
	I1204 20:58:50.716515   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:50.716856   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:82:16", ip: ""} in network mk-kubernetes-upgrade-697588: {Iface:virbr3 ExpiryTime:2024-12-04 21:58:41 +0000 UTC Type:0 Mac:52:54:00:71:82:16 Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:kubernetes-upgrade-697588 Clientid:01:52:54:00:71:82:16}
	I1204 20:58:50.716882   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined IP address 192.168.72.33 and MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:50.717079   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHPort
	I1204 20:58:50.717305   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHKeyPath
	I1204 20:58:50.717455   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHUsername
	I1204 20:58:50.717593   54052 sshutil.go:53] new ssh client: &{IP:192.168.72.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/kubernetes-upgrade-697588/id_rsa Username:docker}
	I1204 20:58:50.802927   54052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 20:58:50.828239   54052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1204 20:58:50.850802   54052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 20:58:50.872899   54052 provision.go:87] duration metric: took 459.860775ms to configureAuth
	I1204 20:58:50.872929   54052 buildroot.go:189] setting minikube options for container-runtime
	I1204 20:58:50.873109   54052 config.go:182] Loaded profile config "kubernetes-upgrade-697588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1204 20:58:50.873198   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHHostname
	I1204 20:58:50.876040   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:50.876433   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:82:16", ip: ""} in network mk-kubernetes-upgrade-697588: {Iface:virbr3 ExpiryTime:2024-12-04 21:58:41 +0000 UTC Type:0 Mac:52:54:00:71:82:16 Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:kubernetes-upgrade-697588 Clientid:01:52:54:00:71:82:16}
	I1204 20:58:50.876467   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined IP address 192.168.72.33 and MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:50.876644   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHPort
	I1204 20:58:50.876861   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHKeyPath
	I1204 20:58:50.877021   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHKeyPath
	I1204 20:58:50.877155   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHUsername
	I1204 20:58:50.877308   54052 main.go:141] libmachine: Using SSH client type: native
	I1204 20:58:50.877526   54052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.33 22 <nil> <nil>}
	I1204 20:58:50.877550   54052 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 20:58:51.102607   54052 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 20:58:51.102642   54052 main.go:141] libmachine: Checking connection to Docker...
	I1204 20:58:51.102652   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetURL
	I1204 20:58:51.104075   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | Using libvirt version 6000000
	I1204 20:58:51.106139   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:51.106510   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:82:16", ip: ""} in network mk-kubernetes-upgrade-697588: {Iface:virbr3 ExpiryTime:2024-12-04 21:58:41 +0000 UTC Type:0 Mac:52:54:00:71:82:16 Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:kubernetes-upgrade-697588 Clientid:01:52:54:00:71:82:16}
	I1204 20:58:51.106541   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined IP address 192.168.72.33 and MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:51.106643   54052 main.go:141] libmachine: Docker is up and running!
	I1204 20:58:51.106654   54052 main.go:141] libmachine: Reticulating splines...
	I1204 20:58:51.106660   54052 client.go:171] duration metric: took 24.693432132s to LocalClient.Create
	I1204 20:58:51.106684   54052 start.go:167] duration metric: took 24.693508902s to libmachine.API.Create "kubernetes-upgrade-697588"
	I1204 20:58:51.106698   54052 start.go:293] postStartSetup for "kubernetes-upgrade-697588" (driver="kvm2")
	I1204 20:58:51.106712   54052 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 20:58:51.106730   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .DriverName
	I1204 20:58:51.106926   54052 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 20:58:51.106950   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHHostname
	I1204 20:58:51.109166   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:51.109503   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:82:16", ip: ""} in network mk-kubernetes-upgrade-697588: {Iface:virbr3 ExpiryTime:2024-12-04 21:58:41 +0000 UTC Type:0 Mac:52:54:00:71:82:16 Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:kubernetes-upgrade-697588 Clientid:01:52:54:00:71:82:16}
	I1204 20:58:51.109542   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined IP address 192.168.72.33 and MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:51.109659   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHPort
	I1204 20:58:51.109963   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHKeyPath
	I1204 20:58:51.110114   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHUsername
	I1204 20:58:51.110273   54052 sshutil.go:53] new ssh client: &{IP:192.168.72.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/kubernetes-upgrade-697588/id_rsa Username:docker}
	I1204 20:58:51.194201   54052 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 20:58:51.198108   54052 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 20:58:51.198132   54052 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 20:58:51.198195   54052 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 20:58:51.198266   54052 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 20:58:51.198371   54052 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 20:58:51.207806   54052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 20:58:51.231683   54052 start.go:296] duration metric: took 124.968327ms for postStartSetup
	I1204 20:58:51.231751   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetConfigRaw
	I1204 20:58:51.232353   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetIP
	I1204 20:58:51.235358   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:51.235730   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:82:16", ip: ""} in network mk-kubernetes-upgrade-697588: {Iface:virbr3 ExpiryTime:2024-12-04 21:58:41 +0000 UTC Type:0 Mac:52:54:00:71:82:16 Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:kubernetes-upgrade-697588 Clientid:01:52:54:00:71:82:16}
	I1204 20:58:51.235763   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined IP address 192.168.72.33 and MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:51.236036   54052 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kubernetes-upgrade-697588/config.json ...
	I1204 20:58:51.236237   54052 start.go:128] duration metric: took 24.847176631s to createHost
	I1204 20:58:51.236266   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHHostname
	I1204 20:58:51.238728   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:51.239050   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:82:16", ip: ""} in network mk-kubernetes-upgrade-697588: {Iface:virbr3 ExpiryTime:2024-12-04 21:58:41 +0000 UTC Type:0 Mac:52:54:00:71:82:16 Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:kubernetes-upgrade-697588 Clientid:01:52:54:00:71:82:16}
	I1204 20:58:51.239071   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined IP address 192.168.72.33 and MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:51.239228   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHPort
	I1204 20:58:51.239433   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHKeyPath
	I1204 20:58:51.239607   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHKeyPath
	I1204 20:58:51.239816   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHUsername
	I1204 20:58:51.240001   54052 main.go:141] libmachine: Using SSH client type: native
	I1204 20:58:51.240157   54052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.33 22 <nil> <nil>}
	I1204 20:58:51.240166   54052 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 20:58:51.351825   54052 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733345931.312503897
	
	I1204 20:58:51.351849   54052 fix.go:216] guest clock: 1733345931.312503897
	I1204 20:58:51.351858   54052 fix.go:229] Guest: 2024-12-04 20:58:51.312503897 +0000 UTC Remote: 2024-12-04 20:58:51.236254681 +0000 UTC m=+32.598660844 (delta=76.249216ms)
	I1204 20:58:51.351919   54052 fix.go:200] guest clock delta is within tolerance: 76.249216ms
	I1204 20:58:51.351930   54052 start.go:83] releasing machines lock for "kubernetes-upgrade-697588", held for 24.963035778s
	I1204 20:58:51.351964   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .DriverName
	I1204 20:58:51.352231   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetIP
	I1204 20:58:51.355193   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:51.355653   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:82:16", ip: ""} in network mk-kubernetes-upgrade-697588: {Iface:virbr3 ExpiryTime:2024-12-04 21:58:41 +0000 UTC Type:0 Mac:52:54:00:71:82:16 Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:kubernetes-upgrade-697588 Clientid:01:52:54:00:71:82:16}
	I1204 20:58:51.355683   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined IP address 192.168.72.33 and MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:51.355866   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .DriverName
	I1204 20:58:51.356465   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .DriverName
	I1204 20:58:51.356685   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .DriverName
	I1204 20:58:51.356782   54052 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 20:58:51.356841   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHHostname
	I1204 20:58:51.356890   54052 ssh_runner.go:195] Run: cat /version.json
	I1204 20:58:51.356917   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHHostname
	I1204 20:58:51.359827   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:51.359979   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:51.360238   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:82:16", ip: ""} in network mk-kubernetes-upgrade-697588: {Iface:virbr3 ExpiryTime:2024-12-04 21:58:41 +0000 UTC Type:0 Mac:52:54:00:71:82:16 Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:kubernetes-upgrade-697588 Clientid:01:52:54:00:71:82:16}
	I1204 20:58:51.360290   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined IP address 192.168.72.33 and MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:51.360349   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHPort
	I1204 20:58:51.360494   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:82:16", ip: ""} in network mk-kubernetes-upgrade-697588: {Iface:virbr3 ExpiryTime:2024-12-04 21:58:41 +0000 UTC Type:0 Mac:52:54:00:71:82:16 Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:kubernetes-upgrade-697588 Clientid:01:52:54:00:71:82:16}
	I1204 20:58:51.360519   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHKeyPath
	I1204 20:58:51.360563   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined IP address 192.168.72.33 and MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:51.360646   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHUsername
	I1204 20:58:51.360687   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHPort
	I1204 20:58:51.360812   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHKeyPath
	I1204 20:58:51.360834   54052 sshutil.go:53] new ssh client: &{IP:192.168.72.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/kubernetes-upgrade-697588/id_rsa Username:docker}
	I1204 20:58:51.360926   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetSSHUsername
	I1204 20:58:51.361053   54052 sshutil.go:53] new ssh client: &{IP:192.168.72.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/kubernetes-upgrade-697588/id_rsa Username:docker}
	I1204 20:58:51.472479   54052 ssh_runner.go:195] Run: systemctl --version
	I1204 20:58:51.481026   54052 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 20:58:51.644812   54052 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 20:58:51.652297   54052 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 20:58:51.652373   54052 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 20:58:51.672438   54052 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 20:58:51.672469   54052 start.go:495] detecting cgroup driver to use...
	I1204 20:58:51.672558   54052 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 20:58:51.690519   54052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 20:58:51.705062   54052 docker.go:217] disabling cri-docker service (if available) ...
	I1204 20:58:51.705116   54052 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 20:58:51.718759   54052 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 20:58:51.733680   54052 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 20:58:51.854623   54052 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 20:58:52.004572   54052 docker.go:233] disabling docker service ...
	I1204 20:58:52.004634   54052 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 20:58:52.026480   54052 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 20:58:52.039855   54052 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 20:58:52.178153   54052 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 20:58:52.308553   54052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 20:58:52.321852   54052 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 20:58:52.340065   54052 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1204 20:58:52.340121   54052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:58:52.349714   54052 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 20:58:52.349790   54052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:58:52.359258   54052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:58:52.368780   54052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 20:58:52.378151   54052 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 20:58:52.390106   54052 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 20:58:52.400732   54052 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 20:58:52.400797   54052 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 20:58:52.413397   54052 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 20:58:52.424599   54052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 20:58:52.572276   54052 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 20:58:52.679595   54052 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 20:58:52.679690   54052 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 20:58:52.684346   54052 start.go:563] Will wait 60s for crictl version
	I1204 20:58:52.684401   54052 ssh_runner.go:195] Run: which crictl
	I1204 20:58:52.688217   54052 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 20:58:52.727179   54052 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 20:58:52.727279   54052 ssh_runner.go:195] Run: crio --version
	I1204 20:58:52.756959   54052 ssh_runner.go:195] Run: crio --version
	I1204 20:58:52.791639   54052 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1204 20:58:52.792888   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) Calling .GetIP
	I1204 20:58:52.796090   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:52.796576   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:82:16", ip: ""} in network mk-kubernetes-upgrade-697588: {Iface:virbr3 ExpiryTime:2024-12-04 21:58:41 +0000 UTC Type:0 Mac:52:54:00:71:82:16 Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:kubernetes-upgrade-697588 Clientid:01:52:54:00:71:82:16}
	I1204 20:58:52.796607   54052 main.go:141] libmachine: (kubernetes-upgrade-697588) DBG | domain kubernetes-upgrade-697588 has defined IP address 192.168.72.33 and MAC address 52:54:00:71:82:16 in network mk-kubernetes-upgrade-697588
	I1204 20:58:52.796817   54052 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1204 20:58:52.801190   54052 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 20:58:52.815619   54052 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-697588 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-697588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.33 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 20:58:52.815767   54052 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1204 20:58:52.815829   54052 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 20:58:52.849749   54052 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1204 20:58:52.849819   54052 ssh_runner.go:195] Run: which lz4
	I1204 20:58:52.853830   54052 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1204 20:58:52.858071   54052 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1204 20:58:52.858108   54052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1204 20:58:54.382859   54052 crio.go:462] duration metric: took 1.529066411s to copy over tarball
	I1204 20:58:54.382941   54052 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1204 20:58:57.106871   54052 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.723901409s)
	I1204 20:58:57.106903   54052 crio.go:469] duration metric: took 2.724015532s to extract the tarball
	I1204 20:58:57.106912   54052 ssh_runner.go:146] rm: /preloaded.tar.lz4
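
	The preload step above is just scp + tar: copy the ~473 MB tarball of pre-pulled v1.20.0 images to the node, extract it into /var with xattrs preserved, delete it, and re-list the images. A sketch of the same extraction by hand, assuming the tarball is already at /preloaded.tar.lz4:

	    # unpack the preloaded images into /var, keeping file capabilities, then verify
	    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	    sudo rm /preloaded.tar.lz4
	    sudo crictl images --output json
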
	I1204 20:58:57.163225   54052 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 20:58:57.217406   54052 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1204 20:58:57.217428   54052 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1204 20:58:57.217491   54052 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 20:58:57.217551   54052 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1204 20:58:57.217573   54052 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1204 20:58:57.217594   54052 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1204 20:58:57.217513   54052 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1204 20:58:57.217533   54052 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1204 20:58:57.217558   54052 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 20:58:57.217534   54052 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1204 20:58:57.219423   54052 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1204 20:58:57.219440   54052 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1204 20:58:57.219429   54052 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1204 20:58:57.219437   54052 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1204 20:58:57.219567   54052 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 20:58:57.219512   54052 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1204 20:58:57.219648   54052 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1204 20:58:57.219684   54052 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 20:58:57.363293   54052 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1204 20:58:57.364986   54052 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1204 20:58:57.369023   54052 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1204 20:58:57.372017   54052 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 20:58:57.377364   54052 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1204 20:58:57.387458   54052 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1204 20:58:57.395740   54052 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1204 20:58:57.500831   54052 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1204 20:58:57.500895   54052 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1204 20:58:57.500942   54052 ssh_runner.go:195] Run: which crictl
	I1204 20:58:57.547022   54052 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1204 20:58:57.547101   54052 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1204 20:58:57.547032   54052 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1204 20:58:57.547161   54052 ssh_runner.go:195] Run: which crictl
	I1204 20:58:57.547176   54052 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1204 20:58:57.547221   54052 ssh_runner.go:195] Run: which crictl
	I1204 20:58:57.567452   54052 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1204 20:58:57.567505   54052 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1204 20:58:57.567554   54052 ssh_runner.go:195] Run: which crictl
	I1204 20:58:57.567591   54052 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1204 20:58:57.567631   54052 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 20:58:57.567688   54052 ssh_runner.go:195] Run: which crictl
	I1204 20:58:57.571790   54052 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1204 20:58:57.571811   54052 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1204 20:58:57.571837   54052 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1204 20:58:57.571858   54052 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1204 20:58:57.571871   54052 ssh_runner.go:195] Run: which crictl
	I1204 20:58:57.571897   54052 ssh_runner.go:195] Run: which crictl
	I1204 20:58:57.571909   54052 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1204 20:58:57.571872   54052 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1204 20:58:57.572330   54052 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1204 20:58:57.573553   54052 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1204 20:58:57.583134   54052 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 20:58:57.689591   54052 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1204 20:58:57.689638   54052 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1204 20:58:57.689646   54052 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1204 20:58:57.689694   54052 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1204 20:58:57.689738   54052 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1204 20:58:57.689752   54052 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1204 20:58:57.694712   54052 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 20:58:57.832092   54052 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1204 20:58:57.832119   54052 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1204 20:58:57.832171   54052 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1204 20:58:57.832210   54052 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1204 20:58:57.832244   54052 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1204 20:58:57.832375   54052 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1204 20:58:57.837170   54052 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 20:58:57.970503   54052 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1204 20:58:57.994110   54052 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1204 20:58:57.994152   54052 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1204 20:58:57.994249   54052 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1204 20:58:57.994308   54052 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1204 20:58:57.994346   54052 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1204 20:58:57.994348   54052 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1204 20:58:58.045290   54052 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1204 20:58:58.045446   54052 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1204 20:58:58.279209   54052 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 20:58:58.420109   54052 cache_images.go:92] duration metric: took 1.202661124s to LoadCachedImages
	W1204 20:58:58.420227   54052 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
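
	The per-image cache files under .minikube/cache/images were never written for this profile, so LoadCachedImages gives up here and the images end up being pulled later during kubeadm's preflight ("Pulling images required for setting up a Kubernetes cluster" below). A hedged sketch of pulling the same image set manually with crictl, using the list from the LoadCachedImages line above:

	    # pull the v1.20.0 control-plane images directly from the registries
	    for img in kube-apiserver kube-controller-manager kube-scheduler kube-proxy; do
	      sudo crictl pull registry.k8s.io/${img}:v1.20.0
	    done
	    sudo crictl pull registry.k8s.io/etcd:3.4.13-0
	    sudo crictl pull registry.k8s.io/coredns:1.7.0
	    sudo crictl pull registry.k8s.io/pause:3.2
	    sudo crictl pull gcr.io/k8s-minikube/storage-provisioner:v5
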
	I1204 20:58:58.420246   54052 kubeadm.go:934] updating node { 192.168.72.33 8443 v1.20.0 crio true true} ...
	I1204 20:58:58.420399   54052 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-697588 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.33
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-697588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
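
	The ExecStart flags above are rendered into the systemd drop-in written a few lines below (the 432-byte 10-kubeadm.conf). To see exactly what the node's kubelet will run, a quick sketch:

	    # show the kubelet unit plus every drop-in as systemd resolves them
	    systemctl cat kubelet
	    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
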
	I1204 20:58:58.420491   54052 ssh_runner.go:195] Run: crio config
	I1204 20:58:58.473341   54052 cni.go:84] Creating CNI manager for ""
	I1204 20:58:58.473370   54052 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 20:58:58.473392   54052 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 20:58:58.473426   54052 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.33 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-697588 NodeName:kubernetes-upgrade-697588 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.33"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.33 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1204 20:58:58.473568   54052 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.33
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-697588"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.33
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.33"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
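
	This is the complete kubeadm config that gets written to /var/tmp/minikube/kubeadm.yaml.new and later renamed to kubeadm.yaml. It can be exercised before the real init; a sketch, noting that --dry-run in the v1.20 kubeadm is best-effort:

	    # render what kubeadm would do with this config without touching the node
	    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
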
	
	I1204 20:58:58.473631   54052 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1204 20:58:58.486273   54052 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 20:58:58.486366   54052 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 20:58:58.497078   54052 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I1204 20:58:58.516202   54052 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 20:58:58.533710   54052 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1204 20:58:58.550524   54052 ssh_runner.go:195] Run: grep 192.168.72.33	control-plane.minikube.internal$ /etc/hosts
	I1204 20:58:58.554228   54052 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.33	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 20:58:58.566540   54052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 20:58:58.708525   54052 ssh_runner.go:195] Run: sudo systemctl start kubelet
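
	Note that the kubelet is started here but never enabled, which is what produces the "[WARNING Service-Kubelet]" preflight message further down. When reproducing by hand, enabling and starting in one step avoids the warning; a sketch:

	    # enable the unit (so it also survives reboots) and start it now
	    sudo systemctl daemon-reload
	    sudo systemctl enable --now kubelet
	    systemctl status kubelet --no-pager
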
	I1204 20:58:58.726348   54052 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kubernetes-upgrade-697588 for IP: 192.168.72.33
	I1204 20:58:58.726382   54052 certs.go:194] generating shared ca certs ...
	I1204 20:58:58.726401   54052 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:58:58.726571   54052 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 20:58:58.726610   54052 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 20:58:58.726620   54052 certs.go:256] generating profile certs ...
	I1204 20:58:58.726669   54052 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kubernetes-upgrade-697588/client.key
	I1204 20:58:58.726693   54052 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kubernetes-upgrade-697588/client.crt with IP's: []
	I1204 20:58:58.942323   54052 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kubernetes-upgrade-697588/client.crt ...
	I1204 20:58:58.942354   54052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kubernetes-upgrade-697588/client.crt: {Name:mk7cfa5b22949de054fa97d5bce7b0e504206381 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:58:58.942571   54052 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kubernetes-upgrade-697588/client.key ...
	I1204 20:58:58.942596   54052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kubernetes-upgrade-697588/client.key: {Name:mk84a48a21e3e45ec3900d616c019ca9bedbc9e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:58:58.942708   54052 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kubernetes-upgrade-697588/apiserver.key.7c23b488
	I1204 20:58:58.942729   54052 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kubernetes-upgrade-697588/apiserver.crt.7c23b488 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.33]
	I1204 20:58:59.140704   54052 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kubernetes-upgrade-697588/apiserver.crt.7c23b488 ...
	I1204 20:58:59.140735   54052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kubernetes-upgrade-697588/apiserver.crt.7c23b488: {Name:mke9a1981c7b8d6de1b32334277e61b4a131d405 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:58:59.140926   54052 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kubernetes-upgrade-697588/apiserver.key.7c23b488 ...
	I1204 20:58:59.140947   54052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kubernetes-upgrade-697588/apiserver.key.7c23b488: {Name:mk8535f534ae4a4192cb74a0a2c60c1d24377756 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:58:59.141065   54052 certs.go:381] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kubernetes-upgrade-697588/apiserver.crt.7c23b488 -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kubernetes-upgrade-697588/apiserver.crt
	I1204 20:58:59.141184   54052 certs.go:385] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kubernetes-upgrade-697588/apiserver.key.7c23b488 -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kubernetes-upgrade-697588/apiserver.key
	I1204 20:58:59.141275   54052 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kubernetes-upgrade-697588/proxy-client.key
	I1204 20:58:59.141295   54052 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kubernetes-upgrade-697588/proxy-client.crt with IP's: []
	I1204 20:58:59.342458   54052 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kubernetes-upgrade-697588/proxy-client.crt ...
	I1204 20:58:59.342491   54052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kubernetes-upgrade-697588/proxy-client.crt: {Name:mk4cb84dd525721ec94d18215d03493e22a7c4c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:58:59.342704   54052 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kubernetes-upgrade-697588/proxy-client.key ...
	I1204 20:58:59.342723   54052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kubernetes-upgrade-697588/proxy-client.key: {Name:mkc9cacfc0f7c07d7241159386db806c539a5ee0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 20:58:59.342973   54052 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 20:58:59.343027   54052 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 20:58:59.343045   54052 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 20:58:59.343086   54052 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 20:58:59.343131   54052 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 20:58:59.343170   54052 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 20:58:59.343232   54052 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 20:58:59.343881   54052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 20:58:59.371766   54052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 20:58:59.395745   54052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 20:58:59.418725   54052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 20:58:59.441230   54052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kubernetes-upgrade-697588/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1204 20:58:59.468609   54052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kubernetes-upgrade-697588/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1204 20:58:59.493200   54052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kubernetes-upgrade-697588/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 20:58:59.526528   54052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kubernetes-upgrade-697588/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 20:58:59.554139   54052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 20:58:59.579809   54052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 20:58:59.622032   54052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 20:58:59.655199   54052 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 20:58:59.677727   54052 ssh_runner.go:195] Run: openssl version
	I1204 20:58:59.688898   54052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 20:58:59.702436   54052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:58:59.706918   54052 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:58:59.706989   54052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 20:58:59.719043   54052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 20:58:59.733024   54052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 20:58:59.747279   54052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 20:58:59.751770   54052 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 20:58:59.751832   54052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 20:58:59.757319   54052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 20:58:59.768377   54052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 20:58:59.779126   54052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 20:58:59.783727   54052 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 20:58:59.783783   54052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 20:58:59.790085   54052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
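
	The ln -fs commands above create the OpenSSL hashed-name links (b5213941.0, 51391683.0, 3ec20f2e.0) that make the minikube CA and the extra certs trusted system-wide; the link name is just the certificate's subject hash. A sketch of checking one by hand:

	    # the hash printed here should match the /etc/ssl/certs/<hash>.0 symlink
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    ls -l /etc/ssl/certs/b5213941.0
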
	I1204 20:58:59.800868   54052 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 20:58:59.804796   54052 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1204 20:58:59.804858   54052 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-697588 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-697588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.33 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 20:58:59.804948   54052 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 20:58:59.805000   54052 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 20:58:59.844704   54052 cri.go:89] found id: ""
	I1204 20:58:59.844782   54052 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 20:58:59.856500   54052 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 20:58:59.866337   54052 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 20:58:59.876571   54052 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 20:58:59.876593   54052 kubeadm.go:157] found existing configuration files:
	
	I1204 20:58:59.876634   54052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 20:58:59.887597   54052 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 20:58:59.887659   54052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 20:58:59.898970   54052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 20:58:59.910016   54052 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 20:58:59.910077   54052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 20:58:59.920744   54052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 20:58:59.930004   54052 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 20:58:59.930055   54052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 20:58:59.939246   54052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 20:58:59.948480   54052 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 20:58:59.948540   54052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 20:58:59.957541   54052 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 20:59:00.081523   54052 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1204 20:59:00.081639   54052 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 20:59:00.247589   54052 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 20:59:00.247775   54052 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 20:59:00.247931   54052 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1204 20:59:00.446443   54052 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 20:59:00.448824   54052 out.go:235]   - Generating certificates and keys ...
	I1204 20:59:00.448939   54052 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 20:59:00.449079   54052 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 20:59:00.763284   54052 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1204 20:59:00.890497   54052 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1204 20:59:01.021620   54052 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1204 20:59:01.097626   54052 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1204 20:59:01.233696   54052 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1204 20:59:01.233963   54052 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-697588 localhost] and IPs [192.168.72.33 127.0.0.1 ::1]
	I1204 20:59:01.491065   54052 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1204 20:59:01.491314   54052 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-697588 localhost] and IPs [192.168.72.33 127.0.0.1 ::1]
	I1204 20:59:01.971400   54052 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1204 20:59:02.065155   54052 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1204 20:59:02.313639   54052 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1204 20:59:02.313950   54052 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 20:59:02.556493   54052 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 20:59:02.682644   54052 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 20:59:02.883852   54052 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 20:59:02.963826   54052 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 20:59:02.983988   54052 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 20:59:02.988850   54052 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 20:59:02.988912   54052 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 20:59:03.119586   54052 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 20:59:03.121477   54052 out.go:235]   - Booting up control plane ...
	I1204 20:59:03.121626   54052 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 20:59:03.129816   54052 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 20:59:03.130896   54052 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 20:59:03.131813   54052 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 20:59:03.135870   54052 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1204 20:59:43.131133   54052 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1204 20:59:43.131268   54052 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 20:59:43.131573   54052 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 20:59:48.131255   54052 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 20:59:48.131545   54052 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 20:59:58.130758   54052 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 20:59:58.130954   54052 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:00:18.129945   54052 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:00:18.130250   54052 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:00:58.129556   54052 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:00:58.129830   54052 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:00:58.129845   54052 kubeadm.go:310] 
	I1204 21:00:58.129901   54052 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1204 21:00:58.129960   54052 kubeadm.go:310] 		timed out waiting for the condition
	I1204 21:00:58.129972   54052 kubeadm.go:310] 
	I1204 21:00:58.130017   54052 kubeadm.go:310] 	This error is likely caused by:
	I1204 21:00:58.130062   54052 kubeadm.go:310] 		- The kubelet is not running
	I1204 21:00:58.130217   54052 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1204 21:00:58.130229   54052 kubeadm.go:310] 
	I1204 21:00:58.130364   54052 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1204 21:00:58.130418   54052 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1204 21:00:58.130459   54052 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1204 21:00:58.130474   54052 kubeadm.go:310] 
	I1204 21:00:58.130618   54052 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1204 21:00:58.130734   54052 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1204 21:00:58.130948   54052 kubeadm.go:310] 
	I1204 21:00:58.131097   54052 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1204 21:00:58.131232   54052 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1204 21:00:58.131396   54052 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1204 21:00:58.131513   54052 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1204 21:00:58.131532   54052 kubeadm.go:310] 
	I1204 21:00:58.134219   54052 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 21:00:58.134371   54052 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1204 21:00:58.134476   54052 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1204 21:00:58.134646   54052 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-697588 localhost] and IPs [192.168.72.33 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-697588 localhost] and IPs [192.168.72.33 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-697588 localhost] and IPs [192.168.72.33 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-697588 localhost] and IPs [192.168.72.33 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1204 21:00:58.134706   54052 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1204 21:00:58.651452   54052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:00:58.672591   54052 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:00:58.685861   54052 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:00:58.685885   54052 kubeadm.go:157] found existing configuration files:
	
	I1204 21:00:58.685939   54052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:00:58.698258   54052 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:00:58.698324   54052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:00:58.711337   54052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:00:58.724011   54052 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:00:58.724082   54052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:00:58.736567   54052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:00:58.748894   54052 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:00:58.748957   54052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:00:58.761309   54052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:00:58.777471   54052 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:00:58.777534   54052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:00:58.789998   54052 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 21:00:58.890258   54052 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1204 21:00:58.890458   54052 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 21:00:59.063697   54052 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 21:00:59.063939   54052 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 21:00:59.064143   54052 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1204 21:00:59.278871   54052 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 21:00:59.285397   54052 out.go:235]   - Generating certificates and keys ...
	I1204 21:00:59.285528   54052 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 21:00:59.285651   54052 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 21:00:59.285756   54052 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1204 21:00:59.285837   54052 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1204 21:00:59.285938   54052 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1204 21:00:59.286017   54052 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1204 21:00:59.286107   54052 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1204 21:00:59.286201   54052 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1204 21:00:59.286294   54052 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1204 21:00:59.286394   54052 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1204 21:00:59.286450   54052 kubeadm.go:310] [certs] Using the existing "sa" key
	I1204 21:00:59.286521   54052 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 21:00:59.379552   54052 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 21:00:59.727075   54052 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 21:00:59.869888   54052 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 21:01:00.023799   54052 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 21:01:00.047652   54052 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 21:01:00.048966   54052 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 21:01:00.049022   54052 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 21:01:00.217639   54052 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 21:01:00.219690   54052 out.go:235]   - Booting up control plane ...
	I1204 21:01:00.219829   54052 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 21:01:00.228183   54052 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 21:01:00.229614   54052 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 21:01:00.230534   54052 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 21:01:00.233023   54052 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1204 21:01:40.233492   54052 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1204 21:01:40.233731   54052 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:01:40.234022   54052 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:01:45.234244   54052 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:01:45.234488   54052 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:01:55.234745   54052 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:01:55.235032   54052 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:02:15.236057   54052 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:02:15.236294   54052 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:02:55.238617   54052 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:02:55.238903   54052 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:02:55.238932   54052 kubeadm.go:310] 
	I1204 21:02:55.238985   54052 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1204 21:02:55.239057   54052 kubeadm.go:310] 		timed out waiting for the condition
	I1204 21:02:55.239081   54052 kubeadm.go:310] 
	I1204 21:02:55.239137   54052 kubeadm.go:310] 	This error is likely caused by:
	I1204 21:02:55.239189   54052 kubeadm.go:310] 		- The kubelet is not running
	I1204 21:02:55.239337   54052 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1204 21:02:55.239360   54052 kubeadm.go:310] 
	I1204 21:02:55.239549   54052 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1204 21:02:55.239603   54052 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1204 21:02:55.239660   54052 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1204 21:02:55.239673   54052 kubeadm.go:310] 
	I1204 21:02:55.239802   54052 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1204 21:02:55.239869   54052 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1204 21:02:55.239875   54052 kubeadm.go:310] 
	I1204 21:02:55.239970   54052 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1204 21:02:55.240038   54052 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1204 21:02:55.240095   54052 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1204 21:02:55.240152   54052 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1204 21:02:55.240156   54052 kubeadm.go:310] 
	I1204 21:02:55.241623   54052 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 21:02:55.241731   54052 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1204 21:02:55.241934   54052 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1204 21:02:55.241945   54052 kubeadm.go:394] duration metric: took 3m55.437090458s to StartCluster
	I1204 21:02:55.241999   54052 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:02:55.242077   54052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:02:55.292767   54052 cri.go:89] found id: ""
	I1204 21:02:55.292801   54052 logs.go:282] 0 containers: []
	W1204 21:02:55.292812   54052 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:02:55.292822   54052 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:02:55.292895   54052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:02:55.331615   54052 cri.go:89] found id: ""
	I1204 21:02:55.331642   54052 logs.go:282] 0 containers: []
	W1204 21:02:55.331653   54052 logs.go:284] No container was found matching "etcd"
	I1204 21:02:55.331662   54052 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:02:55.331720   54052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:02:55.376231   54052 cri.go:89] found id: ""
	I1204 21:02:55.376262   54052 logs.go:282] 0 containers: []
	W1204 21:02:55.376272   54052 logs.go:284] No container was found matching "coredns"
	I1204 21:02:55.376280   54052 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:02:55.376349   54052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:02:55.410173   54052 cri.go:89] found id: ""
	I1204 21:02:55.410206   54052 logs.go:282] 0 containers: []
	W1204 21:02:55.410219   54052 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:02:55.410227   54052 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:02:55.410293   54052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:02:55.446180   54052 cri.go:89] found id: ""
	I1204 21:02:55.446210   54052 logs.go:282] 0 containers: []
	W1204 21:02:55.446221   54052 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:02:55.446228   54052 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:02:55.446330   54052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:02:55.487391   54052 cri.go:89] found id: ""
	I1204 21:02:55.487428   54052 logs.go:282] 0 containers: []
	W1204 21:02:55.487441   54052 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:02:55.487450   54052 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:02:55.487536   54052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:02:55.523300   54052 cri.go:89] found id: ""
	I1204 21:02:55.523329   54052 logs.go:282] 0 containers: []
	W1204 21:02:55.523340   54052 logs.go:284] No container was found matching "kindnet"
	I1204 21:02:55.523352   54052 logs.go:123] Gathering logs for kubelet ...
	I1204 21:02:55.523368   54052 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:02:55.590829   54052 logs.go:123] Gathering logs for dmesg ...
	I1204 21:02:55.590867   54052 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:02:55.605769   54052 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:02:55.605796   54052 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:02:55.734117   54052 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:02:55.734144   54052 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:02:55.734160   54052 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:02:55.856862   54052 logs.go:123] Gathering logs for container status ...
	I1204 21:02:55.856897   54052 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1204 21:02:55.901376   54052 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1204 21:02:55.901441   54052 out.go:270] * 
	* 
	W1204 21:02:55.901491   54052 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1204 21:02:55.901504   54052 out.go:270] * 
	* 
	W1204 21:02:55.902472   54052 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 21:02:55.905315   54052 out.go:201] 
	W1204 21:02:55.906428   54052 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1204 21:02:55.906470   54052 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1204 21:02:55.906488   54052 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1204 21:02:55.908013   54052 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-697588 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
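The init failure above stalls at the wait-control-plane phase: the kubelet health endpoint on 127.0.0.1:10248 never answers, and the follow-up crictl queries find no kube-apiserver, etcd, kube-scheduler or kube-controller-manager containers at all. A minimal triage on the node, using only the commands the output itself suggests (and assuming shell access to the VM, for example via 'minikube ssh -p kubernetes-upgrade-697588'), would look like:

	# is the kubelet running, and why not (commands quoted from the kubeadm output above)
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 100
	# did CRI-O start any control-plane containers before they crashed
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

The suggestion printed at the end of the log, passing --extra-config=kubelet.cgroup-driver=systemd to minikube start, points at a kubelet/CRI-O cgroup-driver mismatch as one plausible reason the kubelet never came up.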
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-697588
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-697588: (1.379176698s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-697588 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-697588 status --format={{.Host}}: exit status 7 (69.160019ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-697588 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-697588 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (39.97790021s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-697588 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-697588 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-697588 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (94.976398ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-697588] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19985
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19985-10581/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19985-10581/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-697588
	    minikube start -p kubernetes-upgrade-697588 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6975882 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.2, by running:
	    
	    minikube start -p kubernetes-upgrade-697588 --kubernetes-version=v1.31.2
	    

                                                
                                                
** /stderr **
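The downgrade attempt fails immediately with K8S_DOWNGRADE_UNSUPPORTED, which is the behaviour the test asserts: minikube will not move an existing v1.31.2 cluster back to v1.20.0 in place. Outside the test, the recovery path is the one printed in the stderr above, i.e. recreate the profile at the older version (a sketch only; driver and runtime flags added here to match the test's own invocations):

	minikube delete -p kubernetes-upgrade-697588
	minikube start -p kubernetes-upgrade-697588 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio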
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-697588 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-697588 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m5.15572644s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-12-04 21:04:42.705495466 +0000 UTC m=+4331.805223895
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-697588 -n kubernetes-upgrade-697588
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-697588 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-697588 logs -n 25: (2.179876918s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-272234 sudo                        | custom-flannel-272234 | jenkins | v1.34.0 | 04 Dec 24 21:03 UTC | 04 Dec 24 21:03 UTC |
	|         | systemctl status kubelet --all                       |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-272234                             | custom-flannel-272234 | jenkins | v1.34.0 | 04 Dec 24 21:03 UTC | 04 Dec 24 21:03 UTC |
	|         | sudo systemctl cat kubelet                           |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-272234 sudo                        | custom-flannel-272234 | jenkins | v1.34.0 | 04 Dec 24 21:03 UTC | 04 Dec 24 21:03 UTC |
	|         | journalctl -xeu kubelet --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-272234                             | custom-flannel-272234 | jenkins | v1.34.0 | 04 Dec 24 21:03 UTC | 04 Dec 24 21:03 UTC |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-272234                             | custom-flannel-272234 | jenkins | v1.34.0 | 04 Dec 24 21:03 UTC | 04 Dec 24 21:03 UTC |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-272234 sudo                        | custom-flannel-272234 | jenkins | v1.34.0 | 04 Dec 24 21:03 UTC |                     |
	|         | systemctl status docker --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-272234                             | custom-flannel-272234 | jenkins | v1.34.0 | 04 Dec 24 21:03 UTC | 04 Dec 24 21:03 UTC |
	|         | sudo systemctl cat docker                            |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-272234 sudo                        | custom-flannel-272234 | jenkins | v1.34.0 | 04 Dec 24 21:03 UTC | 04 Dec 24 21:03 UTC |
	|         | cat /etc/docker/daemon.json                          |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-272234 sudo                        | custom-flannel-272234 | jenkins | v1.34.0 | 04 Dec 24 21:03 UTC |                     |
	|         | docker system info                                   |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-272234 sudo                        | custom-flannel-272234 | jenkins | v1.34.0 | 04 Dec 24 21:03 UTC |                     |
	|         | systemctl status cri-docker                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-272234                             | custom-flannel-272234 | jenkins | v1.34.0 | 04 Dec 24 21:03 UTC | 04 Dec 24 21:03 UTC |
	|         | sudo systemctl cat cri-docker                        |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-272234 sudo cat                    | custom-flannel-272234 | jenkins | v1.34.0 | 04 Dec 24 21:03 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-272234 sudo cat                    | custom-flannel-272234 | jenkins | v1.34.0 | 04 Dec 24 21:03 UTC | 04 Dec 24 21:03 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-272234 sudo                        | custom-flannel-272234 | jenkins | v1.34.0 | 04 Dec 24 21:03 UTC | 04 Dec 24 21:03 UTC |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-272234 sudo                        | custom-flannel-272234 | jenkins | v1.34.0 | 04 Dec 24 21:03 UTC |                     |
	|         | systemctl status containerd                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-272234                             | custom-flannel-272234 | jenkins | v1.34.0 | 04 Dec 24 21:03 UTC | 04 Dec 24 21:03 UTC |
	|         | sudo systemctl cat containerd                        |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-272234 sudo cat                    | custom-flannel-272234 | jenkins | v1.34.0 | 04 Dec 24 21:03 UTC | 04 Dec 24 21:03 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-272234                             | custom-flannel-272234 | jenkins | v1.34.0 | 04 Dec 24 21:03 UTC | 04 Dec 24 21:03 UTC |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-272234 sudo                        | custom-flannel-272234 | jenkins | v1.34.0 | 04 Dec 24 21:03 UTC | 04 Dec 24 21:03 UTC |
	|         | containerd config dump                               |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-272234 sudo                        | custom-flannel-272234 | jenkins | v1.34.0 | 04 Dec 24 21:03 UTC | 04 Dec 24 21:03 UTC |
	|         | systemctl status crio --all                          |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-272234 sudo                        | custom-flannel-272234 | jenkins | v1.34.0 | 04 Dec 24 21:03 UTC | 04 Dec 24 21:03 UTC |
	|         | systemctl cat crio --no-pager                        |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-272234 sudo                        | custom-flannel-272234 | jenkins | v1.34.0 | 04 Dec 24 21:03 UTC | 04 Dec 24 21:03 UTC |
	|         | find /etc/crio -type f -exec                         |                       |         |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-272234 sudo                        | custom-flannel-272234 | jenkins | v1.34.0 | 04 Dec 24 21:03 UTC | 04 Dec 24 21:03 UTC |
	|         | crio config                                          |                       |         |         |                     |                     |
	| delete  | -p custom-flannel-272234                             | custom-flannel-272234 | jenkins | v1.34.0 | 04 Dec 24 21:03 UTC | 04 Dec 24 21:03 UTC |
	| start   | -p flannel-272234                                    | flannel-272234        | jenkins | v1.34.0 | 04 Dec 24 21:03 UTC |                     |
	|         | --memory=3072                                        |                       |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                       |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                       |         |         |                     |                     |
	|         | --cni=flannel --driver=kvm2                          |                       |         |         |                     |                     |
	|         | --container-runtime=crio                             |                       |         |         |                     |                     |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 21:03:55
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 21:03:55.174877   63462 out.go:345] Setting OutFile to fd 1 ...
	I1204 21:03:55.174985   63462 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 21:03:55.174994   63462 out.go:358] Setting ErrFile to fd 2...
	I1204 21:03:55.174998   63462 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 21:03:55.175176   63462 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19985-10581/.minikube/bin
	I1204 21:03:55.175744   63462 out.go:352] Setting JSON to false
	I1204 21:03:55.176854   63462 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6385,"bootTime":1733339850,"procs":313,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 21:03:55.176951   63462 start.go:139] virtualization: kvm guest
	I1204 21:03:55.179052   63462 out.go:177] * [flannel-272234] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1204 21:03:55.180390   63462 notify.go:220] Checking for updates...
	I1204 21:03:55.180415   63462 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 21:03:55.181811   63462 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 21:03:55.183123   63462 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:03:55.184458   63462 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 21:03:55.185600   63462 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1204 21:03:55.186758   63462 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 21:03:55.188771   63462 config.go:182] Loaded profile config "kindnet-272234": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:03:55.188926   63462 config.go:182] Loaded profile config "kubernetes-upgrade-697588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:03:55.189109   63462 config.go:182] Loaded profile config "pause-998149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:03:55.189232   63462 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 21:03:55.227685   63462 out.go:177] * Using the kvm2 driver based on user configuration
	I1204 21:03:55.229054   63462 start.go:297] selected driver: kvm2
	I1204 21:03:55.229067   63462 start.go:901] validating driver "kvm2" against <nil>
	I1204 21:03:55.229083   63462 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 21:03:55.230095   63462 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 21:03:55.230196   63462 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19985-10581/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1204 21:03:55.247007   63462 install.go:137] /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1204 21:03:55.247048   63462 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 21:03:55.247289   63462 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 21:03:55.247321   63462 cni.go:84] Creating CNI manager for "flannel"
	I1204 21:03:55.247341   63462 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I1204 21:03:55.247435   63462 start.go:340] cluster config:
	{Name:flannel-272234 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-272234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:03:55.247535   63462 iso.go:125] acquiring lock: {Name:mk5fb0f3f6da76e6cd812291a551e1592ef2c232 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 21:03:55.249316   63462 out.go:177] * Starting "flannel-272234" primary control-plane node in "flannel-272234" cluster
	I1204 21:03:53.629062   59256 main.go:141] libmachine: (pause-998149) Calling .GetIP
	I1204 21:03:53.632408   59256 main.go:141] libmachine: (pause-998149) DBG | domain pause-998149 has defined MAC address 52:54:00:c2:3c:cd in network mk-pause-998149
	I1204 21:03:53.632836   59256 main.go:141] libmachine: (pause-998149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:3c:cd", ip: ""} in network mk-pause-998149: {Iface:virbr2 ExpiryTime:2024-12-04 22:01:11 +0000 UTC Type:0 Mac:52:54:00:c2:3c:cd Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:pause-998149 Clientid:01:52:54:00:c2:3c:cd}
	I1204 21:03:53.632862   59256 main.go:141] libmachine: (pause-998149) DBG | domain pause-998149 has defined IP address 192.168.50.167 and MAC address 52:54:00:c2:3c:cd in network mk-pause-998149
	I1204 21:03:53.633053   59256 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1204 21:03:53.639590   59256 kubeadm.go:883] updating cluster {Name:pause-998149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:pause-998149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.167 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 21:03:53.639718   59256 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 21:03:53.639759   59256 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:03:53.686162   59256 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 21:03:53.686189   59256 crio.go:433] Images already preloaded, skipping extraction
	I1204 21:03:53.686239   59256 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:03:53.720597   59256 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 21:03:53.720622   59256 cache_images.go:84] Images are preloaded, skipping loading
	I1204 21:03:53.720630   59256 kubeadm.go:934] updating node { 192.168.50.167 8443 v1.31.2 crio true true} ...
	I1204 21:03:53.720734   59256 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-998149 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.167
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:pause-998149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 21:03:53.720793   59256 ssh_runner.go:195] Run: crio config
	I1204 21:03:53.781426   59256 cni.go:84] Creating CNI manager for ""
	I1204 21:03:53.781452   59256 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:03:53.781460   59256 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 21:03:53.781481   59256 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.167 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-998149 NodeName:pause-998149 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.167"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.167 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 21:03:53.781622   59256 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.167
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-998149"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.167"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.167"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1204 21:03:53.781679   59256 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 21:03:53.793265   59256 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 21:03:53.793320   59256 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 21:03:53.804232   59256 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1204 21:03:53.826340   59256 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 21:03:53.843797   59256 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I1204 21:03:53.860136   59256 ssh_runner.go:195] Run: grep 192.168.50.167	control-plane.minikube.internal$ /etc/hosts
	I1204 21:03:53.864026   59256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:03:54.025235   59256 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:03:54.043472   59256 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/pause-998149 for IP: 192.168.50.167
	I1204 21:03:54.043499   59256 certs.go:194] generating shared ca certs ...
	I1204 21:03:54.043519   59256 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:03:54.043674   59256 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 21:03:54.043727   59256 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 21:03:54.043738   59256 certs.go:256] generating profile certs ...
	I1204 21:03:54.043856   59256 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/pause-998149/client.key
	I1204 21:03:54.043921   59256 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/pause-998149/apiserver.key.55deb425
	I1204 21:03:54.043972   59256 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/pause-998149/proxy-client.key
	I1204 21:03:54.044109   59256 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 21:03:54.044149   59256 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 21:03:54.044161   59256 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 21:03:54.044195   59256 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 21:03:54.044236   59256 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 21:03:54.044266   59256 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 21:03:54.044323   59256 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:03:54.045264   59256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 21:03:54.080087   59256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 21:03:54.114270   59256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 21:03:54.139589   59256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 21:03:54.165286   59256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/pause-998149/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1204 21:03:54.189373   59256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/pause-998149/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 21:03:54.214503   59256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/pause-998149/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 21:03:54.238591   59256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/pause-998149/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1204 21:03:54.266627   59256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 21:03:54.292137   59256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 21:03:54.318582   59256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 21:03:54.340562   59256 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 21:03:54.356885   59256 ssh_runner.go:195] Run: openssl version
	I1204 21:03:54.362839   59256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 21:03:54.372765   59256 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 21:03:54.376941   59256 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 21:03:54.376984   59256 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 21:03:54.382984   59256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 21:03:54.392331   59256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 21:03:54.402656   59256 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:03:54.406740   59256 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:03:54.406784   59256 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:03:54.411872   59256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 21:03:54.420869   59256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 21:03:54.430816   59256 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 21:03:54.434891   59256 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 21:03:54.434938   59256 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 21:03:54.440358   59256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 21:03:54.449829   59256 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 21:03:54.455496   59256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 21:03:54.461414   59256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 21:03:54.466971   59256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 21:03:54.472459   59256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 21:03:54.477558   59256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 21:03:54.482534   59256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1204 21:03:54.487681   59256 kubeadm.go:392] StartCluster: {Name:pause-998149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:pause-998149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.167 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:03:54.487776   59256 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 21:03:54.487853   59256 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:03:54.524808   59256 cri.go:89] found id: "89a31cf47f2694fc4436e35415bfc8d832af07231b2b9f384cae00d5e2c05103"
	I1204 21:03:54.524842   59256 cri.go:89] found id: "fd4817756d4f776d96fb4838d0f847a0816c5d318ad4ee6e89a17414763a655d"
	I1204 21:03:54.524849   59256 cri.go:89] found id: "946e3b1886b4a8ed8a95b198a47e96a9b61a4b8c57969924bdce92c758d19a8a"
	I1204 21:03:54.524859   59256 cri.go:89] found id: "07637080c0a1bf8d2ff1dc83ae432c64d5fcf9e99696ed48324c7e7cf46208ce"
	I1204 21:03:54.524865   59256 cri.go:89] found id: "572e04fb7399099549ba8aaf8e68b1891cc5cf4c2e694872642ce58c585b41cd"
	I1204 21:03:54.524871   59256 cri.go:89] found id: "9e09b6a95abf72ba2267add46e4399bc3eeae04bfc1946452df5c8a780093df5"
	I1204 21:03:54.524876   59256 cri.go:89] found id: "8c1a5b072a9a6514932244c0b1b339c5b54458e66175da6e5ceedb907227e17d"
	I1204 21:03:54.524881   59256 cri.go:89] found id: "0c99f1c22873c88653df2c5f92066542afd01c32ea1f8e70f8f34dcd4a613e77"
	I1204 21:03:54.524886   59256 cri.go:89] found id: "5eebddaa2c3bab319d6426c7279a46f21703575ea5f00d89c1ae7db7b1dec3ce"
	I1204 21:03:54.524897   59256 cri.go:89] found id: "c16b7a787f010b1f60d732e09f4ba7dc4e084e26d52424eda53649d9d1d313bf"
	I1204 21:03:54.524907   59256 cri.go:89] found id: "7ea3d31d93f0d0c30546f4d4cb90ce808ffd4fa577dffe12678fb24f12153ca4"
	I1204 21:03:54.524913   59256 cri.go:89] found id: ""
	I1204 21:03:54.524970   59256 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-697588 -n kubernetes-upgrade-697588
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-697588 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-697588" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-697588
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-697588: (1.238862058s)
--- FAIL: TestKubernetesUpgrade (388.16s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (421.68s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-998149 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-998149 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (6m57.82535759s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-998149] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19985
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19985-10581/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19985-10581/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-998149" primary control-plane node in "pause-998149" cluster
	* Updating the running kvm2 "pause-998149" VM ...
	* Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-998149" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 21:01:51.522809   59256 out.go:345] Setting OutFile to fd 1 ...
	I1204 21:01:51.523065   59256 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 21:01:51.523075   59256 out.go:358] Setting ErrFile to fd 2...
	I1204 21:01:51.523080   59256 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 21:01:51.523311   59256 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19985-10581/.minikube/bin
	I1204 21:01:51.523960   59256 out.go:352] Setting JSON to false
	I1204 21:01:51.525561   59256 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6261,"bootTime":1733339850,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 21:01:51.525662   59256 start.go:139] virtualization: kvm guest
	I1204 21:01:51.528143   59256 out.go:177] * [pause-998149] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1204 21:01:51.529463   59256 notify.go:220] Checking for updates...
	I1204 21:01:51.529469   59256 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 21:01:51.530974   59256 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 21:01:51.532406   59256 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:01:51.533599   59256 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 21:01:51.534807   59256 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1204 21:01:51.536114   59256 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 21:01:51.538092   59256 config.go:182] Loaded profile config "pause-998149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:01:51.538562   59256 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:01:51.538612   59256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:01:51.560428   59256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33603
	I1204 21:01:51.561586   59256 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:01:51.563498   59256 main.go:141] libmachine: Using API Version  1
	I1204 21:01:51.563526   59256 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:01:51.563879   59256 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:01:51.564149   59256 main.go:141] libmachine: (pause-998149) Calling .DriverName
	I1204 21:01:51.564468   59256 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 21:01:51.564909   59256 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:01:51.564962   59256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:01:51.582781   59256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38979
	I1204 21:01:51.583419   59256 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:01:51.584023   59256 main.go:141] libmachine: Using API Version  1
	I1204 21:01:51.584055   59256 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:01:51.584431   59256 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:01:51.584612   59256 main.go:141] libmachine: (pause-998149) Calling .DriverName
	I1204 21:01:51.624042   59256 out.go:177] * Using the kvm2 driver based on existing profile
	I1204 21:01:51.625328   59256 start.go:297] selected driver: kvm2
	I1204 21:01:51.625349   59256 start.go:901] validating driver "kvm2" against &{Name:pause-998149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:pause-998149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.167 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:01:51.625550   59256 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 21:01:51.625912   59256 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 21:01:51.626016   59256 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19985-10581/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1204 21:01:51.645256   59256 install.go:137] /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1204 21:01:51.646128   59256 cni.go:84] Creating CNI manager for ""
	I1204 21:01:51.646185   59256 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:01:51.646236   59256 start.go:340] cluster config:
	{Name:pause-998149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:pause-998149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.167 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:01:51.646358   59256 iso.go:125] acquiring lock: {Name:mk5fb0f3f6da76e6cd812291a551e1592ef2c232 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 21:01:51.648097   59256 out.go:177] * Starting "pause-998149" primary control-plane node in "pause-998149" cluster
	I1204 21:01:51.649324   59256 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 21:01:51.649383   59256 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1204 21:01:51.649396   59256 cache.go:56] Caching tarball of preloaded images
	I1204 21:01:51.649484   59256 preload.go:172] Found /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 21:01:51.649501   59256 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 21:01:51.649690   59256 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/pause-998149/config.json ...
	I1204 21:01:51.649950   59256 start.go:360] acquireMachinesLock for pause-998149: {Name:mkf124e8b45170ae95981b24944344de6899c5b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 21:02:12.111836   59256 start.go:364] duration metric: took 20.461842141s to acquireMachinesLock for "pause-998149"
	I1204 21:02:12.111893   59256 start.go:96] Skipping create...Using existing machine configuration
	I1204 21:02:12.111903   59256 fix.go:54] fixHost starting: 
	I1204 21:02:12.112289   59256 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:02:12.112330   59256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:02:12.132568   59256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37109
	I1204 21:02:12.132979   59256 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:02:12.133548   59256 main.go:141] libmachine: Using API Version  1
	I1204 21:02:12.133573   59256 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:02:12.133942   59256 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:02:12.134189   59256 main.go:141] libmachine: (pause-998149) Calling .DriverName
	I1204 21:02:12.134365   59256 main.go:141] libmachine: (pause-998149) Calling .GetState
	I1204 21:02:12.135923   59256 fix.go:112] recreateIfNeeded on pause-998149: state=Running err=<nil>
	W1204 21:02:12.135957   59256 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 21:02:12.137730   59256 out.go:177] * Updating the running kvm2 "pause-998149" VM ...
	I1204 21:02:12.138852   59256 machine.go:93] provisionDockerMachine start ...
	I1204 21:02:12.138873   59256 main.go:141] libmachine: (pause-998149) Calling .DriverName
	I1204 21:02:12.139056   59256 main.go:141] libmachine: (pause-998149) Calling .GetSSHHostname
	I1204 21:02:12.141705   59256 main.go:141] libmachine: (pause-998149) DBG | domain pause-998149 has defined MAC address 52:54:00:c2:3c:cd in network mk-pause-998149
	I1204 21:02:12.142170   59256 main.go:141] libmachine: (pause-998149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:3c:cd", ip: ""} in network mk-pause-998149: {Iface:virbr2 ExpiryTime:2024-12-04 22:01:11 +0000 UTC Type:0 Mac:52:54:00:c2:3c:cd Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:pause-998149 Clientid:01:52:54:00:c2:3c:cd}
	I1204 21:02:12.142196   59256 main.go:141] libmachine: (pause-998149) DBG | domain pause-998149 has defined IP address 192.168.50.167 and MAC address 52:54:00:c2:3c:cd in network mk-pause-998149
	I1204 21:02:12.142374   59256 main.go:141] libmachine: (pause-998149) Calling .GetSSHPort
	I1204 21:02:12.142549   59256 main.go:141] libmachine: (pause-998149) Calling .GetSSHKeyPath
	I1204 21:02:12.142681   59256 main.go:141] libmachine: (pause-998149) Calling .GetSSHKeyPath
	I1204 21:02:12.142823   59256 main.go:141] libmachine: (pause-998149) Calling .GetSSHUsername
	I1204 21:02:12.142965   59256 main.go:141] libmachine: Using SSH client type: native
	I1204 21:02:12.143174   59256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.167 22 <nil> <nil>}
	I1204 21:02:12.143188   59256 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 21:02:12.259865   59256 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-998149
	
	I1204 21:02:12.259899   59256 main.go:141] libmachine: (pause-998149) Calling .GetMachineName
	I1204 21:02:12.260185   59256 buildroot.go:166] provisioning hostname "pause-998149"
	I1204 21:02:12.260218   59256 main.go:141] libmachine: (pause-998149) Calling .GetMachineName
	I1204 21:02:12.260413   59256 main.go:141] libmachine: (pause-998149) Calling .GetSSHHostname
	I1204 21:02:12.263513   59256 main.go:141] libmachine: (pause-998149) DBG | domain pause-998149 has defined MAC address 52:54:00:c2:3c:cd in network mk-pause-998149
	I1204 21:02:12.263977   59256 main.go:141] libmachine: (pause-998149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:3c:cd", ip: ""} in network mk-pause-998149: {Iface:virbr2 ExpiryTime:2024-12-04 22:01:11 +0000 UTC Type:0 Mac:52:54:00:c2:3c:cd Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:pause-998149 Clientid:01:52:54:00:c2:3c:cd}
	I1204 21:02:12.264006   59256 main.go:141] libmachine: (pause-998149) DBG | domain pause-998149 has defined IP address 192.168.50.167 and MAC address 52:54:00:c2:3c:cd in network mk-pause-998149
	I1204 21:02:12.264282   59256 main.go:141] libmachine: (pause-998149) Calling .GetSSHPort
	I1204 21:02:12.264532   59256 main.go:141] libmachine: (pause-998149) Calling .GetSSHKeyPath
	I1204 21:02:12.264742   59256 main.go:141] libmachine: (pause-998149) Calling .GetSSHKeyPath
	I1204 21:02:12.264916   59256 main.go:141] libmachine: (pause-998149) Calling .GetSSHUsername
	I1204 21:02:12.265075   59256 main.go:141] libmachine: Using SSH client type: native
	I1204 21:02:12.265316   59256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.167 22 <nil> <nil>}
	I1204 21:02:12.265336   59256 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-998149 && echo "pause-998149" | sudo tee /etc/hostname
	I1204 21:02:12.395436   59256 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-998149
	
	I1204 21:02:12.395466   59256 main.go:141] libmachine: (pause-998149) Calling .GetSSHHostname
	I1204 21:02:12.398315   59256 main.go:141] libmachine: (pause-998149) DBG | domain pause-998149 has defined MAC address 52:54:00:c2:3c:cd in network mk-pause-998149
	I1204 21:02:12.398687   59256 main.go:141] libmachine: (pause-998149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:3c:cd", ip: ""} in network mk-pause-998149: {Iface:virbr2 ExpiryTime:2024-12-04 22:01:11 +0000 UTC Type:0 Mac:52:54:00:c2:3c:cd Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:pause-998149 Clientid:01:52:54:00:c2:3c:cd}
	I1204 21:02:12.398730   59256 main.go:141] libmachine: (pause-998149) DBG | domain pause-998149 has defined IP address 192.168.50.167 and MAC address 52:54:00:c2:3c:cd in network mk-pause-998149
	I1204 21:02:12.398922   59256 main.go:141] libmachine: (pause-998149) Calling .GetSSHPort
	I1204 21:02:12.399152   59256 main.go:141] libmachine: (pause-998149) Calling .GetSSHKeyPath
	I1204 21:02:12.399400   59256 main.go:141] libmachine: (pause-998149) Calling .GetSSHKeyPath
	I1204 21:02:12.399568   59256 main.go:141] libmachine: (pause-998149) Calling .GetSSHUsername
	I1204 21:02:12.399758   59256 main.go:141] libmachine: Using SSH client type: native
	I1204 21:02:12.399957   59256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.167 22 <nil> <nil>}
	I1204 21:02:12.399974   59256 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-998149' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-998149/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-998149' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 21:02:12.524043   59256 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 21:02:12.524080   59256 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 21:02:12.524099   59256 buildroot.go:174] setting up certificates
	I1204 21:02:12.524109   59256 provision.go:84] configureAuth start
	I1204 21:02:12.524118   59256 main.go:141] libmachine: (pause-998149) Calling .GetMachineName
	I1204 21:02:12.524456   59256 main.go:141] libmachine: (pause-998149) Calling .GetIP
	I1204 21:02:12.527440   59256 main.go:141] libmachine: (pause-998149) DBG | domain pause-998149 has defined MAC address 52:54:00:c2:3c:cd in network mk-pause-998149
	I1204 21:02:12.527832   59256 main.go:141] libmachine: (pause-998149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:3c:cd", ip: ""} in network mk-pause-998149: {Iface:virbr2 ExpiryTime:2024-12-04 22:01:11 +0000 UTC Type:0 Mac:52:54:00:c2:3c:cd Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:pause-998149 Clientid:01:52:54:00:c2:3c:cd}
	I1204 21:02:12.527852   59256 main.go:141] libmachine: (pause-998149) DBG | domain pause-998149 has defined IP address 192.168.50.167 and MAC address 52:54:00:c2:3c:cd in network mk-pause-998149
	I1204 21:02:12.527995   59256 main.go:141] libmachine: (pause-998149) Calling .GetSSHHostname
	I1204 21:02:12.530637   59256 main.go:141] libmachine: (pause-998149) DBG | domain pause-998149 has defined MAC address 52:54:00:c2:3c:cd in network mk-pause-998149
	I1204 21:02:12.530954   59256 main.go:141] libmachine: (pause-998149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:3c:cd", ip: ""} in network mk-pause-998149: {Iface:virbr2 ExpiryTime:2024-12-04 22:01:11 +0000 UTC Type:0 Mac:52:54:00:c2:3c:cd Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:pause-998149 Clientid:01:52:54:00:c2:3c:cd}
	I1204 21:02:12.530985   59256 main.go:141] libmachine: (pause-998149) DBG | domain pause-998149 has defined IP address 192.168.50.167 and MAC address 52:54:00:c2:3c:cd in network mk-pause-998149
	I1204 21:02:12.531130   59256 provision.go:143] copyHostCerts
	I1204 21:02:12.531210   59256 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 21:02:12.531236   59256 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 21:02:12.531318   59256 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 21:02:12.531457   59256 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 21:02:12.531477   59256 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 21:02:12.531513   59256 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 21:02:12.531647   59256 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 21:02:12.531660   59256 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 21:02:12.531688   59256 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 21:02:12.531753   59256 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.pause-998149 san=[127.0.0.1 192.168.50.167 localhost minikube pause-998149]
	I1204 21:02:12.645786   59256 provision.go:177] copyRemoteCerts
	I1204 21:02:12.645860   59256 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 21:02:12.645884   59256 main.go:141] libmachine: (pause-998149) Calling .GetSSHHostname
	I1204 21:02:12.648769   59256 main.go:141] libmachine: (pause-998149) DBG | domain pause-998149 has defined MAC address 52:54:00:c2:3c:cd in network mk-pause-998149
	I1204 21:02:12.649118   59256 main.go:141] libmachine: (pause-998149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:3c:cd", ip: ""} in network mk-pause-998149: {Iface:virbr2 ExpiryTime:2024-12-04 22:01:11 +0000 UTC Type:0 Mac:52:54:00:c2:3c:cd Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:pause-998149 Clientid:01:52:54:00:c2:3c:cd}
	I1204 21:02:12.649154   59256 main.go:141] libmachine: (pause-998149) DBG | domain pause-998149 has defined IP address 192.168.50.167 and MAC address 52:54:00:c2:3c:cd in network mk-pause-998149
	I1204 21:02:12.649361   59256 main.go:141] libmachine: (pause-998149) Calling .GetSSHPort
	I1204 21:02:12.649553   59256 main.go:141] libmachine: (pause-998149) Calling .GetSSHKeyPath
	I1204 21:02:12.649845   59256 main.go:141] libmachine: (pause-998149) Calling .GetSSHUsername
	I1204 21:02:12.650000   59256 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/pause-998149/id_rsa Username:docker}
	I1204 21:02:12.737470   59256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 21:02:12.761336   59256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1204 21:02:12.788869   59256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 21:02:12.811696   59256 provision.go:87] duration metric: took 287.574234ms to configureAuth
	I1204 21:02:12.811728   59256 buildroot.go:189] setting minikube options for container-runtime
	I1204 21:02:12.811948   59256 config.go:182] Loaded profile config "pause-998149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:02:12.812022   59256 main.go:141] libmachine: (pause-998149) Calling .GetSSHHostname
	I1204 21:02:12.814684   59256 main.go:141] libmachine: (pause-998149) DBG | domain pause-998149 has defined MAC address 52:54:00:c2:3c:cd in network mk-pause-998149
	I1204 21:02:12.815033   59256 main.go:141] libmachine: (pause-998149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:3c:cd", ip: ""} in network mk-pause-998149: {Iface:virbr2 ExpiryTime:2024-12-04 22:01:11 +0000 UTC Type:0 Mac:52:54:00:c2:3c:cd Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:pause-998149 Clientid:01:52:54:00:c2:3c:cd}
	I1204 21:02:12.815064   59256 main.go:141] libmachine: (pause-998149) DBG | domain pause-998149 has defined IP address 192.168.50.167 and MAC address 52:54:00:c2:3c:cd in network mk-pause-998149
	I1204 21:02:12.815266   59256 main.go:141] libmachine: (pause-998149) Calling .GetSSHPort
	I1204 21:02:12.815509   59256 main.go:141] libmachine: (pause-998149) Calling .GetSSHKeyPath
	I1204 21:02:12.815678   59256 main.go:141] libmachine: (pause-998149) Calling .GetSSHKeyPath
	I1204 21:02:12.815821   59256 main.go:141] libmachine: (pause-998149) Calling .GetSSHUsername
	I1204 21:02:12.815975   59256 main.go:141] libmachine: Using SSH client type: native
	I1204 21:02:12.816186   59256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.167 22 <nil> <nil>}
	I1204 21:02:12.816205   59256 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 21:02:20.332434   59256 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 21:02:20.332476   59256 machine.go:96] duration metric: took 8.193607644s to provisionDockerMachine
	I1204 21:02:20.332491   59256 start.go:293] postStartSetup for "pause-998149" (driver="kvm2")
	I1204 21:02:20.332504   59256 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 21:02:20.332526   59256 main.go:141] libmachine: (pause-998149) Calling .DriverName
	I1204 21:02:20.332896   59256 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 21:02:20.332926   59256 main.go:141] libmachine: (pause-998149) Calling .GetSSHHostname
	I1204 21:02:20.336114   59256 main.go:141] libmachine: (pause-998149) DBG | domain pause-998149 has defined MAC address 52:54:00:c2:3c:cd in network mk-pause-998149
	I1204 21:02:20.336641   59256 main.go:141] libmachine: (pause-998149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:3c:cd", ip: ""} in network mk-pause-998149: {Iface:virbr2 ExpiryTime:2024-12-04 22:01:11 +0000 UTC Type:0 Mac:52:54:00:c2:3c:cd Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:pause-998149 Clientid:01:52:54:00:c2:3c:cd}
	I1204 21:02:20.336681   59256 main.go:141] libmachine: (pause-998149) DBG | domain pause-998149 has defined IP address 192.168.50.167 and MAC address 52:54:00:c2:3c:cd in network mk-pause-998149
	I1204 21:02:20.336895   59256 main.go:141] libmachine: (pause-998149) Calling .GetSSHPort
	I1204 21:02:20.337103   59256 main.go:141] libmachine: (pause-998149) Calling .GetSSHKeyPath
	I1204 21:02:20.337292   59256 main.go:141] libmachine: (pause-998149) Calling .GetSSHUsername
	I1204 21:02:20.337458   59256 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/pause-998149/id_rsa Username:docker}
	I1204 21:02:20.421667   59256 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 21:02:20.426091   59256 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 21:02:20.426115   59256 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 21:02:20.426192   59256 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 21:02:20.426288   59256 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 21:02:20.426398   59256 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 21:02:20.435460   59256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:02:20.459068   59256 start.go:296] duration metric: took 126.543398ms for postStartSetup
	I1204 21:02:20.459126   59256 fix.go:56] duration metric: took 8.347221355s for fixHost
	I1204 21:02:20.459153   59256 main.go:141] libmachine: (pause-998149) Calling .GetSSHHostname
	I1204 21:02:20.462062   59256 main.go:141] libmachine: (pause-998149) DBG | domain pause-998149 has defined MAC address 52:54:00:c2:3c:cd in network mk-pause-998149
	I1204 21:02:20.462495   59256 main.go:141] libmachine: (pause-998149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:3c:cd", ip: ""} in network mk-pause-998149: {Iface:virbr2 ExpiryTime:2024-12-04 22:01:11 +0000 UTC Type:0 Mac:52:54:00:c2:3c:cd Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:pause-998149 Clientid:01:52:54:00:c2:3c:cd}
	I1204 21:02:20.462531   59256 main.go:141] libmachine: (pause-998149) DBG | domain pause-998149 has defined IP address 192.168.50.167 and MAC address 52:54:00:c2:3c:cd in network mk-pause-998149
	I1204 21:02:20.462695   59256 main.go:141] libmachine: (pause-998149) Calling .GetSSHPort
	I1204 21:02:20.462925   59256 main.go:141] libmachine: (pause-998149) Calling .GetSSHKeyPath
	I1204 21:02:20.463104   59256 main.go:141] libmachine: (pause-998149) Calling .GetSSHKeyPath
	I1204 21:02:20.463246   59256 main.go:141] libmachine: (pause-998149) Calling .GetSSHUsername
	I1204 21:02:20.463433   59256 main.go:141] libmachine: Using SSH client type: native
	I1204 21:02:20.463657   59256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.167 22 <nil> <nil>}
	I1204 21:02:20.463671   59256 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 21:02:20.607635   59256 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733346140.596153479
	
	I1204 21:02:20.607664   59256 fix.go:216] guest clock: 1733346140.596153479
	I1204 21:02:20.607676   59256 fix.go:229] Guest: 2024-12-04 21:02:20.596153479 +0000 UTC Remote: 2024-12-04 21:02:20.459131926 +0000 UTC m=+28.984929425 (delta=137.021553ms)
	I1204 21:02:20.607703   59256 fix.go:200] guest clock delta is within tolerance: 137.021553ms
	I1204 21:02:20.607709   59256 start.go:83] releasing machines lock for "pause-998149", held for 8.495843105s
	I1204 21:02:20.607738   59256 main.go:141] libmachine: (pause-998149) Calling .DriverName
	I1204 21:02:20.608082   59256 main.go:141] libmachine: (pause-998149) Calling .GetIP
	I1204 21:02:20.611683   59256 main.go:141] libmachine: (pause-998149) DBG | domain pause-998149 has defined MAC address 52:54:00:c2:3c:cd in network mk-pause-998149
	I1204 21:02:20.612141   59256 main.go:141] libmachine: (pause-998149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:3c:cd", ip: ""} in network mk-pause-998149: {Iface:virbr2 ExpiryTime:2024-12-04 22:01:11 +0000 UTC Type:0 Mac:52:54:00:c2:3c:cd Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:pause-998149 Clientid:01:52:54:00:c2:3c:cd}
	I1204 21:02:20.612170   59256 main.go:141] libmachine: (pause-998149) DBG | domain pause-998149 has defined IP address 192.168.50.167 and MAC address 52:54:00:c2:3c:cd in network mk-pause-998149
	I1204 21:02:20.612336   59256 main.go:141] libmachine: (pause-998149) Calling .DriverName
	I1204 21:02:20.612959   59256 main.go:141] libmachine: (pause-998149) Calling .DriverName
	I1204 21:02:20.613128   59256 main.go:141] libmachine: (pause-998149) Calling .DriverName
	I1204 21:02:20.613210   59256 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 21:02:20.613245   59256 main.go:141] libmachine: (pause-998149) Calling .GetSSHHostname
	I1204 21:02:20.613367   59256 ssh_runner.go:195] Run: cat /version.json
	I1204 21:02:20.613393   59256 main.go:141] libmachine: (pause-998149) Calling .GetSSHHostname
	I1204 21:02:20.616505   59256 main.go:141] libmachine: (pause-998149) DBG | domain pause-998149 has defined MAC address 52:54:00:c2:3c:cd in network mk-pause-998149
	I1204 21:02:20.616619   59256 main.go:141] libmachine: (pause-998149) DBG | domain pause-998149 has defined MAC address 52:54:00:c2:3c:cd in network mk-pause-998149
	I1204 21:02:20.617004   59256 main.go:141] libmachine: (pause-998149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:3c:cd", ip: ""} in network mk-pause-998149: {Iface:virbr2 ExpiryTime:2024-12-04 22:01:11 +0000 UTC Type:0 Mac:52:54:00:c2:3c:cd Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:pause-998149 Clientid:01:52:54:00:c2:3c:cd}
	I1204 21:02:20.617037   59256 main.go:141] libmachine: (pause-998149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:3c:cd", ip: ""} in network mk-pause-998149: {Iface:virbr2 ExpiryTime:2024-12-04 22:01:11 +0000 UTC Type:0 Mac:52:54:00:c2:3c:cd Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:pause-998149 Clientid:01:52:54:00:c2:3c:cd}
	I1204 21:02:20.617062   59256 main.go:141] libmachine: (pause-998149) DBG | domain pause-998149 has defined IP address 192.168.50.167 and MAC address 52:54:00:c2:3c:cd in network mk-pause-998149
	I1204 21:02:20.617140   59256 main.go:141] libmachine: (pause-998149) DBG | domain pause-998149 has defined IP address 192.168.50.167 and MAC address 52:54:00:c2:3c:cd in network mk-pause-998149
	I1204 21:02:20.617259   59256 main.go:141] libmachine: (pause-998149) Calling .GetSSHPort
	I1204 21:02:20.617473   59256 main.go:141] libmachine: (pause-998149) Calling .GetSSHKeyPath
	I1204 21:02:20.617500   59256 main.go:141] libmachine: (pause-998149) Calling .GetSSHPort
	I1204 21:02:20.617668   59256 main.go:141] libmachine: (pause-998149) Calling .GetSSHKeyPath
	I1204 21:02:20.617717   59256 main.go:141] libmachine: (pause-998149) Calling .GetSSHUsername
	I1204 21:02:20.617859   59256 main.go:141] libmachine: (pause-998149) Calling .GetSSHUsername
	I1204 21:02:20.617865   59256 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/pause-998149/id_rsa Username:docker}
	I1204 21:02:20.618016   59256 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/pause-998149/id_rsa Username:docker}
	I1204 21:02:20.871645   59256 ssh_runner.go:195] Run: systemctl --version
	I1204 21:02:20.918708   59256 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 21:02:21.253316   59256 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 21:02:21.305037   59256 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 21:02:21.305134   59256 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 21:02:21.341212   59256 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1204 21:02:21.341244   59256 start.go:495] detecting cgroup driver to use...
	I1204 21:02:21.341343   59256 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 21:02:21.376203   59256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 21:02:21.418113   59256 docker.go:217] disabling cri-docker service (if available) ...
	I1204 21:02:21.418193   59256 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 21:02:21.465319   59256 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 21:02:21.489878   59256 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 21:02:21.764986   59256 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 21:02:21.983983   59256 docker.go:233] disabling docker service ...
	I1204 21:02:21.984046   59256 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 21:02:22.022026   59256 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 21:02:22.062393   59256 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 21:02:22.263819   59256 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 21:02:22.494862   59256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 21:02:22.520511   59256 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 21:02:22.553594   59256 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 21:02:22.553668   59256 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:02:22.569369   59256 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 21:02:22.569458   59256 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:02:22.588457   59256 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:02:22.606533   59256 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:02:22.621762   59256 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 21:02:22.642551   59256 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:02:22.655560   59256 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:02:22.682505   59256 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:02:22.701102   59256 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 21:02:22.715881   59256 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 21:02:22.728950   59256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:02:22.977441   59256 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 21:03:53.500385   59256 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.522887404s)
	I1204 21:03:53.500419   59256 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 21:03:53.500471   59256 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 21:03:53.506596   59256 start.go:563] Will wait 60s for crictl version
	I1204 21:03:53.506673   59256 ssh_runner.go:195] Run: which crictl
	I1204 21:03:53.510707   59256 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 21:03:53.558782   59256 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 21:03:53.558894   59256 ssh_runner.go:195] Run: crio --version
	I1204 21:03:53.591112   59256 ssh_runner.go:195] Run: crio --version
	I1204 21:03:53.627789   59256 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 21:03:53.629062   59256 main.go:141] libmachine: (pause-998149) Calling .GetIP
	I1204 21:03:53.632408   59256 main.go:141] libmachine: (pause-998149) DBG | domain pause-998149 has defined MAC address 52:54:00:c2:3c:cd in network mk-pause-998149
	I1204 21:03:53.632836   59256 main.go:141] libmachine: (pause-998149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:3c:cd", ip: ""} in network mk-pause-998149: {Iface:virbr2 ExpiryTime:2024-12-04 22:01:11 +0000 UTC Type:0 Mac:52:54:00:c2:3c:cd Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:pause-998149 Clientid:01:52:54:00:c2:3c:cd}
	I1204 21:03:53.632862   59256 main.go:141] libmachine: (pause-998149) DBG | domain pause-998149 has defined IP address 192.168.50.167 and MAC address 52:54:00:c2:3c:cd in network mk-pause-998149
	I1204 21:03:53.633053   59256 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1204 21:03:53.639590   59256 kubeadm.go:883] updating cluster {Name:pause-998149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:pause-998149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.167 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 21:03:53.639718   59256 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 21:03:53.639759   59256 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:03:53.686162   59256 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 21:03:53.686189   59256 crio.go:433] Images already preloaded, skipping extraction
	I1204 21:03:53.686239   59256 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:03:53.720597   59256 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 21:03:53.720622   59256 cache_images.go:84] Images are preloaded, skipping loading
	I1204 21:03:53.720630   59256 kubeadm.go:934] updating node { 192.168.50.167 8443 v1.31.2 crio true true} ...
	I1204 21:03:53.720734   59256 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-998149 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.167
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:pause-998149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 21:03:53.720793   59256 ssh_runner.go:195] Run: crio config
	I1204 21:03:53.781426   59256 cni.go:84] Creating CNI manager for ""
	I1204 21:03:53.781452   59256 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:03:53.781460   59256 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 21:03:53.781481   59256 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.167 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-998149 NodeName:pause-998149 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.167"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.167 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 21:03:53.781622   59256 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.167
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-998149"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.167"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.167"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1204 21:03:53.781679   59256 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 21:03:53.793265   59256 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 21:03:53.793320   59256 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 21:03:53.804232   59256 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1204 21:03:53.826340   59256 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 21:03:53.843797   59256 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I1204 21:03:53.860136   59256 ssh_runner.go:195] Run: grep 192.168.50.167	control-plane.minikube.internal$ /etc/hosts
	I1204 21:03:53.864026   59256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:03:54.025235   59256 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:03:54.043472   59256 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/pause-998149 for IP: 192.168.50.167
	I1204 21:03:54.043499   59256 certs.go:194] generating shared ca certs ...
	I1204 21:03:54.043519   59256 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:03:54.043674   59256 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 21:03:54.043727   59256 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 21:03:54.043738   59256 certs.go:256] generating profile certs ...
	I1204 21:03:54.043856   59256 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/pause-998149/client.key
	I1204 21:03:54.043921   59256 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/pause-998149/apiserver.key.55deb425
	I1204 21:03:54.043972   59256 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/pause-998149/proxy-client.key
	I1204 21:03:54.044109   59256 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 21:03:54.044149   59256 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 21:03:54.044161   59256 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 21:03:54.044195   59256 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 21:03:54.044236   59256 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 21:03:54.044266   59256 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 21:03:54.044323   59256 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:03:54.045264   59256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 21:03:54.080087   59256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 21:03:54.114270   59256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 21:03:54.139589   59256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 21:03:54.165286   59256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/pause-998149/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1204 21:03:54.189373   59256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/pause-998149/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 21:03:54.214503   59256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/pause-998149/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 21:03:54.238591   59256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/pause-998149/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1204 21:03:54.266627   59256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 21:03:54.292137   59256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 21:03:54.318582   59256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 21:03:54.340562   59256 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 21:03:54.356885   59256 ssh_runner.go:195] Run: openssl version
	I1204 21:03:54.362839   59256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 21:03:54.372765   59256 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 21:03:54.376941   59256 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 21:03:54.376984   59256 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 21:03:54.382984   59256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 21:03:54.392331   59256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 21:03:54.402656   59256 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:03:54.406740   59256 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:03:54.406784   59256 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:03:54.411872   59256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 21:03:54.420869   59256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 21:03:54.430816   59256 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 21:03:54.434891   59256 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 21:03:54.434938   59256 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 21:03:54.440358   59256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 21:03:54.449829   59256 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 21:03:54.455496   59256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 21:03:54.461414   59256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 21:03:54.466971   59256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 21:03:54.472459   59256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 21:03:54.477558   59256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 21:03:54.482534   59256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1204 21:03:54.487681   59256 kubeadm.go:392] StartCluster: {Name:pause-998149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:pause-998149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.167 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:03:54.487776   59256 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 21:03:54.487853   59256 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:03:54.524808   59256 cri.go:89] found id: "89a31cf47f2694fc4436e35415bfc8d832af07231b2b9f384cae00d5e2c05103"
	I1204 21:03:54.524842   59256 cri.go:89] found id: "fd4817756d4f776d96fb4838d0f847a0816c5d318ad4ee6e89a17414763a655d"
	I1204 21:03:54.524849   59256 cri.go:89] found id: "946e3b1886b4a8ed8a95b198a47e96a9b61a4b8c57969924bdce92c758d19a8a"
	I1204 21:03:54.524859   59256 cri.go:89] found id: "07637080c0a1bf8d2ff1dc83ae432c64d5fcf9e99696ed48324c7e7cf46208ce"
	I1204 21:03:54.524865   59256 cri.go:89] found id: "572e04fb7399099549ba8aaf8e68b1891cc5cf4c2e694872642ce58c585b41cd"
	I1204 21:03:54.524871   59256 cri.go:89] found id: "9e09b6a95abf72ba2267add46e4399bc3eeae04bfc1946452df5c8a780093df5"
	I1204 21:03:54.524876   59256 cri.go:89] found id: "8c1a5b072a9a6514932244c0b1b339c5b54458e66175da6e5ceedb907227e17d"
	I1204 21:03:54.524881   59256 cri.go:89] found id: "0c99f1c22873c88653df2c5f92066542afd01c32ea1f8e70f8f34dcd4a613e77"
	I1204 21:03:54.524886   59256 cri.go:89] found id: "5eebddaa2c3bab319d6426c7279a46f21703575ea5f00d89c1ae7db7b1dec3ce"
	I1204 21:03:54.524897   59256 cri.go:89] found id: "c16b7a787f010b1f60d732e09f4ba7dc4e084e26d52424eda53649d9d1d313bf"
	I1204 21:03:54.524907   59256 cri.go:89] found id: "7ea3d31d93f0d0c30546f4d4cb90ce808ffd4fa577dffe12678fb24f12153ca4"
	I1204 21:03:54.524913   59256 cri.go:89] found id: ""
	I1204 21:03:54.524970   59256 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-998149 -n pause-998149
E1204 21:08:49.507145   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/custom-flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-998149 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-998149 logs -n 25: (1.183982343s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|--------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |      Profile       |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|--------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-272234 sudo                                | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | systemctl status kubelet --all                       |                    |         |         |                     |                     |
	|         | --full --no-pager                                    |                    |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo                                | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | systemctl cat kubelet                                |                    |         |         |                     |                     |
	|         | --no-pager                                           |                    |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo                                | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | journalctl -xeu kubelet --all                        |                    |         |         |                     |                     |
	|         | --full --no-pager                                    |                    |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo cat                            | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                    |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo cat                            | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                    |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo                                | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC |                     |
	|         | systemctl status docker --all                        |                    |         |         |                     |                     |
	|         | --full --no-pager                                    |                    |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo                                | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | systemctl cat docker                                 |                    |         |         |                     |                     |
	|         | --no-pager                                           |                    |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo cat                            | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | /etc/docker/daemon.json                              |                    |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo docker                         | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC |                     |
	|         | system info                                          |                    |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo                                | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC |                     |
	|         | systemctl status cri-docker                          |                    |         |         |                     |                     |
	|         | --all --full --no-pager                              |                    |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo                                | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | systemctl cat cri-docker                             |                    |         |         |                     |                     |
	|         | --no-pager                                           |                    |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo cat                            | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                    |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo cat                            | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                    |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo                                | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | cri-dockerd --version                                |                    |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo                                | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC |                     |
	|         | systemctl status containerd                          |                    |         |         |                     |                     |
	|         | --all --full --no-pager                              |                    |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo                                | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | systemctl cat containerd                             |                    |         |         |                     |                     |
	|         | --no-pager                                           |                    |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo cat                            | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | /lib/systemd/system/containerd.service               |                    |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo cat                            | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | /etc/containerd/config.toml                          |                    |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo                                | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | containerd config dump                               |                    |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo                                | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | systemctl status crio --all                          |                    |         |         |                     |                     |
	|         | --full --no-pager                                    |                    |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo                                | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | systemctl cat crio --no-pager                        |                    |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo find                           | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                    |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                    |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo crio                           | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | config                                               |                    |         |         |                     |                     |
	| delete  | -p bridge-272234                                     | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	| start   | -p embed-certs-566991                                | embed-certs-566991 | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC |                     |
	|         | --memory=2200                                        |                    |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                    |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                          |                    |         |         |                     |                     |
	|         |  --container-runtime=crio                            |                    |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                         |                    |         |         |                     |                     |
	|---------|------------------------------------------------------|--------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 21:07:32
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 21:07:32.271420   72678 out.go:345] Setting OutFile to fd 1 ...
	I1204 21:07:32.271650   72678 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 21:07:32.271658   72678 out.go:358] Setting ErrFile to fd 2...
	I1204 21:07:32.271663   72678 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 21:07:32.271853   72678 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19985-10581/.minikube/bin
	I1204 21:07:32.272400   72678 out.go:352] Setting JSON to false
	I1204 21:07:32.273407   72678 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6602,"bootTime":1733339850,"procs":304,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 21:07:32.273501   72678 start.go:139] virtualization: kvm guest
	I1204 21:07:32.275806   72678 out.go:177] * [embed-certs-566991] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1204 21:07:32.277553   72678 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 21:07:32.277560   72678 notify.go:220] Checking for updates...
	I1204 21:07:32.280428   72678 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 21:07:32.281753   72678 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:07:32.283168   72678 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 21:07:32.284464   72678 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1204 21:07:32.285658   72678 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 21:07:32.287197   72678 config.go:182] Loaded profile config "no-preload-534766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:07:32.287322   72678 config.go:182] Loaded profile config "old-k8s-version-082859": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1204 21:07:32.287476   72678 config.go:182] Loaded profile config "pause-998149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:07:32.287586   72678 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 21:07:32.324819   72678 out.go:177] * Using the kvm2 driver based on user configuration
	I1204 21:07:32.326107   72678 start.go:297] selected driver: kvm2
	I1204 21:07:32.326126   72678 start.go:901] validating driver "kvm2" against <nil>
	I1204 21:07:32.326140   72678 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 21:07:32.326855   72678 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 21:07:32.326930   72678 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19985-10581/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1204 21:07:32.341855   72678 install.go:137] /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1204 21:07:32.341893   72678 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 21:07:32.342209   72678 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 21:07:32.342243   72678 cni.go:84] Creating CNI manager for ""
	I1204 21:07:32.342302   72678 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:07:32.342318   72678 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1204 21:07:32.342385   72678 start.go:340] cluster config:
	{Name:embed-certs-566991 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-566991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:07:32.342514   72678 iso.go:125] acquiring lock: {Name:mk5fb0f3f6da76e6cd812291a551e1592ef2c232 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 21:07:32.344321   72678 out.go:177] * Starting "embed-certs-566991" primary control-plane node in "embed-certs-566991" cluster
	I1204 21:07:32.345628   72678 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 21:07:32.345658   72678 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1204 21:07:32.345666   72678 cache.go:56] Caching tarball of preloaded images
	I1204 21:07:32.345793   72678 preload.go:172] Found /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 21:07:32.345808   72678 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 21:07:32.345929   72678 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/config.json ...
	I1204 21:07:32.345954   72678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/config.json: {Name:mkfcf7510ce9165fe8f524a3bbc4d0f339bc083d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:07:32.346108   72678 start.go:360] acquireMachinesLock for embed-certs-566991: {Name:mkf124e8b45170ae95981b24944344de6899c5b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 21:07:32.346152   72678 start.go:364] duration metric: took 26.779µs to acquireMachinesLock for "embed-certs-566991"
	I1204 21:07:32.346185   72678 start.go:93] Provisioning new machine with config: &{Name:embed-certs-566991 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-566991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 21:07:32.346251   72678 start.go:125] createHost starting for "" (driver="kvm2")
	I1204 21:07:32.237793   71124 main.go:141] libmachine: (no-preload-534766) Calling .GetIP
	I1204 21:07:32.240739   71124 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:07:32.241241   71124 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:07:21 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:07:32.241272   71124 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:07:32.241443   71124 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1204 21:07:32.245656   71124 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
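	The bash one-liner above is how minikube pins host.minikube.internal inside the guest: it filters out any existing entry, appends the network gateway IP, and copies the rebuilt file over /etc/hosts with sudo, so the edit stays idempotent across restarts. A minimal standalone sketch of the same pattern (192.168.61.1 is simply the gateway observed in this run):

	    GATEWAY_IP=192.168.61.1                      # gateway of this profile's libvirt network
	    ENTRY="host.minikube.internal"
	    # Drop any stale line for the entry, append the fresh mapping, then install the result.
	    { grep -v $'\t'"$ENTRY"'$' /etc/hosts; printf '%s\t%s\n' "$GATEWAY_IP" "$ENTRY"; } > /tmp/hosts.$$
	    sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$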
	I1204 21:07:32.257424   71124 kubeadm.go:883] updating cluster {Name:no-preload-534766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-534766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.174 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 21:07:32.257520   71124 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 21:07:32.257571   71124 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:07:32.290226   71124 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1204 21:07:32.290250   71124 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1204 21:07:32.290302   71124 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:07:32.290340   71124 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:07:32.290363   71124 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:07:32.290384   71124 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:07:32.290403   71124 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1204 21:07:32.290526   71124 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:07:32.290556   71124 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:07:32.290522   71124 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1204 21:07:32.291965   71124 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:07:32.291974   71124 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:07:32.292041   71124 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1204 21:07:32.292054   71124 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:07:32.292051   71124 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:07:32.292202   71124 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1204 21:07:32.292377   71124 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:07:32.292978   71124 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:07:32.443658   71124 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:07:32.454420   71124 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:07:32.472092   71124 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:07:32.479946   71124 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:07:32.493775   71124 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1204 21:07:32.502953   71124 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:07:32.509238   71124 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1204 21:07:32.509281   71124 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:07:32.509329   71124 ssh_runner.go:195] Run: which crictl
	I1204 21:07:32.522330   71124 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1204 21:07:32.522377   71124 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:07:32.522427   71124 ssh_runner.go:195] Run: which crictl
	I1204 21:07:32.524563   71124 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1204 21:07:32.574223   71124 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1204 21:07:32.574282   71124 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:07:32.574340   71124 ssh_runner.go:195] Run: which crictl
	I1204 21:07:32.598135   71124 cache_images.go:116] "registry.k8s.io/pause:3.10" needs transfer: "registry.k8s.io/pause:3.10" does not exist at hash "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136" in container runtime
	I1204 21:07:32.598177   71124 cri.go:218] Removing image: registry.k8s.io/pause:3.10
	I1204 21:07:32.598223   71124 ssh_runner.go:195] Run: which crictl
	I1204 21:07:32.598243   71124 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1204 21:07:32.598276   71124 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:07:32.598321   71124 ssh_runner.go:195] Run: which crictl
	I1204 21:07:32.623468   71124 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1204 21:07:32.623511   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:07:32.623522   71124 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:07:32.623561   71124 ssh_runner.go:195] Run: which crictl
	I1204 21:07:32.623569   71124 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1204 21:07:32.623600   71124 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1204 21:07:32.623609   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:07:32.623630   71124 ssh_runner.go:195] Run: which crictl
	I1204 21:07:32.623623   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10
	I1204 21:07:32.623511   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:07:32.623657   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:07:32.720574   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:07:32.739257   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:07:32.739295   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:07:32.739344   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:07:32.739393   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1204 21:07:32.739424   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:07:32.739501   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10
	I1204 21:07:32.789308   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:07:32.876071   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10
	I1204 21:07:32.876081   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:07:32.906987   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:07:32.907023   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:07:32.907050   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:07:32.907163   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1204 21:07:32.960642   71124 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1204 21:07:32.960772   71124 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1204 21:07:32.967765   71124 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1204 21:07:32.967851   71124 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1204 21:07:32.967895   71124 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I1204 21:07:32.967986   71124 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10
	I1204 21:07:33.021320   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1204 21:07:33.027548   71124 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1204 21:07:33.027577   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:07:33.027609   71124 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1204 21:07:33.027632   71124 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1204 21:07:33.027638   71124 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.11.3: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.11.3': No such file or directory
	I1204 21:07:33.027669   71124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 --> /var/lib/minikube/images/coredns_v1.11.3 (18571264 bytes)
	I1204 21:07:33.027690   71124 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1204 21:07:33.027696   71124 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.31.2: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.31.2': No such file or directory
	I1204 21:07:33.027709   71124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 --> /var/lib/minikube/images/kube-scheduler_v1.31.2 (20112896 bytes)
	I1204 21:07:33.027749   71124 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10: stat -c "%s %y" /var/lib/minikube/images/pause_3.10: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10': No such file or directory
	I1204 21:07:33.027762   71124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 --> /var/lib/minikube/images/pause_3.10 (321024 bytes)
	I1204 21:07:33.127291   71124 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1204 21:07:33.127367   71124 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.31.2: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.31.2': No such file or directory
	I1204 21:07:33.127415   71124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 --> /var/lib/minikube/images/kube-controller-manager_v1.31.2 (26157056 bytes)
	I1204 21:07:33.127309   71124 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1204 21:07:33.127431   71124 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.31.2: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.31.2': No such file or directory
	I1204 21:07:33.127452   71124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 --> /var/lib/minikube/images/kube-apiserver_v1.31.2 (27981824 bytes)
	I1204 21:07:33.127463   71124 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1204 21:07:33.127545   71124 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1204 21:07:33.164482   71124 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10
	I1204 21:07:33.164532   71124 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10
	I1204 21:07:33.228772   71124 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:07:33.242499   71124 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.15-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.15-0': No such file or directory
	I1204 21:07:33.242550   71124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 --> /var/lib/minikube/images/etcd_3.5.15-0 (56918528 bytes)
	I1204 21:07:33.242554   71124 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.31.2: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.31.2': No such file or directory
	I1204 21:07:33.242583   71124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 --> /var/lib/minikube/images/kube-proxy_v1.31.2 (30228480 bytes)
	I1204 21:07:33.629094   71124 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 from cache
	I1204 21:07:33.629131   71124 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1204 21:07:33.629232   71124 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:07:33.629313   71124 ssh_runner.go:195] Run: which crictl
	I1204 21:07:33.693975   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:07:33.795230   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:07:33.863790   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:07:33.919193   71124 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1204 21:07:33.919266   71124 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1204 21:07:33.951248   71124 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1204 21:07:33.951356   71124 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
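	The sequence above is the cache_images path for this --no-preload profile: for each required image minikube asks the runtime whether the exact ID is already present (sudo podman image inspect), removes any mismatched tag with crictl rmi, checks for the tarball on the VM with stat, copies it over from the host cache if missing, and finally streams it into the runtime with sudo podman load. A rough shell sketch of that cycle for one image, with names and paths taken from this log (the scp target host is only a placeholder, since minikube uses its own SSH runner):

	    IMG=registry.k8s.io/kube-scheduler:v1.31.2
	    TARBALL=/var/lib/minikube/images/kube-scheduler_v1.31.2
	    CACHE=/home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	    if ! sudo podman image inspect --format '{{.Id}}' "$IMG" >/dev/null 2>&1; then
	        sudo /usr/bin/crictl rmi "$IMG" || true          # drop a stale tag if one exists
	        if ! stat -c "%s %y" "$TARBALL" >/dev/null 2>&1; then
	            scp "$CACHE" "docker@vm:$TARBALL"            # placeholder target; the log shows minikube's internal scp step
	        fi
	        sudo podman load -i "$TARBALL"                   # make the image visible to CRI-O
	    fi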
	I1204 21:07:31.982117   59256 pod_ready.go:103] pod "kube-proxy-dbc82" in "kube-system" namespace has status "Ready":"False"
	I1204 21:07:33.983468   59256 pod_ready.go:103] pod "kube-proxy-dbc82" in "kube-system" namespace has status "Ready":"False"
	I1204 21:07:36.484958   59256 pod_ready.go:103] pod "kube-proxy-dbc82" in "kube-system" namespace has status "Ready":"False"
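	The pod_ready.go lines interleaved here come from a different minikube invocation (pid 59256) that is still waiting for kube-proxy-dbc82 to report Ready. If you wanted to reproduce what it is polling by hand, the equivalent check is a single kubectl call against the pod's Ready condition (assuming your kubeconfig context points at that profile's cluster):

	    kubectl -n kube-system get pod kube-proxy-dbc82 \
	        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'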
	I1204 21:07:32.347922   72678 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 21:07:32.348035   72678 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:07:32.348072   72678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:07:32.363698   72678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40877
	I1204 21:07:32.364094   72678 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:07:32.364666   72678 main.go:141] libmachine: Using API Version  1
	I1204 21:07:32.364693   72678 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:07:32.365010   72678 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:07:32.365218   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetMachineName
	I1204 21:07:32.365377   72678 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:07:32.365522   72678 start.go:159] libmachine.API.Create for "embed-certs-566991" (driver="kvm2")
	I1204 21:07:32.365553   72678 client.go:168] LocalClient.Create starting
	I1204 21:07:32.365583   72678 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem
	I1204 21:07:32.365649   72678 main.go:141] libmachine: Decoding PEM data...
	I1204 21:07:32.365677   72678 main.go:141] libmachine: Parsing certificate...
	I1204 21:07:32.365735   72678 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem
	I1204 21:07:32.365765   72678 main.go:141] libmachine: Decoding PEM data...
	I1204 21:07:32.365786   72678 main.go:141] libmachine: Parsing certificate...
	I1204 21:07:32.365810   72678 main.go:141] libmachine: Running pre-create checks...
	I1204 21:07:32.365822   72678 main.go:141] libmachine: (embed-certs-566991) Calling .PreCreateCheck
	I1204 21:07:32.366166   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetConfigRaw
	I1204 21:07:32.366581   72678 main.go:141] libmachine: Creating machine...
	I1204 21:07:32.366597   72678 main.go:141] libmachine: (embed-certs-566991) Calling .Create
	I1204 21:07:32.366707   72678 main.go:141] libmachine: (embed-certs-566991) Creating KVM machine...
	I1204 21:07:32.367974   72678 main.go:141] libmachine: (embed-certs-566991) DBG | found existing default KVM network
	I1204 21:07:32.369822   72678 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:07:32.369671   72701 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012ffb0}
	I1204 21:07:32.369845   72678 main.go:141] libmachine: (embed-certs-566991) DBG | created network xml: 
	I1204 21:07:32.369858   72678 main.go:141] libmachine: (embed-certs-566991) DBG | <network>
	I1204 21:07:32.369870   72678 main.go:141] libmachine: (embed-certs-566991) DBG |   <name>mk-embed-certs-566991</name>
	I1204 21:07:32.369882   72678 main.go:141] libmachine: (embed-certs-566991) DBG |   <dns enable='no'/>
	I1204 21:07:32.369886   72678 main.go:141] libmachine: (embed-certs-566991) DBG |   
	I1204 21:07:32.369893   72678 main.go:141] libmachine: (embed-certs-566991) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1204 21:07:32.369898   72678 main.go:141] libmachine: (embed-certs-566991) DBG |     <dhcp>
	I1204 21:07:32.369908   72678 main.go:141] libmachine: (embed-certs-566991) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1204 21:07:32.369918   72678 main.go:141] libmachine: (embed-certs-566991) DBG |     </dhcp>
	I1204 21:07:32.369948   72678 main.go:141] libmachine: (embed-certs-566991) DBG |   </ip>
	I1204 21:07:32.369963   72678 main.go:141] libmachine: (embed-certs-566991) DBG |   
	I1204 21:07:32.369968   72678 main.go:141] libmachine: (embed-certs-566991) DBG | </network>
	I1204 21:07:32.369972   72678 main.go:141] libmachine: (embed-certs-566991) DBG | 
	I1204 21:07:32.375270   72678 main.go:141] libmachine: (embed-certs-566991) DBG | trying to create private KVM network mk-embed-certs-566991 192.168.39.0/24...
	I1204 21:07:32.448755   72678 main.go:141] libmachine: (embed-certs-566991) DBG | private KVM network mk-embed-certs-566991 192.168.39.0/24 created
	I1204 21:07:32.448810   72678 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:07:32.448694   72701 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 21:07:32.448835   72678 main.go:141] libmachine: (embed-certs-566991) Setting up store path in /home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991 ...
	I1204 21:07:32.448851   72678 main.go:141] libmachine: (embed-certs-566991) Building disk image from file:///home/jenkins/minikube-integration/19985-10581/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1204 21:07:32.448876   72678 main.go:141] libmachine: (embed-certs-566991) Downloading /home/jenkins/minikube-integration/19985-10581/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19985-10581/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1204 21:07:32.696970   72678 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:07:32.696810   72701 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa...
	I1204 21:07:32.817894   72678 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:07:32.817683   72701 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/embed-certs-566991.rawdisk...
	I1204 21:07:32.817928   72678 main.go:141] libmachine: (embed-certs-566991) DBG | Writing magic tar header
	I1204 21:07:32.817966   72678 main.go:141] libmachine: (embed-certs-566991) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991 (perms=drwx------)
	I1204 21:07:32.817986   72678 main.go:141] libmachine: (embed-certs-566991) DBG | Writing SSH key tar header
	I1204 21:07:32.817997   72678 main.go:141] libmachine: (embed-certs-566991) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube/machines (perms=drwxr-xr-x)
	I1204 21:07:32.818013   72678 main.go:141] libmachine: (embed-certs-566991) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube (perms=drwxr-xr-x)
	I1204 21:07:32.818024   72678 main.go:141] libmachine: (embed-certs-566991) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581 (perms=drwxrwxr-x)
	I1204 21:07:32.818041   72678 main.go:141] libmachine: (embed-certs-566991) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1204 21:07:32.818052   72678 main.go:141] libmachine: (embed-certs-566991) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1204 21:07:32.818062   72678 main.go:141] libmachine: (embed-certs-566991) Creating domain...
	I1204 21:07:32.818088   72678 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:07:32.817799   72701 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991 ...
	I1204 21:07:32.818108   72678 main.go:141] libmachine: (embed-certs-566991) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991
	I1204 21:07:32.818118   72678 main.go:141] libmachine: (embed-certs-566991) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube/machines
	I1204 21:07:32.818130   72678 main.go:141] libmachine: (embed-certs-566991) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 21:07:32.818142   72678 main.go:141] libmachine: (embed-certs-566991) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581
	I1204 21:07:32.818152   72678 main.go:141] libmachine: (embed-certs-566991) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1204 21:07:32.818164   72678 main.go:141] libmachine: (embed-certs-566991) DBG | Checking permissions on dir: /home/jenkins
	I1204 21:07:32.818172   72678 main.go:141] libmachine: (embed-certs-566991) DBG | Checking permissions on dir: /home
	I1204 21:07:32.818181   72678 main.go:141] libmachine: (embed-certs-566991) DBG | Skipping /home - not owner
	I1204 21:07:32.819548   72678 main.go:141] libmachine: (embed-certs-566991) define libvirt domain using xml: 
	I1204 21:07:32.819574   72678 main.go:141] libmachine: (embed-certs-566991) <domain type='kvm'>
	I1204 21:07:32.819605   72678 main.go:141] libmachine: (embed-certs-566991)   <name>embed-certs-566991</name>
	I1204 21:07:32.819629   72678 main.go:141] libmachine: (embed-certs-566991)   <memory unit='MiB'>2200</memory>
	I1204 21:07:32.819639   72678 main.go:141] libmachine: (embed-certs-566991)   <vcpu>2</vcpu>
	I1204 21:07:32.819646   72678 main.go:141] libmachine: (embed-certs-566991)   <features>
	I1204 21:07:32.819659   72678 main.go:141] libmachine: (embed-certs-566991)     <acpi/>
	I1204 21:07:32.819667   72678 main.go:141] libmachine: (embed-certs-566991)     <apic/>
	I1204 21:07:32.819675   72678 main.go:141] libmachine: (embed-certs-566991)     <pae/>
	I1204 21:07:32.819692   72678 main.go:141] libmachine: (embed-certs-566991)     
	I1204 21:07:32.819705   72678 main.go:141] libmachine: (embed-certs-566991)   </features>
	I1204 21:07:32.819713   72678 main.go:141] libmachine: (embed-certs-566991)   <cpu mode='host-passthrough'>
	I1204 21:07:32.819741   72678 main.go:141] libmachine: (embed-certs-566991)   
	I1204 21:07:32.819763   72678 main.go:141] libmachine: (embed-certs-566991)   </cpu>
	I1204 21:07:32.819775   72678 main.go:141] libmachine: (embed-certs-566991)   <os>
	I1204 21:07:32.819786   72678 main.go:141] libmachine: (embed-certs-566991)     <type>hvm</type>
	I1204 21:07:32.819796   72678 main.go:141] libmachine: (embed-certs-566991)     <boot dev='cdrom'/>
	I1204 21:07:32.819803   72678 main.go:141] libmachine: (embed-certs-566991)     <boot dev='hd'/>
	I1204 21:07:32.819815   72678 main.go:141] libmachine: (embed-certs-566991)     <bootmenu enable='no'/>
	I1204 21:07:32.819825   72678 main.go:141] libmachine: (embed-certs-566991)   </os>
	I1204 21:07:32.819834   72678 main.go:141] libmachine: (embed-certs-566991)   <devices>
	I1204 21:07:32.819846   72678 main.go:141] libmachine: (embed-certs-566991)     <disk type='file' device='cdrom'>
	I1204 21:07:32.819863   72678 main.go:141] libmachine: (embed-certs-566991)       <source file='/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/boot2docker.iso'/>
	I1204 21:07:32.819883   72678 main.go:141] libmachine: (embed-certs-566991)       <target dev='hdc' bus='scsi'/>
	I1204 21:07:32.819895   72678 main.go:141] libmachine: (embed-certs-566991)       <readonly/>
	I1204 21:07:32.819902   72678 main.go:141] libmachine: (embed-certs-566991)     </disk>
	I1204 21:07:32.819915   72678 main.go:141] libmachine: (embed-certs-566991)     <disk type='file' device='disk'>
	I1204 21:07:32.819928   72678 main.go:141] libmachine: (embed-certs-566991)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1204 21:07:32.819944   72678 main.go:141] libmachine: (embed-certs-566991)       <source file='/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/embed-certs-566991.rawdisk'/>
	I1204 21:07:32.819954   72678 main.go:141] libmachine: (embed-certs-566991)       <target dev='hda' bus='virtio'/>
	I1204 21:07:32.819971   72678 main.go:141] libmachine: (embed-certs-566991)     </disk>
	I1204 21:07:32.819988   72678 main.go:141] libmachine: (embed-certs-566991)     <interface type='network'>
	I1204 21:07:32.820010   72678 main.go:141] libmachine: (embed-certs-566991)       <source network='mk-embed-certs-566991'/>
	I1204 21:07:32.820025   72678 main.go:141] libmachine: (embed-certs-566991)       <model type='virtio'/>
	I1204 21:07:32.820037   72678 main.go:141] libmachine: (embed-certs-566991)     </interface>
	I1204 21:07:32.820045   72678 main.go:141] libmachine: (embed-certs-566991)     <interface type='network'>
	I1204 21:07:32.820055   72678 main.go:141] libmachine: (embed-certs-566991)       <source network='default'/>
	I1204 21:07:32.820065   72678 main.go:141] libmachine: (embed-certs-566991)       <model type='virtio'/>
	I1204 21:07:32.820072   72678 main.go:141] libmachine: (embed-certs-566991)     </interface>
	I1204 21:07:32.820081   72678 main.go:141] libmachine: (embed-certs-566991)     <serial type='pty'>
	I1204 21:07:32.820090   72678 main.go:141] libmachine: (embed-certs-566991)       <target port='0'/>
	I1204 21:07:32.820100   72678 main.go:141] libmachine: (embed-certs-566991)     </serial>
	I1204 21:07:32.820109   72678 main.go:141] libmachine: (embed-certs-566991)     <console type='pty'>
	I1204 21:07:32.820127   72678 main.go:141] libmachine: (embed-certs-566991)       <target type='serial' port='0'/>
	I1204 21:07:32.820135   72678 main.go:141] libmachine: (embed-certs-566991)     </console>
	I1204 21:07:32.820141   72678 main.go:141] libmachine: (embed-certs-566991)     <rng model='virtio'>
	I1204 21:07:32.820150   72678 main.go:141] libmachine: (embed-certs-566991)       <backend model='random'>/dev/random</backend>
	I1204 21:07:32.820159   72678 main.go:141] libmachine: (embed-certs-566991)     </rng>
	I1204 21:07:32.820167   72678 main.go:141] libmachine: (embed-certs-566991)     
	I1204 21:07:32.820176   72678 main.go:141] libmachine: (embed-certs-566991)     
	I1204 21:07:32.820184   72678 main.go:141] libmachine: (embed-certs-566991)   </devices>
	I1204 21:07:32.820199   72678 main.go:141] libmachine: (embed-certs-566991) </domain>
	I1204 21:07:32.820223   72678 main.go:141] libmachine: (embed-certs-566991) 
	I1204 21:07:32.824986   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:01:44:20 in network default
	I1204 21:07:32.825583   72678 main.go:141] libmachine: (embed-certs-566991) Ensuring networks are active...
	I1204 21:07:32.825605   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:32.826327   72678 main.go:141] libmachine: (embed-certs-566991) Ensuring network default is active
	I1204 21:07:32.826686   72678 main.go:141] libmachine: (embed-certs-566991) Ensuring network mk-embed-certs-566991 is active
	I1204 21:07:32.827234   72678 main.go:141] libmachine: (embed-certs-566991) Getting domain xml...
	I1204 21:07:32.827980   72678 main.go:141] libmachine: (embed-certs-566991) Creating domain...
	I1204 21:07:34.265308   72678 main.go:141] libmachine: (embed-certs-566991) Waiting to get IP...
	I1204 21:07:34.266035   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:34.266516   72678 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:07:34.266536   72678 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:07:34.266487   72701 retry.go:31] will retry after 187.54513ms: waiting for machine to come up
	I1204 21:07:34.456171   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:34.456813   72678 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:07:34.456841   72678 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:07:34.456772   72701 retry.go:31] will retry after 265.685765ms: waiting for machine to come up
	I1204 21:07:34.724233   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:34.724780   72678 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:07:34.724821   72678 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:07:34.724747   72701 retry.go:31] will retry after 454.103385ms: waiting for machine to come up
	I1204 21:07:35.180435   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:35.181028   72678 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:07:35.181060   72678 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:07:35.180955   72701 retry.go:31] will retry after 516.483472ms: waiting for machine to come up
	I1204 21:07:35.700245   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:35.700831   72678 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:07:35.700872   72678 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:07:35.700795   72701 retry.go:31] will retry after 472.973695ms: waiting for machine to come up
	I1204 21:07:36.175669   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:36.176290   72678 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:07:36.176344   72678 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:07:36.176250   72701 retry.go:31] will retry after 661.57145ms: waiting for machine to come up
	I1204 21:07:36.839157   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:36.839774   72678 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:07:36.839817   72678 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:07:36.839720   72701 retry.go:31] will retry after 1.143272503s: waiting for machine to come up
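	Everything from the <domain> XML above through these backoff retries is the kvm2 driver doing, via libvirt, what could be done by hand with virsh: define the domain from the generated XML, start it, then poll the private network's DHCP leases until the VM's MAC shows up with an address. A rough manual equivalent, assuming the generated XML had been saved to a file (network name, domain name, and MAC are taken from this log):

	    virsh define /tmp/embed-certs-566991.xml        # register the domain from the generated XML (path is illustrative)
	    virsh start embed-certs-566991
	    # The "waiting for machine to come up" retries map to polling the network's leases:
	    until virsh net-dhcp-leases mk-embed-certs-566991 | grep -q '52:54:00:98:21:6f'; do
	        sleep 2
	    done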
	I1204 21:07:35.341378   69222 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1204 21:07:35.341860   69222 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:07:35.342156   69222 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
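	This kubelet-check message means kubeadm (in the pid 69222 invocation) keeps probing the kubelet's local health endpoint on port 10248 and is getting connection refused, which is what the 40s initial timeout above refers to. The probe itself is the curl call quoted in the log, and the usual follow-up on the node is to look at the kubelet unit directly:

	    curl -sSL http://localhost:10248/healthz          # the exact call kubeadm reports as failing
	    systemctl status kubelet --no-pager               # is the unit running at all?
	    journalctl -u kubelet --no-pager | tail -n 50     # why it exited, if it did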
	I1204 21:07:36.299101   71124 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.379808646s)
	I1204 21:07:36.299138   71124 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1204 21:07:36.299173   71124 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1204 21:07:36.299107   71124 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.347725717s)
	I1204 21:07:36.299268   71124 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1204 21:07:36.299228   71124 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1204 21:07:36.299296   71124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1204 21:07:38.482454   71124 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (2.183125029s)
	I1204 21:07:38.482482   71124 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1204 21:07:38.482517   71124 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1204 21:07:38.482586   71124 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1204 21:07:38.981888   59256 pod_ready.go:103] pod "kube-proxy-dbc82" in "kube-system" namespace has status "Ready":"False"
	I1204 21:07:41.482472   59256 pod_ready.go:103] pod "kube-proxy-dbc82" in "kube-system" namespace has status "Ready":"False"
	I1204 21:07:37.984732   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:37.985320   72678 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:07:37.985354   72678 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:07:37.985275   72701 retry.go:31] will retry after 1.37596792s: waiting for machine to come up
	I1204 21:07:39.362607   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:39.363398   72678 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:07:39.363425   72678 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:07:39.363344   72701 retry.go:31] will retry after 1.78102973s: waiting for machine to come up
	I1204 21:07:41.146454   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:41.146959   72678 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:07:41.146995   72678 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:07:41.146930   72701 retry.go:31] will retry after 2.214770481s: waiting for machine to come up
	I1204 21:07:40.342053   69222 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:07:40.342315   69222 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:07:41.653736   71124 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (3.171118725s)
	I1204 21:07:41.653763   71124 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1204 21:07:41.653791   71124 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1204 21:07:41.653845   71124 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1204 21:07:43.721839   71124 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.067964037s)
	I1204 21:07:43.721873   71124 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1204 21:07:43.721904   71124 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1204 21:07:43.721964   71124 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1204 21:07:43.484758   59256 pod_ready.go:103] pod "kube-proxy-dbc82" in "kube-system" namespace has status "Ready":"False"
	I1204 21:07:45.984747   59256 pod_ready.go:103] pod "kube-proxy-dbc82" in "kube-system" namespace has status "Ready":"False"
	I1204 21:07:43.363242   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:43.363776   72678 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:07:43.363800   72678 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:07:43.363739   72701 retry.go:31] will retry after 2.236559271s: waiting for machine to come up
	I1204 21:07:45.603149   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:45.603670   72678 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:07:45.603698   72678 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:07:45.603615   72701 retry.go:31] will retry after 3.480575899s: waiting for machine to come up
	I1204 21:07:45.810467   71124 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.088473048s)
	I1204 21:07:45.810501   71124 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1204 21:07:45.810537   71124 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1204 21:07:45.810592   71124 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1204 21:07:49.313452   71124 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.502829566s)
	I1204 21:07:49.313497   71124 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1204 21:07:49.313535   71124 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1204 21:07:49.313594   71124 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1204 21:07:50.252374   71124 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1204 21:07:50.252416   71124 cache_images.go:123] Successfully loaded all cached images
	I1204 21:07:50.252423   71124 cache_images.go:92] duration metric: took 17.962155884s to LoadCachedImages
	I1204 21:07:50.252438   71124 kubeadm.go:934] updating node { 192.168.61.174 8443 v1.31.2 crio true true} ...
	I1204 21:07:50.252543   71124 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-534766 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-534766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 21:07:50.252621   71124 ssh_runner.go:195] Run: crio config
	I1204 21:07:50.299202   71124 cni.go:84] Creating CNI manager for ""
	I1204 21:07:50.299226   71124 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:07:50.299239   71124 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 21:07:50.299272   71124 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.174 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-534766 NodeName:no-preload-534766 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 21:07:50.299436   71124 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-534766"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.174"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.174"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1204 21:07:50.299503   71124 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 21:07:50.309119   71124 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1204 21:07:50.309177   71124 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1204 21:07:50.317882   71124 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1204 21:07:50.317887   71124 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1204 21:07:50.317887   71124 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1204 21:07:50.317933   71124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:07:50.317964   71124 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1204 21:07:50.318012   71124 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1204 21:07:50.326955   71124 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1204 21:07:50.326978   71124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1204 21:07:50.326986   71124 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1204 21:07:50.327000   71124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1204 21:07:50.346249   71124 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1204 21:07:50.374215   71124 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1204 21:07:50.374268   71124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1204 21:07:48.482881   59256 pod_ready.go:103] pod "kube-proxy-dbc82" in "kube-system" namespace has status "Ready":"False"
	I1204 21:07:50.483797   59256 pod_ready.go:103] pod "kube-proxy-dbc82" in "kube-system" namespace has status "Ready":"False"
	I1204 21:07:49.086984   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:49.087480   72678 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:07:49.087515   72678 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:07:49.087455   72701 retry.go:31] will retry after 4.339629661s: waiting for machine to come up
	I1204 21:07:50.340993   69222 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:07:50.341244   69222 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:07:51.013398   71124 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 21:07:51.022174   71124 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1204 21:07:51.037247   71124 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 21:07:51.051846   71124 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1204 21:07:51.066590   71124 ssh_runner.go:195] Run: grep 192.168.61.174	control-plane.minikube.internal$ /etc/hosts
	I1204 21:07:51.070007   71124 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:07:51.080925   71124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:07:51.208176   71124 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:07:51.226640   71124 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766 for IP: 192.168.61.174
	I1204 21:07:51.226665   71124 certs.go:194] generating shared ca certs ...
	I1204 21:07:51.226686   71124 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:07:51.226862   71124 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 21:07:51.226930   71124 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 21:07:51.226945   71124 certs.go:256] generating profile certs ...
	I1204 21:07:51.227027   71124 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/client.key
	I1204 21:07:51.227046   71124 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/client.crt with IP's: []
	I1204 21:07:51.479401   71124 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/client.crt ...
	I1204 21:07:51.479446   71124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/client.crt: {Name:mkbe81a31e9e5da764b1cfb4f53ad7c67fd65db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:07:51.479652   71124 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/client.key ...
	I1204 21:07:51.479667   71124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/client.key: {Name:mk7319044e22c497ff66a13a403c8664c77accf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:07:51.479778   71124 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/apiserver.key.dbe51058
	I1204 21:07:51.479801   71124 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/apiserver.crt.dbe51058 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.174]
	I1204 21:07:51.661031   71124 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/apiserver.crt.dbe51058 ...
	I1204 21:07:51.661058   71124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/apiserver.crt.dbe51058: {Name:mk5798be682082b116b588db848be2f29f6dbb0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:07:51.661250   71124 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/apiserver.key.dbe51058 ...
	I1204 21:07:51.661277   71124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/apiserver.key.dbe51058: {Name:mka586f5662a3d6038b77f39e21ce05c3b5155a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:07:51.661391   71124 certs.go:381] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/apiserver.crt.dbe51058 -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/apiserver.crt
	I1204 21:07:51.661525   71124 certs.go:385] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/apiserver.key.dbe51058 -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/apiserver.key
	I1204 21:07:51.661615   71124 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/proxy-client.key
	I1204 21:07:51.661636   71124 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/proxy-client.crt with IP's: []
	I1204 21:07:51.755745   71124 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/proxy-client.crt ...
	I1204 21:07:51.755769   71124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/proxy-client.crt: {Name:mk225e8ac907a0feed5a13a1e17fde3e1f0bb7d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:07:51.755930   71124 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/proxy-client.key ...
	I1204 21:07:51.755952   71124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/proxy-client.key: {Name:mk15b6dfbb39e32e2ef3a927c680b599c043b8b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:07:51.756162   71124 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 21:07:51.756215   71124 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 21:07:51.756230   71124 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 21:07:51.756259   71124 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 21:07:51.756286   71124 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 21:07:51.756328   71124 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 21:07:51.756379   71124 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:07:51.756983   71124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 21:07:51.780146   71124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 21:07:51.803093   71124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 21:07:51.824025   71124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 21:07:51.844884   71124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1204 21:07:51.865816   71124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1204 21:07:51.886448   71124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 21:07:51.909749   71124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 21:07:51.939145   71124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 21:07:51.960052   71124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 21:07:51.981998   71124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 21:07:52.003624   71124 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 21:07:52.018645   71124 ssh_runner.go:195] Run: openssl version
	I1204 21:07:52.024109   71124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 21:07:52.033956   71124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 21:07:52.038114   71124 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 21:07:52.038171   71124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 21:07:52.043574   71124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 21:07:52.055000   71124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 21:07:52.066073   71124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:07:52.070271   71124 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:07:52.070319   71124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:07:52.075616   71124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 21:07:52.086686   71124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 21:07:52.097655   71124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 21:07:52.101844   71124 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 21:07:52.101888   71124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 21:07:52.107298   71124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 21:07:52.118243   71124 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 21:07:52.122043   71124 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1204 21:07:52.122093   71124 kubeadm.go:392] StartCluster: {Name:no-preload-534766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-534766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.174 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:07:52.122176   71124 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 21:07:52.122215   71124 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:07:52.160870   71124 cri.go:89] found id: ""
	I1204 21:07:52.160939   71124 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 21:07:52.171644   71124 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:07:52.181884   71124 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:07:52.191851   71124 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:07:52.191868   71124 kubeadm.go:157] found existing configuration files:
	
	I1204 21:07:52.191902   71124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:07:52.201357   71124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:07:52.201394   71124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:07:52.211036   71124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:07:52.220393   71124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:07:52.220428   71124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:07:52.230114   71124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:07:52.239405   71124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:07:52.239450   71124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:07:52.250007   71124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:07:52.260371   71124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:07:52.260407   71124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:07:52.271483   71124 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 21:07:52.434457   71124 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 21:07:52.982843   59256 pod_ready.go:103] pod "kube-proxy-dbc82" in "kube-system" namespace has status "Ready":"False"
	I1204 21:07:55.481525   59256 pod_ready.go:103] pod "kube-proxy-dbc82" in "kube-system" namespace has status "Ready":"False"
	I1204 21:07:53.428465   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:53.429016   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has current primary IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:53.429042   72678 main.go:141] libmachine: (embed-certs-566991) Found IP for machine: 192.168.39.82
	I1204 21:07:53.429055   72678 main.go:141] libmachine: (embed-certs-566991) Reserving static IP address...
	I1204 21:07:53.429402   72678 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find host DHCP lease matching {name: "embed-certs-566991", mac: "52:54:00:98:21:6f", ip: "192.168.39.82"} in network mk-embed-certs-566991
	I1204 21:07:53.505112   72678 main.go:141] libmachine: (embed-certs-566991) DBG | Getting to WaitForSSH function...
	I1204 21:07:53.505140   72678 main.go:141] libmachine: (embed-certs-566991) Reserved static IP address: 192.168.39.82
	I1204 21:07:53.505151   72678 main.go:141] libmachine: (embed-certs-566991) Waiting for SSH to be available...
	I1204 21:07:53.507624   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:53.507962   72678 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:07:47 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:minikube Clientid:01:52:54:00:98:21:6f}
	I1204 21:07:53.507992   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:53.508260   72678 main.go:141] libmachine: (embed-certs-566991) DBG | Using SSH client type: external
	I1204 21:07:53.508286   72678 main.go:141] libmachine: (embed-certs-566991) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa (-rw-------)
	I1204 21:07:53.508325   72678 main.go:141] libmachine: (embed-certs-566991) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.82 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 21:07:53.508337   72678 main.go:141] libmachine: (embed-certs-566991) DBG | About to run SSH command:
	I1204 21:07:53.508364   72678 main.go:141] libmachine: (embed-certs-566991) DBG | exit 0
	I1204 21:07:53.635681   72678 main.go:141] libmachine: (embed-certs-566991) DBG | SSH cmd err, output: <nil>: 
	I1204 21:07:53.635958   72678 main.go:141] libmachine: (embed-certs-566991) KVM machine creation complete!
	I1204 21:07:53.636287   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetConfigRaw
	I1204 21:07:53.636838   72678 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:07:53.637018   72678 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:07:53.637179   72678 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1204 21:07:53.637194   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetState
	I1204 21:07:53.638668   72678 main.go:141] libmachine: Detecting operating system of created instance...
	I1204 21:07:53.638679   72678 main.go:141] libmachine: Waiting for SSH to be available...
	I1204 21:07:53.638687   72678 main.go:141] libmachine: Getting to WaitForSSH function...
	I1204 21:07:53.638693   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:07:53.640789   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:53.641137   72678 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:07:47 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:07:53.641169   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:53.641265   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:07:53.641442   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:07:53.641603   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:07:53.641748   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:07:53.641905   72678 main.go:141] libmachine: Using SSH client type: native
	I1204 21:07:53.642141   72678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1204 21:07:53.642165   72678 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1204 21:07:53.746627   72678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 21:07:53.746656   72678 main.go:141] libmachine: Detecting the provisioner...
	I1204 21:07:53.746667   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:07:53.749446   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:53.749830   72678 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:07:47 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:07:53.749859   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:53.750001   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:07:53.750239   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:07:53.750389   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:07:53.750540   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:07:53.750705   72678 main.go:141] libmachine: Using SSH client type: native
	I1204 21:07:53.750914   72678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1204 21:07:53.750931   72678 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1204 21:07:53.859467   72678 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1204 21:07:53.859536   72678 main.go:141] libmachine: found compatible host: buildroot
	I1204 21:07:53.859544   72678 main.go:141] libmachine: Provisioning with buildroot...
	I1204 21:07:53.859554   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetMachineName
	I1204 21:07:53.859819   72678 buildroot.go:166] provisioning hostname "embed-certs-566991"
	I1204 21:07:53.859848   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetMachineName
	I1204 21:07:53.860030   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:07:53.862726   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:53.863128   72678 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:07:47 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:07:53.863156   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:53.863305   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:07:53.863489   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:07:53.863645   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:07:53.863762   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:07:53.863911   72678 main.go:141] libmachine: Using SSH client type: native
	I1204 21:07:53.864114   72678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1204 21:07:53.864128   72678 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-566991 && echo "embed-certs-566991" | sudo tee /etc/hostname
	I1204 21:07:53.986418   72678 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-566991
	
	I1204 21:07:53.986449   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:07:53.989296   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:53.989680   72678 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:07:47 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:07:53.989710   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:53.989838   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:07:53.990019   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:07:53.990193   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:07:53.990355   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:07:53.990505   72678 main.go:141] libmachine: Using SSH client type: native
	I1204 21:07:53.990660   72678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1204 21:07:53.990676   72678 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-566991' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-566991/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-566991' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 21:07:54.108629   72678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 21:07:54.108658   72678 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 21:07:54.108683   72678 buildroot.go:174] setting up certificates
	I1204 21:07:54.108695   72678 provision.go:84] configureAuth start
	I1204 21:07:54.108709   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetMachineName
	I1204 21:07:54.108987   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetIP
	I1204 21:07:54.111741   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:54.112066   72678 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:07:47 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:07:54.112093   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:54.112254   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:07:54.114385   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:54.114714   72678 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:07:47 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:07:54.114740   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:54.114841   72678 provision.go:143] copyHostCerts
	I1204 21:07:54.114915   72678 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 21:07:54.114927   72678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 21:07:54.115002   72678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 21:07:54.115109   72678 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 21:07:54.115122   72678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 21:07:54.115151   72678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 21:07:54.115222   72678 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 21:07:54.115232   72678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 21:07:54.115256   72678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 21:07:54.115342   72678 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.embed-certs-566991 san=[127.0.0.1 192.168.39.82 embed-certs-566991 localhost minikube]
	I1204 21:07:54.638161   72678 provision.go:177] copyRemoteCerts
	I1204 21:07:54.638244   72678 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 21:07:54.638270   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:07:54.641133   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:54.641549   72678 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:07:47 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:07:54.641586   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:54.641874   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:07:54.642154   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:07:54.642373   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:07:54.642571   72678 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:07:54.734022   72678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 21:07:54.757616   72678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1204 21:07:54.783521   72678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 21:07:54.806397   72678 provision.go:87] duration metric: took 697.687725ms to configureAuth
	I1204 21:07:54.806421   72678 buildroot.go:189] setting minikube options for container-runtime
	I1204 21:07:54.806568   72678 config.go:182] Loaded profile config "embed-certs-566991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:07:54.806674   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:07:54.809088   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:54.809457   72678 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:07:47 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:07:54.809492   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:54.809670   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:07:54.809894   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:07:54.810063   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:07:54.810228   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:07:54.810371   72678 main.go:141] libmachine: Using SSH client type: native
	I1204 21:07:54.810569   72678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1204 21:07:54.810590   72678 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 21:07:55.032455   72678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 21:07:55.032484   72678 main.go:141] libmachine: Checking connection to Docker...
	I1204 21:07:55.032492   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetURL
	I1204 21:07:55.033731   72678 main.go:141] libmachine: (embed-certs-566991) DBG | Using libvirt version 6000000
	I1204 21:07:55.035963   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:55.036432   72678 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:07:47 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:07:55.036464   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:55.036655   72678 main.go:141] libmachine: Docker is up and running!
	I1204 21:07:55.036672   72678 main.go:141] libmachine: Reticulating splines...
	I1204 21:07:55.036680   72678 client.go:171] duration metric: took 22.671119314s to LocalClient.Create
	I1204 21:07:55.036708   72678 start.go:167] duration metric: took 22.671187588s to libmachine.API.Create "embed-certs-566991"
	I1204 21:07:55.036720   72678 start.go:293] postStartSetup for "embed-certs-566991" (driver="kvm2")
	I1204 21:07:55.036734   72678 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 21:07:55.036754   72678 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:07:55.036973   72678 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 21:07:55.037004   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:07:55.039423   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:55.039802   72678 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:07:47 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:07:55.039828   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:55.039972   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:07:55.040157   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:07:55.040351   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:07:55.040492   72678 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:07:55.128469   72678 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 21:07:55.132396   72678 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 21:07:55.132420   72678 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 21:07:55.132489   72678 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 21:07:55.132579   72678 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 21:07:55.132677   72678 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 21:07:55.143703   72678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:07:55.168173   72678 start.go:296] duration metric: took 131.441291ms for postStartSetup
	I1204 21:07:55.168218   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetConfigRaw
	I1204 21:07:55.168871   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetIP
	I1204 21:07:55.171423   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:55.171792   72678 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:07:47 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:07:55.171820   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:55.172044   72678 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/config.json ...
	I1204 21:07:55.172195   72678 start.go:128] duration metric: took 22.825936049s to createHost
	I1204 21:07:55.172220   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:07:55.174299   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:55.174600   72678 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:07:47 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:07:55.174630   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:55.174765   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:07:55.174931   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:07:55.175076   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:07:55.175212   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:07:55.175391   72678 main.go:141] libmachine: Using SSH client type: native
	I1204 21:07:55.175560   72678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1204 21:07:55.175573   72678 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 21:07:55.284235   72678 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733346475.253087952
	
	I1204 21:07:55.284263   72678 fix.go:216] guest clock: 1733346475.253087952
	I1204 21:07:55.284271   72678 fix.go:229] Guest: 2024-12-04 21:07:55.253087952 +0000 UTC Remote: 2024-12-04 21:07:55.172208227 +0000 UTC m=+22.938792705 (delta=80.879725ms)
	I1204 21:07:55.284307   72678 fix.go:200] guest clock delta is within tolerance: 80.879725ms
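The fix.go lines above compare the guest VM clock against the host-side timestamp and accept the run because the drift is small ("delta=80.879725ms ... within tolerance"). A minimal Go sketch of that kind of comparison, using the values from this log; the one-second tolerance used below is an assumption for illustration, not a value taken from minikube's source.

package main

import (
	"fmt"
	"time"
)

// clockDelta returns the absolute difference between the guest and host clocks.
func clockDelta(guest, host time.Time) time.Duration {
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	guest := time.Unix(1733346475, 253087952)      // "guest clock" value from the log above
	host := guest.Add(-80879725 * time.Nanosecond) // reproduces the ~80.88ms delta reported above
	tolerance := time.Second                       // hypothetical tolerance, illustration only
	d := clockDelta(guest, host)
	fmt.Printf("delta=%v within tolerance=%v\n", d, d <= tolerance)
}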
	I1204 21:07:55.284313   72678 start.go:83] releasing machines lock for "embed-certs-566991", held for 22.93815022s
	I1204 21:07:55.284331   72678 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:07:55.284585   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetIP
	I1204 21:07:55.287505   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:55.287883   72678 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:07:47 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:07:55.287912   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:55.288072   72678 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:07:55.288611   72678 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:07:55.288792   72678 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:07:55.288885   72678 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 21:07:55.288927   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:07:55.289059   72678 ssh_runner.go:195] Run: cat /version.json
	I1204 21:07:55.289087   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:07:55.291824   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:55.291985   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:55.292289   72678 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:07:47 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:07:55.292313   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:55.292489   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:07:55.292492   72678 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:07:47 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:07:55.292533   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:55.292636   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:07:55.292717   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:07:55.292787   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:07:55.292853   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:07:55.292913   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:07:55.292978   72678 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:07:55.293062   72678 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:07:55.396099   72678 ssh_runner.go:195] Run: systemctl --version
	I1204 21:07:55.402972   72678 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 21:07:55.562658   72678 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 21:07:55.568474   72678 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 21:07:55.568540   72678 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 21:07:55.584647   72678 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 21:07:55.584673   72678 start.go:495] detecting cgroup driver to use...
	I1204 21:07:55.584740   72678 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 21:07:55.600252   72678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 21:07:55.613743   72678 docker.go:217] disabling cri-docker service (if available) ...
	I1204 21:07:55.613786   72678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 21:07:55.626509   72678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 21:07:55.639200   72678 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 21:07:55.778462   72678 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 21:07:55.945455   72678 docker.go:233] disabling docker service ...
	I1204 21:07:55.945573   72678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 21:07:55.961181   72678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 21:07:55.975238   72678 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 21:07:56.093217   72678 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 21:07:56.199474   72678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 21:07:56.213257   72678 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 21:07:56.229865   72678 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 21:07:56.229928   72678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:07:56.239401   72678 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 21:07:56.239471   72678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:07:56.248741   72678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:07:56.257662   72678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:07:56.266909   72678 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 21:07:56.276574   72678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:07:56.286881   72678 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:07:56.302337   72678 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
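Taken together, the crictl.yaml write and the sed edits above leave the node with a CRI endpoint file and a CRI-O drop-in whose effective settings can be read directly off the logged commands: the pause image, the cgroupfs cgroup manager, conmon_cgroup "pod", and the unprivileged-port sysctl. The Go sketch below only writes equivalent files locally as an illustration; minikube itself applies these over SSH with tee/sed, and the real /etc/crio/crio.conf.d/02-crio.conf contains more than the lines shown here.

package main

import (
	"log"
	"os"
)

// Contents reconstructed from the tee/sed commands logged above.
const crictlYAML = "runtime-endpoint: unix:///var/run/crio/crio.sock\n"

const crioDropIn = `pause_image = "registry.k8s.io/pause:3.10"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`

func main() {
	// Write to the working directory instead of /etc so the sketch runs unprivileged.
	if err := os.WriteFile("crictl.yaml", []byte(crictlYAML), 0o644); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("02-crio.conf", []byte(crioDropIn), 0o644); err != nil {
		log.Fatal(err)
	}
}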
	I1204 21:07:56.311648   72678 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 21:07:56.320482   72678 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 21:07:56.320533   72678 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 21:07:56.331964   72678 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 21:07:56.340439   72678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:07:56.458189   72678 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 21:07:56.544431   72678 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 21:07:56.544512   72678 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 21:07:56.548969   72678 start.go:563] Will wait 60s for crictl version
	I1204 21:07:56.549030   72678 ssh_runner.go:195] Run: which crictl
	I1204 21:07:56.552420   72678 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 21:07:56.596847   72678 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 21:07:56.596931   72678 ssh_runner.go:195] Run: crio --version
	I1204 21:07:56.623175   72678 ssh_runner.go:195] Run: crio --version
	I1204 21:07:56.653217   72678 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 21:07:56.654549   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetIP
	I1204 21:07:56.657357   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:56.657800   72678 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:07:47 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:07:56.657830   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:56.658045   72678 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1204 21:07:56.662600   72678 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:07:56.675067   72678 kubeadm.go:883] updating cluster {Name:embed-certs-566991 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-566991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.82 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 21:07:56.675194   72678 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 21:07:56.675243   72678 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:07:56.705386   72678 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1204 21:07:56.705456   72678 ssh_runner.go:195] Run: which lz4
	I1204 21:07:56.708972   72678 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1204 21:07:56.712557   72678 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1204 21:07:56.712589   72678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1204 21:07:57.486051   59256 pod_ready.go:103] pod "kube-proxy-dbc82" in "kube-system" namespace has status "Ready":"False"
	I1204 21:07:59.982920   59256 pod_ready.go:103] pod "kube-proxy-dbc82" in "kube-system" namespace has status "Ready":"False"
	I1204 21:07:58.031687   72678 crio.go:462] duration metric: took 1.322755748s to copy over tarball
	I1204 21:07:58.031816   72678 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1204 21:08:00.221821   72678 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.189973077s)
	I1204 21:08:00.221852   72678 crio.go:469] duration metric: took 2.190088595s to extract the tarball
	I1204 21:08:00.221861   72678 ssh_runner.go:146] rm: /preloaded.tar.lz4
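The preload step above first stats /preloaded.tar.lz4 on the node (absent), copies the ~392 MB cached tarball over, unpacks it into /var with lz4-compressed tar, and then deletes it. Below is a Go sketch of the extract step only, invoking the same command shown in the log via os/exec; paths are the ones from this run, and actually running it would require root plus an lz4 binary on the target.

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Mirrors the extraction command from the log:
	//   sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("extracting preload tarball: %v", err)
	}
}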
	I1204 21:08:00.258969   72678 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:08:00.309369   72678 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 21:08:00.309392   72678 cache_images.go:84] Images are preloaded, skipping loading
	I1204 21:08:00.309401   72678 kubeadm.go:934] updating node { 192.168.39.82 8443 v1.31.2 crio true true} ...
	I1204 21:08:00.309506   72678 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-566991 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.82
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-566991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 21:08:00.309583   72678 ssh_runner.go:195] Run: crio config
	I1204 21:08:00.364859   72678 cni.go:84] Creating CNI manager for ""
	I1204 21:08:00.364884   72678 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:08:00.364894   72678 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 21:08:00.364913   72678 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.82 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-566991 NodeName:embed-certs-566991 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.82"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.82 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 21:08:00.365045   72678 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.82
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-566991"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.82"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.82"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1204 21:08:00.365107   72678 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 21:08:00.378637   72678 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 21:08:00.378701   72678 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 21:08:00.390059   72678 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1204 21:08:00.407238   72678 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 21:08:00.425098   72678 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
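The kubeadm.go:189/195 lines above show minikube expanding its option struct into the multi-document kubeadm config that was just copied to /var/tmp/minikube/kubeadm.yaml.new. Below is a rough text/template sketch of that kind of expansion, covering only a few of the fields visible in the log (advertise address, node name, Kubernetes version); the struct and template fragment are illustrative assumptions, not minikube's actual template, and the real config is the full dump printed above.

package main

import (
	"log"
	"os"
	"text/template"
)

// A tiny subset of the options visible in the kubeadm.go:189 line above.
type kubeadmOpts struct {
	AdvertiseAddress  string
	NodeName          string
	KubernetesVersion string
}

// Illustrative template fragment; the real config has many more fields.
const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: 8443
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
`

func main() {
	opts := kubeadmOpts{
		AdvertiseAddress:  "192.168.39.82",
		NodeName:          "embed-certs-566991",
		KubernetesVersion: "v1.31.2",
	}
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, opts); err != nil {
		log.Fatal(err)
	}
}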
	I1204 21:08:00.440975   72678 ssh_runner.go:195] Run: grep 192.168.39.82	control-plane.minikube.internal$ /etc/hosts
	I1204 21:08:00.444968   72678 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.82	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:08:00.458103   72678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:08:00.593263   72678 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:08:00.610519   72678 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991 for IP: 192.168.39.82
	I1204 21:08:00.610605   72678 certs.go:194] generating shared ca certs ...
	I1204 21:08:00.610636   72678 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:08:00.610828   72678 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 21:08:00.610907   72678 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 21:08:00.610925   72678 certs.go:256] generating profile certs ...
	I1204 21:08:00.611007   72678 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/client.key
	I1204 21:08:00.611034   72678 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/client.crt with IP's: []
	I1204 21:08:00.829426   72678 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/client.crt ...
	I1204 21:08:00.829463   72678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/client.crt: {Name:mkcde9722eb617fad816565ed778f23f201f6fba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:08:00.829687   72678 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/client.key ...
	I1204 21:08:00.829710   72678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/client.key: {Name:mked532e86df206ae013a08249dd6d7514903c59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:08:00.829852   72678 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/apiserver.key.ba71006c
	I1204 21:08:00.829869   72678 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/apiserver.crt.ba71006c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.82]
	I1204 21:08:01.094625   72678 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/apiserver.crt.ba71006c ...
	I1204 21:08:01.094659   72678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/apiserver.crt.ba71006c: {Name:mk368ae9053be3a68b8c5ccbbe266243b17fe381 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:08:01.094859   72678 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/apiserver.key.ba71006c ...
	I1204 21:08:01.094872   72678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/apiserver.key.ba71006c: {Name:mk98e5dedb9e0242cb590a2906638cd85caab4bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:08:01.094948   72678 certs.go:381] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/apiserver.crt.ba71006c -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/apiserver.crt
	I1204 21:08:01.095016   72678 certs.go:385] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/apiserver.key.ba71006c -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/apiserver.key
	I1204 21:08:01.095067   72678 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/proxy-client.key
	I1204 21:08:01.095082   72678 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/proxy-client.crt with IP's: []
	I1204 21:08:01.406626   72678 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/proxy-client.crt ...
	I1204 21:08:01.406654   72678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/proxy-client.crt: {Name:mk45bc05002f35125be02906e3a69c60af6aa69f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:08:01.406807   72678 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/proxy-client.key ...
	I1204 21:08:01.406819   72678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/proxy-client.key: {Name:mkedcc3254b23883b0b9169eed87c2dd55a2f463 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:08:01.406997   72678 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 21:08:01.407035   72678 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 21:08:01.407042   72678 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 21:08:01.407065   72678 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 21:08:01.407088   72678 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 21:08:01.407109   72678 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 21:08:01.407172   72678 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:08:01.407742   72678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 21:08:01.436386   72678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 21:08:01.483134   72678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 21:08:01.514729   72678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 21:08:01.537462   72678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1204 21:08:01.559233   72678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 21:08:01.581383   72678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 21:08:01.603979   72678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 21:08:01.625787   72678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 21:08:01.648024   72678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 21:08:01.669938   72678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 21:08:01.691584   72678 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 21:08:01.706149   72678 ssh_runner.go:195] Run: openssl version
	I1204 21:08:01.711595   72678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 21:08:01.721106   72678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:08:01.725176   72678 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:08:01.725236   72678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:08:01.730590   72678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 21:08:01.740950   72678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 21:08:01.751485   72678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 21:08:01.755757   72678 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 21:08:01.755813   72678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 21:08:01.761084   72678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 21:08:01.771275   72678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 21:08:01.781543   72678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 21:08:01.785774   72678 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 21:08:01.785844   72678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 21:08:01.791134   72678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
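Each of the three blocks above follows the same pattern: place a CA file under /usr/share/ca-certificates, ask openssl for its subject hash, and symlink /etc/ssl/certs/<hash>.0 to it so OpenSSL-based clients find it. A condensed Go sketch of that pattern, shelling out to openssl exactly as the log does; the filenames are the ones from this run, and linking straight to the cert (rather than via the /etc/ssl/certs copy, as the logged ln -fs chain does) is a simplification.

package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

// installCACert links /etc/ssl/certs/<subject-hash>.0 at the given cert,
// condensing the openssl / "ln -fs" sequence shown in the log above.
func installCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // equivalent to the -f in "ln -fs"
	return os.Symlink(certPath, link)
}

func main() {
	for _, cert := range []string{
		"/usr/share/ca-certificates/minikubeCA.pem",
		"/usr/share/ca-certificates/17743.pem",
		"/usr/share/ca-certificates/177432.pem",
	} {
		if err := installCACert(cert); err != nil {
			log.Printf("installing %s: %v", cert, err)
		}
	}
}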
	I1204 21:08:01.801528   72678 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 21:08:01.805262   72678 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1204 21:08:01.805340   72678 kubeadm.go:392] StartCluster: {Name:embed-certs-566991 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-566991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.82 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:08:01.805421   72678 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 21:08:01.805468   72678 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:08:01.842287   72678 cri.go:89] found id: ""
	I1204 21:08:01.842356   72678 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 21:08:01.852300   72678 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:08:01.862647   72678 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:08:01.872102   72678 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:08:01.872126   72678 kubeadm.go:157] found existing configuration files:
	
	I1204 21:08:01.872172   72678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:08:01.880788   72678 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:08:01.880840   72678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:08:01.890064   72678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:08:01.899108   72678 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:08:01.899166   72678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:08:01.908183   72678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:08:01.917796   72678 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:08:01.917866   72678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:08:01.927037   72678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:08:01.936099   72678 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:08:01.936163   72678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
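The kubeadm.go:155/163 lines above implement a small stale-config cleanup: for each of the four kubeconfig-style files, grep for the expected https://control-plane.minikube.internal:8443 endpoint and remove the file when the endpoint is not there (in this run the files simply do not exist yet, so nothing is kept). A sketch of the same check done locally in Go; the file list and endpoint string are taken from the log.

package main

import (
	"log"
	"os"
	"strings"
)

const endpoint = "https://control-plane.minikube.internal:8443"

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or stale endpoint: remove it so kubeadm regenerates it.
			if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
				log.Printf("removing %s: %v", f, rmErr)
			}
		}
	}
}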
	I1204 21:08:01.944827   72678 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 21:08:02.152043   72678 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 21:08:05.806658   71124 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1204 21:08:05.806723   71124 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 21:08:05.806808   71124 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 21:08:05.806957   71124 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 21:08:05.807053   71124 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1204 21:08:05.807107   71124 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 21:08:05.808766   71124 out.go:235]   - Generating certificates and keys ...
	I1204 21:08:05.808862   71124 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 21:08:05.808951   71124 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 21:08:05.809047   71124 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1204 21:08:05.809148   71124 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1204 21:08:05.809259   71124 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1204 21:08:05.809337   71124 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1204 21:08:05.809416   71124 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1204 21:08:05.809574   71124 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-534766] and IPs [192.168.61.174 127.0.0.1 ::1]
	I1204 21:08:05.809627   71124 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1204 21:08:05.809762   71124 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-534766] and IPs [192.168.61.174 127.0.0.1 ::1]
	I1204 21:08:05.809842   71124 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1204 21:08:05.809897   71124 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1204 21:08:05.809936   71124 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1204 21:08:05.809984   71124 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 21:08:05.810027   71124 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 21:08:05.810080   71124 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1204 21:08:05.810150   71124 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 21:08:05.810241   71124 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 21:08:05.810321   71124 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 21:08:05.810432   71124 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 21:08:05.810515   71124 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 21:08:05.812020   71124 out.go:235]   - Booting up control plane ...
	I1204 21:08:05.812111   71124 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 21:08:05.812217   71124 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 21:08:05.812310   71124 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 21:08:05.812417   71124 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 21:08:05.812491   71124 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 21:08:05.812530   71124 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 21:08:05.812637   71124 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1204 21:08:05.812751   71124 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1204 21:08:05.812812   71124 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002328443s
	I1204 21:08:05.812877   71124 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1204 21:08:05.812957   71124 kubeadm.go:310] [api-check] The API server is healthy after 7.501948033s
	I1204 21:08:05.813084   71124 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1204 21:08:05.813249   71124 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1204 21:08:05.813340   71124 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1204 21:08:05.813518   71124 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-534766 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1204 21:08:05.813572   71124 kubeadm.go:310] [bootstrap-token] Using token: 5sdsjw.bvoxkeqlpemcqy5p
	I1204 21:08:05.814883   71124 out.go:235]   - Configuring RBAC rules ...
	I1204 21:08:05.814980   71124 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1204 21:08:05.815065   71124 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1204 21:08:05.815192   71124 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1204 21:08:05.815317   71124 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1204 21:08:05.815443   71124 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1204 21:08:05.815526   71124 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1204 21:08:05.815636   71124 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1204 21:08:05.815700   71124 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1204 21:08:05.815748   71124 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1204 21:08:05.815754   71124 kubeadm.go:310] 
	I1204 21:08:05.815812   71124 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1204 21:08:05.815821   71124 kubeadm.go:310] 
	I1204 21:08:05.815913   71124 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1204 21:08:05.815929   71124 kubeadm.go:310] 
	I1204 21:08:05.815969   71124 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1204 21:08:05.816046   71124 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1204 21:08:05.816118   71124 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1204 21:08:05.816144   71124 kubeadm.go:310] 
	I1204 21:08:05.816205   71124 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1204 21:08:05.816213   71124 kubeadm.go:310] 
	I1204 21:08:05.816264   71124 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1204 21:08:05.816271   71124 kubeadm.go:310] 
	I1204 21:08:05.816333   71124 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1204 21:08:05.816431   71124 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1204 21:08:05.816525   71124 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1204 21:08:05.816534   71124 kubeadm.go:310] 
	I1204 21:08:05.816631   71124 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1204 21:08:05.816741   71124 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1204 21:08:05.816755   71124 kubeadm.go:310] 
	I1204 21:08:05.816873   71124 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5sdsjw.bvoxkeqlpemcqy5p \
	I1204 21:08:05.817016   71124 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 \
	I1204 21:08:05.817049   71124 kubeadm.go:310] 	--control-plane 
	I1204 21:08:05.817058   71124 kubeadm.go:310] 
	I1204 21:08:05.817153   71124 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1204 21:08:05.817161   71124 kubeadm.go:310] 
	I1204 21:08:05.817237   71124 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5sdsjw.bvoxkeqlpemcqy5p \
	I1204 21:08:05.817333   71124 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 
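Within the kubeadm output above, the [kubelet-check] and [api-check] phases poll http://127.0.0.1:10248/healthz (and the API server's health endpoint) until they answer, which is why the log can report "healthy after 1.002328443s" and "7.501948033s". A minimal Go sketch of that style of polling against the kubelet healthz URL from the log; the retry interval and timeout below are arbitrary illustration values, not kubeadm's.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns HTTP 200 or the timeout expires,
// returning how long the wait took.
func waitHealthy(url string, interval, timeout time.Duration) (time.Duration, error) {
	start := time.Now()
	deadline := start.Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return time.Since(start), nil
			}
		}
		time.Sleep(interval)
	}
	return time.Since(start), fmt.Errorf("%s not healthy after %v", url, timeout)
}

func main() {
	elapsed, err := waitHealthy("http://127.0.0.1:10248/healthz", 500*time.Millisecond, 4*time.Minute)
	fmt.Println(elapsed, err)
}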
	I1204 21:08:05.817343   71124 cni.go:84] Creating CNI manager for ""
	I1204 21:08:05.817348   71124 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:08:05.818866   71124 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 21:08:02.307695   59256 pod_ready.go:103] pod "kube-proxy-dbc82" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:04.483754   59256 pod_ready.go:103] pod "kube-proxy-dbc82" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:05.820114   71124 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 21:08:05.830079   71124 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1204 21:08:05.852330   71124 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 21:08:05.852413   71124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:05.852508   71124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-534766 minikube.k8s.io/updated_at=2024_12_04T21_08_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59 minikube.k8s.io/name=no-preload-534766 minikube.k8s.io/primary=true
	I1204 21:08:05.890450   71124 ops.go:34] apiserver oom_adj: -16
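The ops.go:34 line above records the kube-apiserver's OOM score adjustment (-16) by reading /proc/<pid>/oom_adj, which is what the bash one-liner two lines earlier does via pgrep. A short Go sketch of that read; passing the PID as an argument stands in for the pgrep lookup and is purely illustrative.

package main

import (
	"fmt"
	"log"
	"os"
	"strconv"
	"strings"
)

func main() {
	if len(os.Args) < 2 {
		log.Fatal("usage: oomadj <pid>   (e.g. the pid printed by pgrep kube-apiserver)")
	}
	pid := os.Args[1]
	data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		log.Fatal(err)
	}
	adj, err := strconv.Atoi(strings.TrimSpace(string(data)))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("apiserver oom_adj: %d\n", adj) // the run above reported -16
}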
	I1204 21:08:05.974841   71124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:06.475480   71124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:06.975296   71124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:07.475531   71124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:07.975657   71124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:08.475843   71124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:08.975210   71124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:09.475478   71124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:09.975481   71124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:10.101313   71124 kubeadm.go:1113] duration metric: took 4.248969972s to wait for elevateKubeSystemPrivileges
	I1204 21:08:10.101355   71124 kubeadm.go:394] duration metric: took 17.979262665s to StartCluster
	I1204 21:08:10.101378   71124 settings.go:142] acquiring lock: {Name:mk51df5708ef0b8fe125ead566b8d3e857234e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:08:10.101479   71124 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:08:10.102864   71124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/kubeconfig: {Name:mk338cb7deb77a607d0c199d94a556bdfd19bef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:08:10.103109   71124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1204 21:08:10.103120   71124 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.174 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 21:08:10.103211   71124 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 21:08:10.103313   71124 addons.go:69] Setting storage-provisioner=true in profile "no-preload-534766"
	I1204 21:08:10.103323   71124 addons.go:69] Setting default-storageclass=true in profile "no-preload-534766"
	I1204 21:08:10.103352   71124 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-534766"
	I1204 21:08:10.103328   71124 config.go:182] Loaded profile config "no-preload-534766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:08:10.103332   71124 addons.go:234] Setting addon storage-provisioner=true in "no-preload-534766"
	I1204 21:08:10.103446   71124 host.go:66] Checking if "no-preload-534766" exists ...
	I1204 21:08:10.103885   71124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:08:10.103877   71124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:08:10.103934   71124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:08:10.103948   71124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:08:10.104983   71124 out.go:177] * Verifying Kubernetes components...
	I1204 21:08:10.106480   71124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:08:10.123785   71124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38099
	I1204 21:08:10.123883   71124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43359
	I1204 21:08:10.124234   71124 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:08:10.124374   71124 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:08:10.124861   71124 main.go:141] libmachine: Using API Version  1
	I1204 21:08:10.124882   71124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:08:10.125030   71124 main.go:141] libmachine: Using API Version  1
	I1204 21:08:10.125056   71124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:08:10.125341   71124 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:08:10.125532   71124 main.go:141] libmachine: (no-preload-534766) Calling .GetState
	I1204 21:08:10.125585   71124 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:08:10.126142   71124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:08:10.126182   71124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:08:10.129193   71124 addons.go:234] Setting addon default-storageclass=true in "no-preload-534766"
	I1204 21:08:10.129238   71124 host.go:66] Checking if "no-preload-534766" exists ...
	I1204 21:08:10.129588   71124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:08:10.129625   71124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:08:10.147526   71124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38905
	I1204 21:08:10.148155   71124 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:08:10.148810   71124 main.go:141] libmachine: Using API Version  1
	I1204 21:08:10.148836   71124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:08:10.149351   71124 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:08:10.149615   71124 main.go:141] libmachine: (no-preload-534766) Calling .GetState
	I1204 21:08:10.149869   71124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36949
	I1204 21:08:10.150446   71124 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:08:10.150915   71124 main.go:141] libmachine: Using API Version  1
	I1204 21:08:10.150937   71124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:08:10.151260   71124 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:08:10.151885   71124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:08:10.151928   71124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:08:10.152627   71124 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:08:10.154482   71124 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:08:10.155995   71124 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:08:10.156019   71124 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 21:08:10.156039   71124 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:08:10.159786   71124 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:08:10.160295   71124 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:07:21 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:08:10.160318   71124 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:08:10.160502   71124 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:08:10.160663   71124 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:08:10.160762   71124 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:08:10.160853   71124 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:08:10.170522   71124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34791
	I1204 21:08:10.171135   71124 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:08:10.171748   71124 main.go:141] libmachine: Using API Version  1
	I1204 21:08:10.171777   71124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:08:10.172306   71124 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:08:10.172734   71124 main.go:141] libmachine: (no-preload-534766) Calling .GetState
	I1204 21:08:10.174732   71124 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:08:10.174990   71124 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 21:08:10.175009   71124 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 21:08:10.175027   71124 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:08:10.181811   71124 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:08:10.182352   71124 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:07:21 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:08:10.182375   71124 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:08:10.182524   71124 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:08:10.182650   71124 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:08:10.182745   71124 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:08:10.182842   71124 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:08:10.375812   71124 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:08:10.375873   71124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1204 21:08:10.533947   71124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:08:10.544659   71124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 21:08:06.983355   59256 pod_ready.go:103] pod "kube-proxy-dbc82" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:09.481478   59256 pod_ready.go:103] pod "kube-proxy-dbc82" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:10.975489   59256 pod_ready.go:82] duration metric: took 4m0.000079111s for pod "kube-proxy-dbc82" in "kube-system" namespace to be "Ready" ...
	E1204 21:08:10.975526   59256 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "kube-proxy-dbc82" in "kube-system" namespace to be "Ready" (will not retry!)
	I1204 21:08:10.975573   59256 pod_ready.go:39] duration metric: took 4m8.550625089s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:08:10.975619   59256 kubeadm.go:597] duration metric: took 4m16.380476362s to restartPrimaryControlPlane
	W1204 21:08:10.975675   59256 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1204 21:08:10.975709   59256 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1204 21:08:11.312337   71124 node_ready.go:35] waiting up to 6m0s for node "no-preload-534766" to be "Ready" ...
	I1204 21:08:11.312836   71124 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1204 21:08:11.358308   71124 node_ready.go:49] node "no-preload-534766" has status "Ready":"True"
	I1204 21:08:11.358336   71124 node_ready.go:38] duration metric: took 45.964875ms for node "no-preload-534766" to be "Ready" ...
	I1204 21:08:11.358349   71124 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:08:11.388209   71124 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-g7zhj" in "kube-system" namespace to be "Ready" ...
	I1204 21:08:11.730639   71124 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.196645778s)
	I1204 21:08:11.730680   71124 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.18598582s)
	I1204 21:08:11.730713   71124 main.go:141] libmachine: Making call to close driver server
	I1204 21:08:11.730717   71124 main.go:141] libmachine: Making call to close driver server
	I1204 21:08:11.730728   71124 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:08:11.730729   71124 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:08:11.731059   71124 main.go:141] libmachine: (no-preload-534766) DBG | Closing plugin on server side
	I1204 21:08:11.731085   71124 main.go:141] libmachine: (no-preload-534766) DBG | Closing plugin on server side
	I1204 21:08:11.731124   71124 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:08:11.731138   71124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:08:11.731156   71124 main.go:141] libmachine: Making call to close driver server
	I1204 21:08:11.731166   71124 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:08:11.731177   71124 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:08:11.731189   71124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:08:11.731197   71124 main.go:141] libmachine: Making call to close driver server
	I1204 21:08:11.731207   71124 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:08:11.731508   71124 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:08:11.731522   71124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:08:11.731616   71124 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:08:11.731629   71124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:08:11.731646   71124 main.go:141] libmachine: (no-preload-534766) DBG | Closing plugin on server side
	I1204 21:08:11.765720   71124 main.go:141] libmachine: Making call to close driver server
	I1204 21:08:11.765747   71124 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:08:11.766098   71124 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:08:11.766117   71124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:08:11.767695   71124 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1204 21:08:11.970711   72678 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1204 21:08:11.970779   72678 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 21:08:11.970869   72678 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 21:08:11.970973   72678 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 21:08:11.971050   72678 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1204 21:08:11.971103   72678 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 21:08:11.972789   72678 out.go:235]   - Generating certificates and keys ...
	I1204 21:08:11.972883   72678 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 21:08:11.972975   72678 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 21:08:11.973053   72678 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1204 21:08:11.973145   72678 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1204 21:08:11.973229   72678 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1204 21:08:11.973290   72678 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1204 21:08:11.973372   72678 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1204 21:08:11.973524   72678 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-566991 localhost] and IPs [192.168.39.82 127.0.0.1 ::1]
	I1204 21:08:11.973603   72678 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1204 21:08:11.973776   72678 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-566991 localhost] and IPs [192.168.39.82 127.0.0.1 ::1]
	I1204 21:08:11.973870   72678 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1204 21:08:11.973983   72678 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1204 21:08:11.974053   72678 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1204 21:08:11.974136   72678 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 21:08:11.974188   72678 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 21:08:11.974280   72678 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1204 21:08:11.974357   72678 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 21:08:11.974448   72678 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 21:08:11.974528   72678 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 21:08:11.974624   72678 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 21:08:11.974722   72678 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 21:08:11.976999   72678 out.go:235]   - Booting up control plane ...
	I1204 21:08:11.977120   72678 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 21:08:11.977230   72678 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 21:08:11.977318   72678 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 21:08:11.977449   72678 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 21:08:11.977566   72678 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 21:08:11.977625   72678 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 21:08:11.977788   72678 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1204 21:08:11.977923   72678 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1204 21:08:11.978002   72678 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 510.157034ms
	I1204 21:08:11.978094   72678 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1204 21:08:11.978169   72678 kubeadm.go:310] [api-check] The API server is healthy after 5.001257785s
	I1204 21:08:11.978306   72678 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1204 21:08:11.978469   72678 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1204 21:08:11.978540   72678 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1204 21:08:11.978690   72678 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-566991 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1204 21:08:11.978735   72678 kubeadm.go:310] [bootstrap-token] Using token: gnzm5j.1tnplqkzht748ruw
	I1204 21:08:11.980124   72678 out.go:235]   - Configuring RBAC rules ...
	I1204 21:08:11.980237   72678 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1204 21:08:11.980320   72678 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1204 21:08:11.980463   72678 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1204 21:08:11.980662   72678 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1204 21:08:11.980798   72678 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1204 21:08:11.980950   72678 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1204 21:08:11.981116   72678 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1204 21:08:11.981165   72678 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1204 21:08:11.981225   72678 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1204 21:08:11.981243   72678 kubeadm.go:310] 
	I1204 21:08:11.981331   72678 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1204 21:08:11.981349   72678 kubeadm.go:310] 
	I1204 21:08:11.981480   72678 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1204 21:08:11.981491   72678 kubeadm.go:310] 
	I1204 21:08:11.981512   72678 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1204 21:08:11.981560   72678 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1204 21:08:11.981618   72678 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1204 21:08:11.981627   72678 kubeadm.go:310] 
	I1204 21:08:11.981700   72678 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1204 21:08:11.981709   72678 kubeadm.go:310] 
	I1204 21:08:11.981780   72678 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1204 21:08:11.981788   72678 kubeadm.go:310] 
	I1204 21:08:11.981864   72678 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1204 21:08:11.981969   72678 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1204 21:08:11.982061   72678 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1204 21:08:11.982073   72678 kubeadm.go:310] 
	I1204 21:08:11.982208   72678 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1204 21:08:11.982304   72678 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1204 21:08:11.982321   72678 kubeadm.go:310] 
	I1204 21:08:11.982425   72678 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token gnzm5j.1tnplqkzht748ruw \
	I1204 21:08:11.982565   72678 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 \
	I1204 21:08:11.982598   72678 kubeadm.go:310] 	--control-plane 
	I1204 21:08:11.982615   72678 kubeadm.go:310] 
	I1204 21:08:11.982753   72678 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1204 21:08:11.982773   72678 kubeadm.go:310] 
	I1204 21:08:11.982887   72678 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token gnzm5j.1tnplqkzht748ruw \
	I1204 21:08:11.983045   72678 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 
	I1204 21:08:11.983059   72678 cni.go:84] Creating CNI manager for ""
	I1204 21:08:11.983067   72678 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:08:11.984514   72678 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 21:08:11.985605   72678 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 21:08:11.997506   72678 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1204 21:08:12.020287   72678 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 21:08:12.020382   72678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:12.020478   72678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-566991 minikube.k8s.io/updated_at=2024_12_04T21_08_12_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59 minikube.k8s.io/name=embed-certs-566991 minikube.k8s.io/primary=true
	I1204 21:08:10.340409   69222 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:08:10.340653   69222 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:08:11.769095   71124 addons.go:510] duration metric: took 1.665888351s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1204 21:08:11.829492   71124 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-534766" context rescaled to 1 replicas
	I1204 21:08:13.395202   71124 pod_ready.go:103] pod "coredns-7c65d6cfc9-g7zhj" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:12.282472   72678 ops.go:34] apiserver oom_adj: -16
	I1204 21:08:12.282643   72678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:12.782677   72678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:13.283401   72678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:13.783449   72678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:14.283601   72678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:14.783053   72678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:15.283662   72678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:15.783430   72678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:16.282965   72678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:16.378013   72678 kubeadm.go:1113] duration metric: took 4.357680671s to wait for elevateKubeSystemPrivileges
	I1204 21:08:16.378093   72678 kubeadm.go:394] duration metric: took 14.572753854s to StartCluster
	I1204 21:08:16.378117   72678 settings.go:142] acquiring lock: {Name:mk51df5708ef0b8fe125ead566b8d3e857234e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:08:16.378206   72678 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:08:16.380404   72678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/kubeconfig: {Name:mk338cb7deb77a607d0c199d94a556bdfd19bef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:08:16.380708   72678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1204 21:08:16.380704   72678 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.82 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 21:08:16.380732   72678 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 21:08:16.380819   72678 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-566991"
	I1204 21:08:16.380887   72678 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-566991"
	I1204 21:08:16.380931   72678 host.go:66] Checking if "embed-certs-566991" exists ...
	I1204 21:08:16.380939   72678 config.go:182] Loaded profile config "embed-certs-566991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:08:16.380830   72678 addons.go:69] Setting default-storageclass=true in profile "embed-certs-566991"
	I1204 21:08:16.381008   72678 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-566991"
	I1204 21:08:16.381419   72678 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:08:16.381474   72678 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:08:16.381489   72678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:08:16.381525   72678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:08:16.383297   72678 out.go:177] * Verifying Kubernetes components...
	I1204 21:08:16.384666   72678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:08:16.399427   72678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41739
	I1204 21:08:16.399666   72678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38205
	I1204 21:08:16.399962   72678 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:08:16.400084   72678 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:08:16.400579   72678 main.go:141] libmachine: Using API Version  1
	I1204 21:08:16.400607   72678 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:08:16.400700   72678 main.go:141] libmachine: Using API Version  1
	I1204 21:08:16.400722   72678 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:08:16.400950   72678 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:08:16.401121   72678 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:08:16.401299   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetState
	I1204 21:08:16.401552   72678 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:08:16.401598   72678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:08:16.405211   72678 addons.go:234] Setting addon default-storageclass=true in "embed-certs-566991"
	I1204 21:08:16.405292   72678 host.go:66] Checking if "embed-certs-566991" exists ...
	I1204 21:08:16.406038   72678 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:08:16.406122   72678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:08:16.418432   72678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43541
	I1204 21:08:16.419079   72678 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:08:16.419821   72678 main.go:141] libmachine: Using API Version  1
	I1204 21:08:16.419841   72678 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:08:16.420212   72678 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:08:16.420435   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetState
	I1204 21:08:16.422352   72678 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:08:16.425112   72678 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:08:16.426069   72678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44709
	I1204 21:08:16.426340   72678 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:08:16.426356   72678 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 21:08:16.426373   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:08:16.426450   72678 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:08:16.427255   72678 main.go:141] libmachine: Using API Version  1
	I1204 21:08:16.427283   72678 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:08:16.427748   72678 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:08:16.428381   72678 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:08:16.428424   72678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:08:16.429483   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:08:16.430011   72678 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:07:47 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:08:16.430036   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:08:16.430185   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:08:16.430362   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:08:16.430553   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:08:16.430711   72678 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:08:16.445952   72678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36803
	I1204 21:08:16.446471   72678 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:08:16.446968   72678 main.go:141] libmachine: Using API Version  1
	I1204 21:08:16.446991   72678 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:08:16.447416   72678 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:08:16.447784   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetState
	I1204 21:08:16.449656   72678 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:08:16.449846   72678 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 21:08:16.449868   72678 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 21:08:16.449886   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:08:16.452557   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:08:16.452849   72678 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:07:47 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:08:16.452867   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:08:16.452998   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:08:16.453117   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:08:16.453207   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:08:16.453292   72678 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:08:16.679677   72678 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:08:16.679743   72678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1204 21:08:16.800987   72678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 21:08:16.916178   72678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:08:17.238797   72678 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1204 21:08:17.238994   72678 main.go:141] libmachine: Making call to close driver server
	I1204 21:08:17.239021   72678 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:08:17.239400   72678 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:08:17.239421   72678 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:08:17.239425   72678 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:08:17.239442   72678 main.go:141] libmachine: Making call to close driver server
	I1204 21:08:17.239458   72678 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:08:17.240115   72678 node_ready.go:35] waiting up to 6m0s for node "embed-certs-566991" to be "Ready" ...
	I1204 21:08:17.240414   72678 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:08:17.240426   72678 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:08:17.240442   72678 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:08:17.256226   72678 node_ready.go:49] node "embed-certs-566991" has status "Ready":"True"
	I1204 21:08:17.256249   72678 node_ready.go:38] duration metric: took 16.113392ms for node "embed-certs-566991" to be "Ready" ...
	I1204 21:08:17.256258   72678 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:08:17.259096   72678 main.go:141] libmachine: Making call to close driver server
	I1204 21:08:17.259124   72678 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:08:17.259451   72678 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:08:17.259474   72678 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:08:17.259515   72678 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:08:17.271740   72678 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace to be "Ready" ...
	I1204 21:08:17.757039   72678 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-566991" context rescaled to 1 replicas
	I1204 21:08:17.808303   72678 main.go:141] libmachine: Making call to close driver server
	I1204 21:08:17.808336   72678 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:08:17.808628   72678 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:08:17.808669   72678 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:08:17.808676   72678 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:08:17.808685   72678 main.go:141] libmachine: Making call to close driver server
	I1204 21:08:17.808693   72678 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:08:17.808938   72678 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:08:17.808975   72678 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:08:17.808989   72678 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:08:17.810921   72678 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1204 21:08:15.894264   71124 pod_ready.go:103] pod "coredns-7c65d6cfc9-g7zhj" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:17.897138   71124 pod_ready.go:103] pod "coredns-7c65d6cfc9-g7zhj" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:20.395536   71124 pod_ready.go:103] pod "coredns-7c65d6cfc9-g7zhj" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:17.812308   72678 addons.go:510] duration metric: took 1.431577708s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1204 21:08:19.277994   72678 pod_ready.go:103] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:21.794982   72678 pod_ready.go:103] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:22.391369   71124 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-g7zhj" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-g7zhj" not found
	I1204 21:08:22.391429   71124 pod_ready.go:82] duration metric: took 11.003188512s for pod "coredns-7c65d6cfc9-g7zhj" in "kube-system" namespace to be "Ready" ...
	E1204 21:08:22.391441   71124 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-g7zhj" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-g7zhj" not found
	I1204 21:08:22.391448   71124 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace to be "Ready" ...
	I1204 21:08:24.397257   71124 pod_ready.go:103] pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:24.277468   72678 pod_ready.go:103] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:26.778854   72678 pod_ready.go:103] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:27.188853   59256 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (16.213112718s)
	I1204 21:08:27.188938   59256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:08:27.203898   59256 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:08:27.213582   59256 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:08:27.223300   59256 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:08:27.223321   59256 kubeadm.go:157] found existing configuration files:
	
	I1204 21:08:27.223358   59256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:08:27.232202   59256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:08:27.232276   59256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:08:27.241363   59256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:08:27.250252   59256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:08:27.250303   59256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:08:27.259133   59256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:08:27.268175   59256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:08:27.268239   59256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:08:27.277670   59256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:08:27.286990   59256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:08:27.287048   59256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:08:27.296403   59256 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 21:08:27.340569   59256 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1204 21:08:27.340625   59256 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 21:08:27.445878   59256 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 21:08:27.446054   59256 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 21:08:27.446202   59256 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1204 21:08:27.454808   59256 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 21:08:27.456687   59256 out.go:235]   - Generating certificates and keys ...
	I1204 21:08:27.456793   59256 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 21:08:27.456884   59256 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 21:08:27.457023   59256 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1204 21:08:27.457128   59256 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1204 21:08:27.457263   59256 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1204 21:08:27.457352   59256 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1204 21:08:27.457451   59256 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1204 21:08:27.457536   59256 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1204 21:08:27.457668   59256 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1204 21:08:27.457771   59256 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1204 21:08:27.457825   59256 kubeadm.go:310] [certs] Using the existing "sa" key
	I1204 21:08:27.457915   59256 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 21:08:27.671215   59256 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 21:08:27.897750   59256 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1204 21:08:28.005069   59256 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 21:08:28.099087   59256 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 21:08:28.311253   59256 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 21:08:28.311880   59256 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 21:08:28.314578   59256 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 21:08:26.397414   71124 pod_ready.go:103] pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:28.399521   71124 pod_ready.go:103] pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:28.316533   59256 out.go:235]   - Booting up control plane ...
	I1204 21:08:28.316662   59256 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 21:08:28.316780   59256 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 21:08:28.317037   59256 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 21:08:28.335526   59256 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 21:08:28.341982   59256 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 21:08:28.342046   59256 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 21:08:28.481518   59256 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1204 21:08:28.481663   59256 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1204 21:08:29.483448   59256 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001974582s
	I1204 21:08:29.483539   59256 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1204 21:08:29.277617   72678 pod_ready.go:103] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:31.279586   72678 pod_ready.go:103] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:33.986377   59256 kubeadm.go:310] [api-check] The API server is healthy after 4.502987113s
	I1204 21:08:34.005118   59256 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1204 21:08:34.023455   59256 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1204 21:08:34.054044   59256 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1204 21:08:34.054233   59256 kubeadm.go:310] [mark-control-plane] Marking the node pause-998149 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1204 21:08:34.066313   59256 kubeadm.go:310] [bootstrap-token] Using token: xc4tcq.dtygxz0gdfvy2txn
	I1204 21:08:34.067829   59256 out.go:235]   - Configuring RBAC rules ...
	I1204 21:08:34.067956   59256 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1204 21:08:34.073198   59256 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1204 21:08:34.083524   59256 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1204 21:08:34.087117   59256 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1204 21:08:34.092501   59256 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1204 21:08:34.097860   59256 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1204 21:08:34.395239   59256 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1204 21:08:34.820061   59256 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1204 21:08:35.395146   59256 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1204 21:08:35.395190   59256 kubeadm.go:310] 
	I1204 21:08:35.395284   59256 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1204 21:08:35.395297   59256 kubeadm.go:310] 
	I1204 21:08:35.395425   59256 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1204 21:08:35.395443   59256 kubeadm.go:310] 
	I1204 21:08:35.395493   59256 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1204 21:08:35.395588   59256 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1204 21:08:35.395670   59256 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1204 21:08:35.395685   59256 kubeadm.go:310] 
	I1204 21:08:35.395764   59256 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1204 21:08:35.395774   59256 kubeadm.go:310] 
	I1204 21:08:35.395857   59256 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1204 21:08:35.395874   59256 kubeadm.go:310] 
	I1204 21:08:35.395942   59256 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1204 21:08:35.396052   59256 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1204 21:08:35.396160   59256 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1204 21:08:35.396172   59256 kubeadm.go:310] 
	I1204 21:08:35.396287   59256 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1204 21:08:35.396385   59256 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1204 21:08:35.396396   59256 kubeadm.go:310] 
	I1204 21:08:35.396502   59256 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xc4tcq.dtygxz0gdfvy2txn \
	I1204 21:08:35.396626   59256 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 \
	I1204 21:08:35.396662   59256 kubeadm.go:310] 	--control-plane 
	I1204 21:08:35.396670   59256 kubeadm.go:310] 
	I1204 21:08:35.396773   59256 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1204 21:08:35.396785   59256 kubeadm.go:310] 
	I1204 21:08:35.396888   59256 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xc4tcq.dtygxz0gdfvy2txn \
	I1204 21:08:35.397034   59256 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 
	I1204 21:08:35.398051   59256 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 21:08:35.398135   59256 cni.go:84] Creating CNI manager for ""
	I1204 21:08:35.398152   59256 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:08:35.399886   59256 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 21:08:30.898268   71124 pod_ready.go:103] pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:33.398208   71124 pod_ready.go:103] pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:35.398254   71124 pod_ready.go:103] pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:35.401133   59256 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 21:08:35.412531   59256 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1204 21:08:35.431751   59256 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 21:08:35.431842   59256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:35.431849   59256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes pause-998149 minikube.k8s.io/updated_at=2024_12_04T21_08_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59 minikube.k8s.io/name=pause-998149 minikube.k8s.io/primary=true
	I1204 21:08:35.567045   59256 ops.go:34] apiserver oom_adj: -16
	I1204 21:08:35.585450   59256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:36.085573   59256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:33.778383   72678 pod_ready.go:103] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:36.276959   72678 pod_ready.go:103] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:36.585830   59256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:37.085688   59256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:37.585500   59256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:38.086326   59256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:38.586132   59256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:39.086141   59256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:39.585741   59256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:39.690656   59256 kubeadm.go:1113] duration metric: took 4.258882564s to wait for elevateKubeSystemPrivileges
	I1204 21:08:39.690699   59256 kubeadm.go:394] duration metric: took 4m45.203022063s to StartCluster
	I1204 21:08:39.690716   59256 settings.go:142] acquiring lock: {Name:mk51df5708ef0b8fe125ead566b8d3e857234e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:08:39.690831   59256 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:08:39.692398   59256 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/kubeconfig: {Name:mk338cb7deb77a607d0c199d94a556bdfd19bef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:08:39.692660   59256 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.167 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 21:08:39.692727   59256 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 21:08:39.692909   59256 config.go:182] Loaded profile config "pause-998149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:08:39.694779   59256 out.go:177] * Verifying Kubernetes components...
	I1204 21:08:39.694782   59256 out.go:177] * Enabled addons: 
	I1204 21:08:37.398430   71124 pod_ready.go:103] pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:39.400020   71124 pod_ready.go:103] pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:39.696270   59256 addons.go:510] duration metric: took 3.552911ms for enable addons: enabled=[]
	I1204 21:08:39.696294   59256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:08:39.857232   59256 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:08:39.877674   59256 node_ready.go:35] waiting up to 6m0s for node "pause-998149" to be "Ready" ...
	I1204 21:08:39.898413   59256 node_ready.go:49] node "pause-998149" has status "Ready":"True"
	I1204 21:08:39.898437   59256 node_ready.go:38] duration metric: took 20.724831ms for node "pause-998149" to be "Ready" ...
	I1204 21:08:39.898446   59256 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:08:39.908341   59256 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-998149" in "kube-system" namespace to be "Ready" ...
	I1204 21:08:38.278077   72678 pod_ready.go:103] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:40.278513   72678 pod_ready.go:103] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:41.898995   71124 pod_ready.go:103] pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:44.400027   71124 pod_ready.go:103] pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:41.916228   59256 pod_ready.go:103] pod "etcd-pause-998149" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:44.416909   59256 pod_ready.go:103] pod "etcd-pause-998149" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:45.920521   59256 pod_ready.go:93] pod "etcd-pause-998149" in "kube-system" namespace has status "Ready":"True"
	I1204 21:08:45.920546   59256 pod_ready.go:82] duration metric: took 6.012180216s for pod "etcd-pause-998149" in "kube-system" namespace to be "Ready" ...
	I1204 21:08:45.920557   59256 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-998149" in "kube-system" namespace to be "Ready" ...
	I1204 21:08:42.778432   72678 pod_ready.go:103] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:44.779362   72678 pod_ready.go:103] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:47.926658   59256 pod_ready.go:103] pod "kube-apiserver-pause-998149" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:48.427851   59256 pod_ready.go:93] pod "kube-apiserver-pause-998149" in "kube-system" namespace has status "Ready":"True"
	I1204 21:08:48.427877   59256 pod_ready.go:82] duration metric: took 2.507313013s for pod "kube-apiserver-pause-998149" in "kube-system" namespace to be "Ready" ...
	I1204 21:08:48.427889   59256 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-998149" in "kube-system" namespace to be "Ready" ...
	I1204 21:08:48.934599   59256 pod_ready.go:93] pod "kube-controller-manager-pause-998149" in "kube-system" namespace has status "Ready":"True"
	I1204 21:08:48.934624   59256 pod_ready.go:82] duration metric: took 506.730073ms for pod "kube-controller-manager-pause-998149" in "kube-system" namespace to be "Ready" ...
	I1204 21:08:48.934635   59256 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7pttk" in "kube-system" namespace to be "Ready" ...
	I1204 21:08:48.939665   59256 pod_ready.go:93] pod "kube-proxy-7pttk" in "kube-system" namespace has status "Ready":"True"
	I1204 21:08:48.939688   59256 pod_ready.go:82] duration metric: took 5.04711ms for pod "kube-proxy-7pttk" in "kube-system" namespace to be "Ready" ...
	I1204 21:08:48.939697   59256 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-998149" in "kube-system" namespace to be "Ready" ...
	I1204 21:08:48.943862   59256 pod_ready.go:93] pod "kube-scheduler-pause-998149" in "kube-system" namespace has status "Ready":"True"
	I1204 21:08:48.943882   59256 pod_ready.go:82] duration metric: took 4.179617ms for pod "kube-scheduler-pause-998149" in "kube-system" namespace to be "Ready" ...
	I1204 21:08:48.943889   59256 pod_ready.go:39] duration metric: took 9.045432923s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:08:48.943905   59256 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:08:48.943951   59256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:08:48.958885   59256 api_server.go:72] duration metric: took 9.266187782s to wait for apiserver process to appear ...
	I1204 21:08:48.958916   59256 api_server.go:88] waiting for apiserver healthz status ...
	I1204 21:08:48.958939   59256 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I1204 21:08:48.963978   59256 api_server.go:279] https://192.168.50.167:8443/healthz returned 200:
	ok
	I1204 21:08:48.965158   59256 api_server.go:141] control plane version: v1.31.2
	I1204 21:08:48.965186   59256 api_server.go:131] duration metric: took 6.261921ms to wait for apiserver health ...
	I1204 21:08:48.965197   59256 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 21:08:48.973273   59256 system_pods.go:59] 7 kube-system pods found
	I1204 21:08:48.973310   59256 system_pods.go:61] "coredns-7c65d6cfc9-26bcn" [c5953763-c59b-4ca3-9c1f-fc0dbfb8d3f6] Running
	I1204 21:08:48.973319   59256 system_pods.go:61] "coredns-7c65d6cfc9-kfdvp" [935bb8b3-28ee-47d5-a525-b5bc7d882a63] Running
	I1204 21:08:48.973325   59256 system_pods.go:61] "etcd-pause-998149" [e836a3f5-4def-4413-a47c-d61eb18454e0] Running
	I1204 21:08:48.973330   59256 system_pods.go:61] "kube-apiserver-pause-998149" [51be41ea-85c5-4ef3-b980-d29c0ff81e50] Running
	I1204 21:08:48.973336   59256 system_pods.go:61] "kube-controller-manager-pause-998149" [10333131-5aab-4acb-8cf6-474a76909b71] Running
	I1204 21:08:48.973341   59256 system_pods.go:61] "kube-proxy-7pttk" [b9aa9037-580d-4c10-ba44-1e3925516a2a] Running
	I1204 21:08:48.973345   59256 system_pods.go:61] "kube-scheduler-pause-998149" [2eb295d4-abd3-4fd9-a972-f67b8887580b] Running
	I1204 21:08:48.973353   59256 system_pods.go:74] duration metric: took 8.149225ms to wait for pod list to return data ...
	I1204 21:08:48.973368   59256 default_sa.go:34] waiting for default service account to be created ...
	I1204 21:08:48.976388   59256 default_sa.go:45] found service account: "default"
	I1204 21:08:48.976410   59256 default_sa.go:55] duration metric: took 3.03664ms for default service account to be created ...
	I1204 21:08:48.976418   59256 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 21:08:49.027461   59256 system_pods.go:86] 7 kube-system pods found
	I1204 21:08:49.027495   59256 system_pods.go:89] "coredns-7c65d6cfc9-26bcn" [c5953763-c59b-4ca3-9c1f-fc0dbfb8d3f6] Running
	I1204 21:08:49.027502   59256 system_pods.go:89] "coredns-7c65d6cfc9-kfdvp" [935bb8b3-28ee-47d5-a525-b5bc7d882a63] Running
	I1204 21:08:49.027506   59256 system_pods.go:89] "etcd-pause-998149" [e836a3f5-4def-4413-a47c-d61eb18454e0] Running
	I1204 21:08:49.027512   59256 system_pods.go:89] "kube-apiserver-pause-998149" [51be41ea-85c5-4ef3-b980-d29c0ff81e50] Running
	I1204 21:08:49.027517   59256 system_pods.go:89] "kube-controller-manager-pause-998149" [10333131-5aab-4acb-8cf6-474a76909b71] Running
	I1204 21:08:49.027523   59256 system_pods.go:89] "kube-proxy-7pttk" [b9aa9037-580d-4c10-ba44-1e3925516a2a] Running
	I1204 21:08:49.027529   59256 system_pods.go:89] "kube-scheduler-pause-998149" [2eb295d4-abd3-4fd9-a972-f67b8887580b] Running
	I1204 21:08:49.027538   59256 system_pods.go:126] duration metric: took 51.113088ms to wait for k8s-apps to be running ...
	I1204 21:08:49.027550   59256 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 21:08:49.027606   59256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:08:49.042605   59256 system_svc.go:56] duration metric: took 15.043872ms WaitForService to wait for kubelet
	I1204 21:08:49.042644   59256 kubeadm.go:582] duration metric: took 9.349953022s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 21:08:49.042670   59256 node_conditions.go:102] verifying NodePressure condition ...
	I1204 21:08:49.225365   59256 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 21:08:49.225400   59256 node_conditions.go:123] node cpu capacity is 2
	I1204 21:08:49.225414   59256 node_conditions.go:105] duration metric: took 182.738924ms to run NodePressure ...
	I1204 21:08:49.225433   59256 start.go:241] waiting for startup goroutines ...
	I1204 21:08:49.225443   59256 start.go:246] waiting for cluster config update ...
	I1204 21:08:49.225454   59256 start.go:255] writing updated cluster config ...
	I1204 21:08:49.225788   59256 ssh_runner.go:195] Run: rm -f paused
	I1204 21:08:49.274311   59256 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1204 21:08:49.277409   59256 out.go:177] * Done! kubectl is now configured to use "pause-998149" cluster and "default" namespace by default
	W1204 21:08:49.287265   59256 root.go:91] failed to log command end to audit: failed to find a log row with id equals to ce586688-68db-401e-9e1f-efff798fec8c
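Editor's note: the pod_ready.go entries in the run above poll each system pod until its "Ready" condition is True (etcd-pause-998149 took ~6s, kube-apiserver-pause-998149 ~2.5s). Below is a minimal client-go sketch of that kind of check, not minikube's actual implementation. The kubeconfig path and pod name are taken from the log above; the polling cadence is illustrative.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as written by this run (see settings.go above); adjust locally.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19985-10581/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	const ns, name = "kube-system", "etcd-pause-998149" // pod name from the log above
	for {
		pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Printf("pod %q has status \"Ready\":\"True\"\n", name)
					return
				}
			}
		}
		time.Sleep(2 * time.Second) // the run above re-checks on a similar cadence
	}
}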
	
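After the pods go Ready, api_server.go probes https://192.168.50.167:8443/healthz and sees 200/ok before declaring the control plane healthy. A bare-bones way to reproduce that probe from the host is sketched below, assuming anonymous access to /healthz is still allowed (the kubeadm default via system:public-info-viewer) and skipping CA verification for brevity; minikube's own check goes through its configured client instead.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Endpoint taken from the log above. TLS verification is skipped only because
	// this sketch does not load the cluster CA.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.50.167:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // the run above saw: 200 ok
}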
	
	==> CRI-O <==
	Dec 04 21:08:49 pause-998149 crio[2728]: time="2024-12-04 21:08:49.961525021Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733346529961500587,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=13739e98-d262-4687-920c-25c332d9b174 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:08:49 pause-998149 crio[2728]: time="2024-12-04 21:08:49.962113882Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5e20601a-2460-44ac-bd05-e72e67aef719 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:08:49 pause-998149 crio[2728]: time="2024-12-04 21:08:49.962217757Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5e20601a-2460-44ac-bd05-e72e67aef719 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:08:49 pause-998149 crio[2728]: time="2024-12-04 21:08:49.962497772Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3fc6609ca27763570f8ad2da7d746c3d800ac2515b1be31fd14eb994bd3b65ab,PodSandboxId:b698035ef5a94c25406609f3a6952c08ebf31a158f08a0cd9d9279d08576204e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733346521593738135,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7pttk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9aa9037-580d-4c10-ba44-1e3925516a2a,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e112e39dfc622e08869aaa2756f0b607c3333955840fd4092aea1f6c1007a84e,PodSandboxId:24ba88c8cf40cdedfa7df0ef11e747823b1fc5d80303ae4928e48d1a31fb7357,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733346521115530306,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-26bcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5953763-c59b-4ca3-9c1f-fc0dbfb8d3f6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:205acfde19589c730fdee388bd6aa489f23d896e3e1e055f59bb7417dfadbcd2,PodSandboxId:23cc6ff77c824e2e2adb9f4faee61ee1fecd0218d9e6621f66dc6e745341d90a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733346521011282642,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kfdvp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 935bb8b3-28ee-47d5-a525-b5b
c7d882a63,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb28c2641a67b3f413988577a36458abd1e8bf77142c0c9fc2364e8db36c6575,PodSandboxId:fcbf293fe00ee41e6c08819272fcbc02d5e91bc68fc0444303b28f12fc18c861,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733346509712968162,L
abels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-998149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b5cadafa3999482c9c56cd0d243530b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c9becd8f0cc465da0169f24117ca9b03d91d8a4e9e2746053330520e8a1d0d7,PodSandboxId:61aa6a222b05f43e34965821b57c11a5a855ce2495f6bbc1b3223df7d248cf7f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733346509713854546,Labels:map[string]s
tring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-998149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92270b16e848466f547773d21b0a2052,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46cc096ccdfdb089a2f1623d27802b3d52290bff0c58ee639e7bbfeae9795497,PodSandboxId:990ab76226072515366431e87ab70657c8b5bd44f4e8fbd8b5b3776ad4a74e8e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733346509676176448,Labels:map[string]string{io.
kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-998149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7420b9e88cd9b0b347ca4677458e1eb3,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53022095ca4eb2fe9e14b6a57600ca675b121bfcb657b57891a14209084ad442,PodSandboxId:b85dcb73b65939f0d297f9bc02b07b3b9a58180b73507dbb2510cf1ad66d3cf4,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733346509650309427,Labels:map[string]string{io.kubernetes
.container.name: etcd,io.kubernetes.pod.name: etcd-pause-998149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3d8c9424ffc9fb9d2107a63eadf4f32,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebbf0704fb917629adae45d88b595b95bada24ec4c33263876e5738c9467a723,PodSandboxId:955e5ffba3992e686e8c4b005cfb9a0fbcd1b7f3a41328f34caa27fed4e51c34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733346237116731688,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kube
rnetes.pod.name: kube-apiserver-pause-998149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92270b16e848466f547773d21b0a2052,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5e20601a-2460-44ac-bd05-e72e67aef719 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:08:50 pause-998149 crio[2728]: time="2024-12-04 21:08:50.003318106Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e9cc9bdf-b0ad-4e17-a81e-9a8f7cf4576f name=/runtime.v1.RuntimeService/Version
	Dec 04 21:08:50 pause-998149 crio[2728]: time="2024-12-04 21:08:50.003411871Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e9cc9bdf-b0ad-4e17-a81e-9a8f7cf4576f name=/runtime.v1.RuntimeService/Version
	Dec 04 21:08:50 pause-998149 crio[2728]: time="2024-12-04 21:08:50.005163689Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=28360048-51a7-4cab-9c98-7e33e64d7f6c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:08:50 pause-998149 crio[2728]: time="2024-12-04 21:08:50.005668461Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733346530005640534,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=28360048-51a7-4cab-9c98-7e33e64d7f6c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:08:50 pause-998149 crio[2728]: time="2024-12-04 21:08:50.006403071Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cbce0637-7f50-483a-8b19-d9235f72b192 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:08:50 pause-998149 crio[2728]: time="2024-12-04 21:08:50.006469104Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cbce0637-7f50-483a-8b19-d9235f72b192 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:08:50 pause-998149 crio[2728]: time="2024-12-04 21:08:50.006690725Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3fc6609ca27763570f8ad2da7d746c3d800ac2515b1be31fd14eb994bd3b65ab,PodSandboxId:b698035ef5a94c25406609f3a6952c08ebf31a158f08a0cd9d9279d08576204e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733346521593738135,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7pttk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9aa9037-580d-4c10-ba44-1e3925516a2a,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e112e39dfc622e08869aaa2756f0b607c3333955840fd4092aea1f6c1007a84e,PodSandboxId:24ba88c8cf40cdedfa7df0ef11e747823b1fc5d80303ae4928e48d1a31fb7357,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733346521115530306,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-26bcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5953763-c59b-4ca3-9c1f-fc0dbfb8d3f6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:205acfde19589c730fdee388bd6aa489f23d896e3e1e055f59bb7417dfadbcd2,PodSandboxId:23cc6ff77c824e2e2adb9f4faee61ee1fecd0218d9e6621f66dc6e745341d90a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733346521011282642,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kfdvp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 935bb8b3-28ee-47d5-a525-b5b
c7d882a63,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb28c2641a67b3f413988577a36458abd1e8bf77142c0c9fc2364e8db36c6575,PodSandboxId:fcbf293fe00ee41e6c08819272fcbc02d5e91bc68fc0444303b28f12fc18c861,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733346509712968162,L
abels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-998149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b5cadafa3999482c9c56cd0d243530b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c9becd8f0cc465da0169f24117ca9b03d91d8a4e9e2746053330520e8a1d0d7,PodSandboxId:61aa6a222b05f43e34965821b57c11a5a855ce2495f6bbc1b3223df7d248cf7f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733346509713854546,Labels:map[string]s
tring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-998149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92270b16e848466f547773d21b0a2052,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46cc096ccdfdb089a2f1623d27802b3d52290bff0c58ee639e7bbfeae9795497,PodSandboxId:990ab76226072515366431e87ab70657c8b5bd44f4e8fbd8b5b3776ad4a74e8e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733346509676176448,Labels:map[string]string{io.
kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-998149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7420b9e88cd9b0b347ca4677458e1eb3,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53022095ca4eb2fe9e14b6a57600ca675b121bfcb657b57891a14209084ad442,PodSandboxId:b85dcb73b65939f0d297f9bc02b07b3b9a58180b73507dbb2510cf1ad66d3cf4,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733346509650309427,Labels:map[string]string{io.kubernetes
.container.name: etcd,io.kubernetes.pod.name: etcd-pause-998149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3d8c9424ffc9fb9d2107a63eadf4f32,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebbf0704fb917629adae45d88b595b95bada24ec4c33263876e5738c9467a723,PodSandboxId:955e5ffba3992e686e8c4b005cfb9a0fbcd1b7f3a41328f34caa27fed4e51c34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733346237116731688,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kube
rnetes.pod.name: kube-apiserver-pause-998149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92270b16e848466f547773d21b0a2052,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cbce0637-7f50-483a-8b19-d9235f72b192 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:08:50 pause-998149 crio[2728]: time="2024-12-04 21:08:50.047236891Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=37b2779b-c2d5-47a3-8e9b-50770d1cb89b name=/runtime.v1.RuntimeService/Version
	Dec 04 21:08:50 pause-998149 crio[2728]: time="2024-12-04 21:08:50.047328008Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=37b2779b-c2d5-47a3-8e9b-50770d1cb89b name=/runtime.v1.RuntimeService/Version
	Dec 04 21:08:50 pause-998149 crio[2728]: time="2024-12-04 21:08:50.048137406Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f06add64-a847-4d53-b536-4484258f1cde name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:08:50 pause-998149 crio[2728]: time="2024-12-04 21:08:50.048644406Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733346530048618099,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f06add64-a847-4d53-b536-4484258f1cde name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:08:50 pause-998149 crio[2728]: time="2024-12-04 21:08:50.049124963Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8d232971-8e28-4a48-9442-fdd3b0f55149 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:08:50 pause-998149 crio[2728]: time="2024-12-04 21:08:50.049224897Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8d232971-8e28-4a48-9442-fdd3b0f55149 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:08:50 pause-998149 crio[2728]: time="2024-12-04 21:08:50.049437909Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3fc6609ca27763570f8ad2da7d746c3d800ac2515b1be31fd14eb994bd3b65ab,PodSandboxId:b698035ef5a94c25406609f3a6952c08ebf31a158f08a0cd9d9279d08576204e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733346521593738135,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7pttk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9aa9037-580d-4c10-ba44-1e3925516a2a,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e112e39dfc622e08869aaa2756f0b607c3333955840fd4092aea1f6c1007a84e,PodSandboxId:24ba88c8cf40cdedfa7df0ef11e747823b1fc5d80303ae4928e48d1a31fb7357,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733346521115530306,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-26bcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5953763-c59b-4ca3-9c1f-fc0dbfb8d3f6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:205acfde19589c730fdee388bd6aa489f23d896e3e1e055f59bb7417dfadbcd2,PodSandboxId:23cc6ff77c824e2e2adb9f4faee61ee1fecd0218d9e6621f66dc6e745341d90a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733346521011282642,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kfdvp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 935bb8b3-28ee-47d5-a525-b5b
c7d882a63,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb28c2641a67b3f413988577a36458abd1e8bf77142c0c9fc2364e8db36c6575,PodSandboxId:fcbf293fe00ee41e6c08819272fcbc02d5e91bc68fc0444303b28f12fc18c861,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733346509712968162,L
abels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-998149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b5cadafa3999482c9c56cd0d243530b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c9becd8f0cc465da0169f24117ca9b03d91d8a4e9e2746053330520e8a1d0d7,PodSandboxId:61aa6a222b05f43e34965821b57c11a5a855ce2495f6bbc1b3223df7d248cf7f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733346509713854546,Labels:map[string]s
tring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-998149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92270b16e848466f547773d21b0a2052,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46cc096ccdfdb089a2f1623d27802b3d52290bff0c58ee639e7bbfeae9795497,PodSandboxId:990ab76226072515366431e87ab70657c8b5bd44f4e8fbd8b5b3776ad4a74e8e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733346509676176448,Labels:map[string]string{io.
kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-998149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7420b9e88cd9b0b347ca4677458e1eb3,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53022095ca4eb2fe9e14b6a57600ca675b121bfcb657b57891a14209084ad442,PodSandboxId:b85dcb73b65939f0d297f9bc02b07b3b9a58180b73507dbb2510cf1ad66d3cf4,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733346509650309427,Labels:map[string]string{io.kubernetes
.container.name: etcd,io.kubernetes.pod.name: etcd-pause-998149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3d8c9424ffc9fb9d2107a63eadf4f32,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebbf0704fb917629adae45d88b595b95bada24ec4c33263876e5738c9467a723,PodSandboxId:955e5ffba3992e686e8c4b005cfb9a0fbcd1b7f3a41328f34caa27fed4e51c34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733346237116731688,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kube
rnetes.pod.name: kube-apiserver-pause-998149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92270b16e848466f547773d21b0a2052,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8d232971-8e28-4a48-9442-fdd3b0f55149 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:08:50 pause-998149 crio[2728]: time="2024-12-04 21:08:50.086782761Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fd335ad6-b3b2-485c-97ee-375dc1c7fdf3 name=/runtime.v1.RuntimeService/Version
	Dec 04 21:08:50 pause-998149 crio[2728]: time="2024-12-04 21:08:50.086911775Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fd335ad6-b3b2-485c-97ee-375dc1c7fdf3 name=/runtime.v1.RuntimeService/Version
	Dec 04 21:08:50 pause-998149 crio[2728]: time="2024-12-04 21:08:50.088601263Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=46388290-f1ab-4c82-a85d-cc8f812ecf48 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:08:50 pause-998149 crio[2728]: time="2024-12-04 21:08:50.089410723Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733346530089374703,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=46388290-f1ab-4c82-a85d-cc8f812ecf48 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:08:50 pause-998149 crio[2728]: time="2024-12-04 21:08:50.090464829Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3fb776b2-84a5-4575-9d2a-42fc33b6d60d name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:08:50 pause-998149 crio[2728]: time="2024-12-04 21:08:50.090549070Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3fb776b2-84a5-4575-9d2a-42fc33b6d60d name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:08:50 pause-998149 crio[2728]: time="2024-12-04 21:08:50.090757821Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3fc6609ca27763570f8ad2da7d746c3d800ac2515b1be31fd14eb994bd3b65ab,PodSandboxId:b698035ef5a94c25406609f3a6952c08ebf31a158f08a0cd9d9279d08576204e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733346521593738135,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7pttk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9aa9037-580d-4c10-ba44-1e3925516a2a,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e112e39dfc622e08869aaa2756f0b607c3333955840fd4092aea1f6c1007a84e,PodSandboxId:24ba88c8cf40cdedfa7df0ef11e747823b1fc5d80303ae4928e48d1a31fb7357,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733346521115530306,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-26bcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5953763-c59b-4ca3-9c1f-fc0dbfb8d3f6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:205acfde19589c730fdee388bd6aa489f23d896e3e1e055f59bb7417dfadbcd2,PodSandboxId:23cc6ff77c824e2e2adb9f4faee61ee1fecd0218d9e6621f66dc6e745341d90a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733346521011282642,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kfdvp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 935bb8b3-28ee-47d5-a525-b5b
c7d882a63,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb28c2641a67b3f413988577a36458abd1e8bf77142c0c9fc2364e8db36c6575,PodSandboxId:fcbf293fe00ee41e6c08819272fcbc02d5e91bc68fc0444303b28f12fc18c861,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733346509712968162,L
abels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-998149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b5cadafa3999482c9c56cd0d243530b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c9becd8f0cc465da0169f24117ca9b03d91d8a4e9e2746053330520e8a1d0d7,PodSandboxId:61aa6a222b05f43e34965821b57c11a5a855ce2495f6bbc1b3223df7d248cf7f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733346509713854546,Labels:map[string]s
tring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-998149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92270b16e848466f547773d21b0a2052,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46cc096ccdfdb089a2f1623d27802b3d52290bff0c58ee639e7bbfeae9795497,PodSandboxId:990ab76226072515366431e87ab70657c8b5bd44f4e8fbd8b5b3776ad4a74e8e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733346509676176448,Labels:map[string]string{io.
kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-998149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7420b9e88cd9b0b347ca4677458e1eb3,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53022095ca4eb2fe9e14b6a57600ca675b121bfcb657b57891a14209084ad442,PodSandboxId:b85dcb73b65939f0d297f9bc02b07b3b9a58180b73507dbb2510cf1ad66d3cf4,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733346509650309427,Labels:map[string]string{io.kubernetes
.container.name: etcd,io.kubernetes.pod.name: etcd-pause-998149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3d8c9424ffc9fb9d2107a63eadf4f32,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebbf0704fb917629adae45d88b595b95bada24ec4c33263876e5738c9467a723,PodSandboxId:955e5ffba3992e686e8c4b005cfb9a0fbcd1b7f3a41328f34caa27fed4e51c34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733346237116731688,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kube
rnetes.pod.name: kube-apiserver-pause-998149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92270b16e848466f547773d21b0a2052,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3fb776b2-84a5-4575-9d2a-42fc33b6d60d name=/runtime.v1.RuntimeService/ListContainers
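The CRI-O block above is the server-side trace of repeated /runtime.v1.RuntimeService/ListContainers calls; the container table in the next section is built from the same data. A minimal client-side sketch of that call against the node's CRI socket follows (the socket path matches the cri-socket annotation in the "describe nodes" section below); module versions and the insecure local connection are assumptions for illustration, and it must run on the node itself, e.g. via minikube ssh.

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the CRI-O socket advertised by the node annotation shown further down.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.ListContainers(context.TODO(), &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		// Same fields the report's container-status table summarizes.
		fmt.Printf("%s  %s  %s  attempt=%d\n", c.Id, c.State, c.Metadata.Name, c.Metadata.Attempt)
	}
}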
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3fc6609ca2776       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   8 seconds ago       Running             kube-proxy                0                   b698035ef5a94       kube-proxy-7pttk
	e112e39dfc622       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 seconds ago       Running             coredns                   0                   24ba88c8cf40c       coredns-7c65d6cfc9-26bcn
	205acfde19589       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 seconds ago       Running             coredns                   0                   23cc6ff77c824       coredns-7c65d6cfc9-kfdvp
	6c9becd8f0cc4       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   20 seconds ago      Running             kube-apiserver            3                   61aa6a222b05f       kube-apiserver-pause-998149
	bb28c2641a67b       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   20 seconds ago      Running             kube-scheduler            3                   fcbf293fe00ee       kube-scheduler-pause-998149
	46cc096ccdfdb       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   20 seconds ago      Running             kube-controller-manager   3                   990ab76226072       kube-controller-manager-pause-998149
	53022095ca4eb       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   20 seconds ago      Running             etcd                      3                   b85dcb73b6593       etcd-pause-998149
	ebbf0704fb917       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   4 minutes ago       Exited              kube-apiserver            2                   955e5ffba3992       kube-apiserver-pause-998149
	
	
	==> coredns [205acfde19589c730fdee388bd6aa489f23d896e3e1e055f59bb7417dfadbcd2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [e112e39dfc622e08869aaa2756f0b607c3333955840fd4092aea1f6c1007a84e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               pause-998149
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-998149
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59
	                    minikube.k8s.io/name=pause-998149
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_04T21_08_35_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 21:08:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-998149
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 21:08:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 21:08:45 +0000   Wed, 04 Dec 2024 21:08:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 21:08:45 +0000   Wed, 04 Dec 2024 21:08:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 21:08:45 +0000   Wed, 04 Dec 2024 21:08:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 21:08:45 +0000   Wed, 04 Dec 2024 21:08:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.167
	  Hostname:    pause-998149
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 f85cb28c4cff47e68afdc5807112b2db
	  System UUID:                f85cb28c-4cff-47e6-8afd-c5807112b2db
	  Boot ID:                    4186bd33-7707-490f-85b7-576317de36f8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-26bcn                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10s
	  kube-system                 coredns-7c65d6cfc9-kfdvp                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10s
	  kube-system                 etcd-pause-998149                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         16s
	  kube-system                 kube-apiserver-pause-998149             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16s
	  kube-system                 kube-controller-manager-pause-998149    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16s
	  kube-system                 kube-proxy-7pttk                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 kube-scheduler-pause-998149             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (12%)  340Mi (17%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 8s    kube-proxy       
	  Normal  Starting                 16s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16s   kubelet          Node pause-998149 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16s   kubelet          Node pause-998149 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16s   kubelet          Node pause-998149 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           11s   node-controller  Node pause-998149 event: Registered Node pause-998149 in Controller
	
	
	==> dmesg <==
	[  +0.105855] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.243118] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +3.889594] systemd-fstab-generator[747]: Ignoring "noauto" option for root device
	[  +3.821156] systemd-fstab-generator[875]: Ignoring "noauto" option for root device
	[  +0.059696] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.973866] systemd-fstab-generator[1210]: Ignoring "noauto" option for root device
	[  +0.074340] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.786369] systemd-fstab-generator[1343]: Ignoring "noauto" option for root device
	[  +0.793522] kauditd_printk_skb: 43 callbacks suppressed
	[Dec 4 21:02] kauditd_printk_skb: 49 callbacks suppressed
	[  +0.996985] systemd-fstab-generator[2461]: Ignoring "noauto" option for root device
	[  +0.220561] systemd-fstab-generator[2473]: Ignoring "noauto" option for root device
	[  +0.284199] systemd-fstab-generator[2524]: Ignoring "noauto" option for root device
	[  +0.197846] systemd-fstab-generator[2557]: Ignoring "noauto" option for root device
	[  +0.473800] systemd-fstab-generator[2589]: Ignoring "noauto" option for root device
	[Dec 4 21:03] systemd-fstab-generator[2843]: Ignoring "noauto" option for root device
	[  +0.097288] kauditd_printk_skb: 174 callbacks suppressed
	[  +2.195757] systemd-fstab-generator[2965]: Ignoring "noauto" option for root device
	[Dec 4 21:04] kauditd_printk_skb: 84 callbacks suppressed
	[ +54.873498] kauditd_printk_skb: 18 callbacks suppressed
	[Dec 4 21:08] systemd-fstab-generator[4052]: Ignoring "noauto" option for root device
	[  +6.051388] systemd-fstab-generator[4378]: Ignoring "noauto" option for root device
	[  +0.087613] kauditd_printk_skb: 68 callbacks suppressed
	[  +5.218614] systemd-fstab-generator[4488]: Ignoring "noauto" option for root device
	[  +0.094707] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [53022095ca4eb2fe9e14b6a57600ca675b121bfcb657b57891a14209084ad442] <==
	{"level":"info","ts":"2024-12-04T21:08:29.952851Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-04T21:08:29.953526Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"7664cfa1ff0dacf1","initial-advertise-peer-urls":["https://192.168.50.167:2380"],"listen-peer-urls":["https://192.168.50.167:2380"],"advertise-client-urls":["https://192.168.50.167:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.167:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-04T21:08:29.953584Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-04T21:08:29.953710Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.167:2380"}
	{"level":"info","ts":"2024-12-04T21:08:29.953739Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.167:2380"}
	{"level":"info","ts":"2024-12-04T21:08:30.591242Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7664cfa1ff0dacf1 is starting a new election at term 1"}
	{"level":"info","ts":"2024-12-04T21:08:30.591340Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7664cfa1ff0dacf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-12-04T21:08:30.591386Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7664cfa1ff0dacf1 received MsgPreVoteResp from 7664cfa1ff0dacf1 at term 1"}
	{"level":"info","ts":"2024-12-04T21:08:30.591421Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7664cfa1ff0dacf1 became candidate at term 2"}
	{"level":"info","ts":"2024-12-04T21:08:30.591445Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7664cfa1ff0dacf1 received MsgVoteResp from 7664cfa1ff0dacf1 at term 2"}
	{"level":"info","ts":"2024-12-04T21:08:30.591472Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7664cfa1ff0dacf1 became leader at term 2"}
	{"level":"info","ts":"2024-12-04T21:08:30.591497Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7664cfa1ff0dacf1 elected leader 7664cfa1ff0dacf1 at term 2"}
	{"level":"info","ts":"2024-12-04T21:08:30.595403Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7664cfa1ff0dacf1","local-member-attributes":"{Name:pause-998149 ClientURLs:[https://192.168.50.167:2379]}","request-path":"/0/members/7664cfa1ff0dacf1/attributes","cluster-id":"d7e89ab1d6ffbfaa","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-04T21:08:30.595520Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-04T21:08:30.595566Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-04T21:08:30.599220Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-04T21:08:30.599255Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-04T21:08:30.595591Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-04T21:08:30.601443Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d7e89ab1d6ffbfaa","local-member-id":"7664cfa1ff0dacf1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-04T21:08:30.601563Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-04T21:08:30.601608Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-04T21:08:30.601914Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-04T21:08:30.604389Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-04T21:08:30.605080Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.167:2379"}
	{"level":"info","ts":"2024-12-04T21:08:30.607746Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 21:08:50 up 7 min,  0 users,  load average: 1.33, 0.44, 0.19
	Linux pause-998149 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6c9becd8f0cc465da0169f24117ca9b03d91d8a4e9e2746053330520e8a1d0d7] <==
	I1204 21:08:32.377326       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1204 21:08:32.377819       1 controller.go:615] quota admission added evaluator for: namespaces
	E1204 21:08:32.381742       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1204 21:08:32.387208       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1204 21:08:32.387305       1 aggregator.go:171] initial CRD sync complete...
	I1204 21:08:32.387315       1 autoregister_controller.go:144] Starting autoregister controller
	I1204 21:08:32.387329       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1204 21:08:32.387336       1 cache.go:39] Caches are synced for autoregister controller
	I1204 21:08:32.399753       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1204 21:08:32.586119       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1204 21:08:33.183243       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1204 21:08:33.189114       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1204 21:08:33.189125       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1204 21:08:33.739299       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1204 21:08:33.793371       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1204 21:08:33.886646       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1204 21:08:33.893690       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.50.167]
	I1204 21:08:33.894716       1 controller.go:615] quota admission added evaluator for: endpoints
	I1204 21:08:33.900547       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1204 21:08:34.281791       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1204 21:08:34.776704       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1204 21:08:34.789333       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1204 21:08:34.800622       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1204 21:08:39.661819       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1204 21:08:39.789132       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-apiserver [ebbf0704fb917629adae45d88b595b95bada24ec4c33263876e5738c9467a723] <==
	I1204 21:08:16.695805       1 crdregistration_controller.go:145] Shutting down crd-autoregister controller
	I1204 21:08:16.695868       1 apiservice_controller.go:134] Shutting down APIServiceRegistrationController
	I1204 21:08:16.695889       1 remote_available_controller.go:427] Shutting down RemoteAvailability controller
	I1204 21:08:16.695917       1 system_namespaces_controller.go:76] Shutting down system namespaces controller
	I1204 21:08:16.695965       1 apf_controller.go:389] Shutting down API Priority and Fairness config worker
	I1204 21:08:16.696038       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I1204 21:08:16.696066       1 customresource_discovery_controller.go:328] Shutting down DiscoveryController
	I1204 21:08:16.696357       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I1204 21:08:16.696611       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1204 21:08:16.696864       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1204 21:08:16.696984       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I1204 21:08:16.697030       1 controller.go:84] Shutting down OpenAPI AggregationController
	I1204 21:08:16.697278       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I1204 21:08:16.697397       1 secure_serving.go:258] Stopped listening on [::]:8443
	I1204 21:08:16.697430       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1204 21:08:16.695409       1 apiapproval_controller.go:201] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I1204 21:08:16.697788       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I1204 21:08:16.695453       1 nonstructuralschema_controller.go:207] Shutting down NonStructuralSchemaConditionController
	I1204 21:08:16.698074       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1204 21:08:16.698531       1 controller.go:157] Shutting down quota evaluator
	I1204 21:08:16.698670       1 controller.go:176] quota evaluator worker shutdown
	I1204 21:08:16.699289       1 controller.go:176] quota evaluator worker shutdown
	I1204 21:08:16.699517       1 controller.go:176] quota evaluator worker shutdown
	I1204 21:08:16.699511       1 controller.go:176] quota evaluator worker shutdown
	I1204 21:08:16.699664       1 controller.go:176] quota evaluator worker shutdown
	
	
	==> kube-controller-manager [46cc096ccdfdb089a2f1623d27802b3d52290bff0c58ee639e7bbfeae9795497] <==
	I1204 21:08:39.040598       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="pause-998149"
	I1204 21:08:39.095520       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1204 21:08:39.133398       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1204 21:08:39.151405       1 shared_informer.go:320] Caches are synced for resource quota
	I1204 21:08:39.178835       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I1204 21:08:39.231447       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I1204 21:08:39.235165       1 shared_informer.go:320] Caches are synced for resource quota
	I1204 21:08:39.281139       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I1204 21:08:39.281276       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1204 21:08:39.281375       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I1204 21:08:39.281289       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I1204 21:08:39.661129       1 shared_informer.go:320] Caches are synced for garbage collector
	I1204 21:08:39.720228       1 shared_informer.go:320] Caches are synced for garbage collector
	I1204 21:08:39.720354       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1204 21:08:39.880217       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="pause-998149"
	I1204 21:08:40.175255       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="366.553866ms"
	I1204 21:08:40.221443       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="45.99712ms"
	I1204 21:08:40.221544       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="50.55µs"
	I1204 21:08:41.809848       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="62.025µs"
	I1204 21:08:41.856955       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="45.947µs"
	I1204 21:08:43.189717       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="18.235724ms"
	I1204 21:08:43.190874       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="40.997µs"
	I1204 21:08:44.340041       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="10.885547ms"
	I1204 21:08:44.340105       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="42.056µs"
	I1204 21:08:45.273879       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="pause-998149"
	
	
	==> kube-proxy [3fc6609ca27763570f8ad2da7d746c3d800ac2515b1be31fd14eb994bd3b65ab] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1204 21:08:41.800468       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1204 21:08:41.831402       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.167"]
	E1204 21:08:41.831615       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1204 21:08:41.893545       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1204 21:08:41.893603       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1204 21:08:41.893649       1 server_linux.go:169] "Using iptables Proxier"
	I1204 21:08:41.896042       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1204 21:08:41.896432       1 server.go:483] "Version info" version="v1.31.2"
	I1204 21:08:41.896454       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 21:08:41.897946       1 config.go:199] "Starting service config controller"
	I1204 21:08:41.898001       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1204 21:08:41.898063       1 config.go:105] "Starting endpoint slice config controller"
	I1204 21:08:41.898079       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1204 21:08:41.900456       1 config.go:328] "Starting node config controller"
	I1204 21:08:41.900597       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1204 21:08:41.998796       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1204 21:08:41.998928       1 shared_informer.go:320] Caches are synced for service config
	I1204 21:08:42.000678       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [bb28c2641a67b3f413988577a36458abd1e8bf77142c0c9fc2364e8db36c6575] <==
	E1204 21:08:32.329546       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 21:08:32.327918       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1204 21:08:32.329597       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 21:08:32.327943       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1204 21:08:32.329650       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 21:08:32.328011       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1204 21:08:32.329702       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	E1204 21:08:32.329022       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 21:08:33.168677       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1204 21:08:33.168713       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 21:08:33.200983       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1204 21:08:33.201084       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 21:08:33.283491       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1204 21:08:33.283741       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1204 21:08:33.336277       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1204 21:08:33.336445       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 21:08:33.373267       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1204 21:08:33.373418       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 21:08:33.415390       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1204 21:08:33.415539       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 21:08:33.460437       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1204 21:08:33.460530       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 21:08:33.604498       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1204 21:08:33.604878       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1204 21:08:35.314218       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 04 21:08:35 pause-998149 kubelet[4385]: I1204 21:08:35.843718    4385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-998149" podStartSLOduration=1.843700901 podStartE2EDuration="1.843700901s" podCreationTimestamp="2024-12-04 21:08:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-04 21:08:35.829562068 +0000 UTC m=+1.278621107" watchObservedRunningTime="2024-12-04 21:08:35.843700901 +0000 UTC m=+1.292759940"
	Dec 04 21:08:39 pause-998149 kubelet[4385]: W1204 21:08:39.716596    4385 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:pause-998149" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'pause-998149' and this object
	Dec 04 21:08:39 pause-998149 kubelet[4385]: E1204 21:08:39.716648    4385 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-998149\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-998149' and this object" logger="UnhandledError"
	Dec 04 21:08:39 pause-998149 kubelet[4385]: W1204 21:08:39.716717    4385 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:pause-998149" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'pause-998149' and this object
	Dec 04 21:08:39 pause-998149 kubelet[4385]: E1204 21:08:39.716727    4385 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-998149\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-998149' and this object" logger="UnhandledError"
	Dec 04 21:08:39 pause-998149 kubelet[4385]: I1204 21:08:39.725038    4385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t2g4\" (UniqueName: \"kubernetes.io/projected/b9aa9037-580d-4c10-ba44-1e3925516a2a-kube-api-access-9t2g4\") pod \"kube-proxy-7pttk\" (UID: \"b9aa9037-580d-4c10-ba44-1e3925516a2a\") " pod="kube-system/kube-proxy-7pttk"
	Dec 04 21:08:39 pause-998149 kubelet[4385]: I1204 21:08:39.725083    4385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b9aa9037-580d-4c10-ba44-1e3925516a2a-xtables-lock\") pod \"kube-proxy-7pttk\" (UID: \"b9aa9037-580d-4c10-ba44-1e3925516a2a\") " pod="kube-system/kube-proxy-7pttk"
	Dec 04 21:08:39 pause-998149 kubelet[4385]: I1204 21:08:39.725100    4385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b9aa9037-580d-4c10-ba44-1e3925516a2a-kube-proxy\") pod \"kube-proxy-7pttk\" (UID: \"b9aa9037-580d-4c10-ba44-1e3925516a2a\") " pod="kube-system/kube-proxy-7pttk"
	Dec 04 21:08:39 pause-998149 kubelet[4385]: I1204 21:08:39.725120    4385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9aa9037-580d-4c10-ba44-1e3925516a2a-lib-modules\") pod \"kube-proxy-7pttk\" (UID: \"b9aa9037-580d-4c10-ba44-1e3925516a2a\") " pod="kube-system/kube-proxy-7pttk"
	Dec 04 21:08:40 pause-998149 kubelet[4385]: I1204 21:08:40.227821    4385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/935bb8b3-28ee-47d5-a525-b5bc7d882a63-config-volume\") pod \"coredns-7c65d6cfc9-kfdvp\" (UID: \"935bb8b3-28ee-47d5-a525-b5bc7d882a63\") " pod="kube-system/coredns-7c65d6cfc9-kfdvp"
	Dec 04 21:08:40 pause-998149 kubelet[4385]: I1204 21:08:40.227967    4385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5953763-c59b-4ca3-9c1f-fc0dbfb8d3f6-config-volume\") pod \"coredns-7c65d6cfc9-26bcn\" (UID: \"c5953763-c59b-4ca3-9c1f-fc0dbfb8d3f6\") " pod="kube-system/coredns-7c65d6cfc9-26bcn"
	Dec 04 21:08:40 pause-998149 kubelet[4385]: I1204 21:08:40.228047    4385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d2xg\" (UniqueName: \"kubernetes.io/projected/935bb8b3-28ee-47d5-a525-b5bc7d882a63-kube-api-access-7d2xg\") pod \"coredns-7c65d6cfc9-kfdvp\" (UID: \"935bb8b3-28ee-47d5-a525-b5bc7d882a63\") " pod="kube-system/coredns-7c65d6cfc9-kfdvp"
	Dec 04 21:08:40 pause-998149 kubelet[4385]: I1204 21:08:40.228092    4385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8vlv\" (UniqueName: \"kubernetes.io/projected/c5953763-c59b-4ca3-9c1f-fc0dbfb8d3f6-kube-api-access-w8vlv\") pod \"coredns-7c65d6cfc9-26bcn\" (UID: \"c5953763-c59b-4ca3-9c1f-fc0dbfb8d3f6\") " pod="kube-system/coredns-7c65d6cfc9-26bcn"
	Dec 04 21:08:40 pause-998149 kubelet[4385]: I1204 21:08:40.692088    4385 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Dec 04 21:08:40 pause-998149 kubelet[4385]: E1204 21:08:40.827302    4385 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Dec 04 21:08:40 pause-998149 kubelet[4385]: E1204 21:08:40.827484    4385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b9aa9037-580d-4c10-ba44-1e3925516a2a-kube-proxy podName:b9aa9037-580d-4c10-ba44-1e3925516a2a nodeName:}" failed. No retries permitted until 2024-12-04 21:08:41.327446969 +0000 UTC m=+6.776505992 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/b9aa9037-580d-4c10-ba44-1e3925516a2a-kube-proxy") pod "kube-proxy-7pttk" (UID: "b9aa9037-580d-4c10-ba44-1e3925516a2a") : failed to sync configmap cache: timed out waiting for the condition
	Dec 04 21:08:41 pause-998149 kubelet[4385]: I1204 21:08:41.839244    4385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7pttk" podStartSLOduration=2.839218391 podStartE2EDuration="2.839218391s" podCreationTimestamp="2024-12-04 21:08:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-04 21:08:41.837170731 +0000 UTC m=+7.286229771" watchObservedRunningTime="2024-12-04 21:08:41.839218391 +0000 UTC m=+7.288277430"
	Dec 04 21:08:41 pause-998149 kubelet[4385]: I1204 21:08:41.839358    4385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-kfdvp" podStartSLOduration=1.8393524270000001 podStartE2EDuration="1.839352427s" podCreationTimestamp="2024-12-04 21:08:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-04 21:08:41.812432298 +0000 UTC m=+7.261491337" watchObservedRunningTime="2024-12-04 21:08:41.839352427 +0000 UTC m=+7.288411461"
	Dec 04 21:08:41 pause-998149 kubelet[4385]: I1204 21:08:41.856356    4385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-26bcn" podStartSLOduration=1.856336907 podStartE2EDuration="1.856336907s" podCreationTimestamp="2024-12-04 21:08:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-04 21:08:41.856092846 +0000 UTC m=+7.305151889" watchObservedRunningTime="2024-12-04 21:08:41.856336907 +0000 UTC m=+7.305395946"
	Dec 04 21:08:43 pause-998149 kubelet[4385]: I1204 21:08:43.152737    4385 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 04 21:08:44 pause-998149 kubelet[4385]: I1204 21:08:44.312386    4385 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 04 21:08:44 pause-998149 kubelet[4385]: E1204 21:08:44.753825    4385 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733346524753508033,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:08:44 pause-998149 kubelet[4385]: E1204 21:08:44.753851    4385 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733346524753508033,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:08:45 pause-998149 kubelet[4385]: I1204 21:08:45.250169    4385 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 04 21:08:45 pause-998149 kubelet[4385]: I1204 21:08:45.251448    4385 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	

                                                
                                                
-- /stdout --
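The diagnostic sections above (container status, describe nodes, dmesg, component logs) were captured by the post-mortem helpers via "minikube logs". A rough way to re-collect the same diagnostics by hand against this profile, using only the commands already shown in this report's helper output, would be:

    out/minikube-linux-amd64 -p pause-998149 logs -n 25
    out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-998149 -n pause-998149
    kubectl --context pause-998149 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running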
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-998149 -n pause-998149
helpers_test.go:261: (dbg) Run:  kubectl --context pause-998149 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-998149 -n pause-998149
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-998149 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-998149 logs -n 25: (1.207831768s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|--------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |      Profile       |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|--------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-272234 sudo                                | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | systemctl status kubelet --all                       |                    |         |         |                     |                     |
	|         | --full --no-pager                                    |                    |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo                                | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | systemctl cat kubelet                                |                    |         |         |                     |                     |
	|         | --no-pager                                           |                    |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo                                | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | journalctl -xeu kubelet --all                        |                    |         |         |                     |                     |
	|         | --full --no-pager                                    |                    |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo cat                            | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                    |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo cat                            | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                    |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo                                | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC |                     |
	|         | systemctl status docker --all                        |                    |         |         |                     |                     |
	|         | --full --no-pager                                    |                    |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo                                | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | systemctl cat docker                                 |                    |         |         |                     |                     |
	|         | --no-pager                                           |                    |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo cat                            | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | /etc/docker/daemon.json                              |                    |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo docker                         | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC |                     |
	|         | system info                                          |                    |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo                                | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC |                     |
	|         | systemctl status cri-docker                          |                    |         |         |                     |                     |
	|         | --all --full --no-pager                              |                    |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo                                | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | systemctl cat cri-docker                             |                    |         |         |                     |                     |
	|         | --no-pager                                           |                    |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo cat                            | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                    |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo cat                            | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                    |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo                                | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | cri-dockerd --version                                |                    |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo                                | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC |                     |
	|         | systemctl status containerd                          |                    |         |         |                     |                     |
	|         | --all --full --no-pager                              |                    |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo                                | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | systemctl cat containerd                             |                    |         |         |                     |                     |
	|         | --no-pager                                           |                    |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo cat                            | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | /lib/systemd/system/containerd.service               |                    |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo cat                            | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | /etc/containerd/config.toml                          |                    |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo                                | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | containerd config dump                               |                    |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo                                | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | systemctl status crio --all                          |                    |         |         |                     |                     |
	|         | --full --no-pager                                    |                    |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo                                | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | systemctl cat crio --no-pager                        |                    |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo find                           | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                    |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                    |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo crio                           | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | config                                               |                    |         |         |                     |                     |
	| delete  | -p bridge-272234                                     | bridge-272234      | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	| start   | -p embed-certs-566991                                | embed-certs-566991 | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC |                     |
	|         | --memory=2200                                        |                    |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                    |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                          |                    |         |         |                     |                     |
	|         |  --container-runtime=crio                            |                    |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                         |                    |         |         |                     |                     |
	|---------|------------------------------------------------------|--------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 21:07:32
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 21:07:32.271420   72678 out.go:345] Setting OutFile to fd 1 ...
	I1204 21:07:32.271650   72678 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 21:07:32.271658   72678 out.go:358] Setting ErrFile to fd 2...
	I1204 21:07:32.271663   72678 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 21:07:32.271853   72678 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19985-10581/.minikube/bin
	I1204 21:07:32.272400   72678 out.go:352] Setting JSON to false
	I1204 21:07:32.273407   72678 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6602,"bootTime":1733339850,"procs":304,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 21:07:32.273501   72678 start.go:139] virtualization: kvm guest
	I1204 21:07:32.275806   72678 out.go:177] * [embed-certs-566991] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1204 21:07:32.277553   72678 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 21:07:32.277560   72678 notify.go:220] Checking for updates...
	I1204 21:07:32.280428   72678 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 21:07:32.281753   72678 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:07:32.283168   72678 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 21:07:32.284464   72678 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1204 21:07:32.285658   72678 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 21:07:32.287197   72678 config.go:182] Loaded profile config "no-preload-534766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:07:32.287322   72678 config.go:182] Loaded profile config "old-k8s-version-082859": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1204 21:07:32.287476   72678 config.go:182] Loaded profile config "pause-998149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:07:32.287586   72678 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 21:07:32.324819   72678 out.go:177] * Using the kvm2 driver based on user configuration
	I1204 21:07:32.326107   72678 start.go:297] selected driver: kvm2
	I1204 21:07:32.326126   72678 start.go:901] validating driver "kvm2" against <nil>
	I1204 21:07:32.326140   72678 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 21:07:32.326855   72678 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 21:07:32.326930   72678 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19985-10581/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1204 21:07:32.341855   72678 install.go:137] /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1204 21:07:32.341893   72678 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 21:07:32.342209   72678 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 21:07:32.342243   72678 cni.go:84] Creating CNI manager for ""
	I1204 21:07:32.342302   72678 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:07:32.342318   72678 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1204 21:07:32.342385   72678 start.go:340] cluster config:
	{Name:embed-certs-566991 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-566991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:07:32.342514   72678 iso.go:125] acquiring lock: {Name:mk5fb0f3f6da76e6cd812291a551e1592ef2c232 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 21:07:32.344321   72678 out.go:177] * Starting "embed-certs-566991" primary control-plane node in "embed-certs-566991" cluster
	I1204 21:07:32.345628   72678 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 21:07:32.345658   72678 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1204 21:07:32.345666   72678 cache.go:56] Caching tarball of preloaded images
	I1204 21:07:32.345793   72678 preload.go:172] Found /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 21:07:32.345808   72678 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 21:07:32.345929   72678 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/config.json ...
	I1204 21:07:32.345954   72678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/config.json: {Name:mkfcf7510ce9165fe8f524a3bbc4d0f339bc083d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:07:32.346108   72678 start.go:360] acquireMachinesLock for embed-certs-566991: {Name:mkf124e8b45170ae95981b24944344de6899c5b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 21:07:32.346152   72678 start.go:364] duration metric: took 26.779µs to acquireMachinesLock for "embed-certs-566991"
	I1204 21:07:32.346185   72678 start.go:93] Provisioning new machine with config: &{Name:embed-certs-566991 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-566991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 21:07:32.346251   72678 start.go:125] createHost starting for "" (driver="kvm2")
	I1204 21:07:32.237793   71124 main.go:141] libmachine: (no-preload-534766) Calling .GetIP
	I1204 21:07:32.240739   71124 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:07:32.241241   71124 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:07:21 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:07:32.241272   71124 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:07:32.241443   71124 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1204 21:07:32.245656   71124 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:07:32.257424   71124 kubeadm.go:883] updating cluster {Name:no-preload-534766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-534766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.174 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 21:07:32.257520   71124 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 21:07:32.257571   71124 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:07:32.290226   71124 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1204 21:07:32.290250   71124 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1204 21:07:32.290302   71124 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:07:32.290340   71124 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:07:32.290363   71124 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:07:32.290384   71124 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:07:32.290403   71124 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1204 21:07:32.290526   71124 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:07:32.290556   71124 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:07:32.290522   71124 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1204 21:07:32.291965   71124 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:07:32.291974   71124 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:07:32.292041   71124 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1204 21:07:32.292054   71124 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:07:32.292051   71124 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:07:32.292202   71124 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1204 21:07:32.292377   71124 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:07:32.292978   71124 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:07:32.443658   71124 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:07:32.454420   71124 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:07:32.472092   71124 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:07:32.479946   71124 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:07:32.493775   71124 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1204 21:07:32.502953   71124 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:07:32.509238   71124 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1204 21:07:32.509281   71124 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:07:32.509329   71124 ssh_runner.go:195] Run: which crictl
	I1204 21:07:32.522330   71124 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1204 21:07:32.522377   71124 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:07:32.522427   71124 ssh_runner.go:195] Run: which crictl
	I1204 21:07:32.524563   71124 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1204 21:07:32.574223   71124 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1204 21:07:32.574282   71124 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:07:32.574340   71124 ssh_runner.go:195] Run: which crictl
	I1204 21:07:32.598135   71124 cache_images.go:116] "registry.k8s.io/pause:3.10" needs transfer: "registry.k8s.io/pause:3.10" does not exist at hash "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136" in container runtime
	I1204 21:07:32.598177   71124 cri.go:218] Removing image: registry.k8s.io/pause:3.10
	I1204 21:07:32.598223   71124 ssh_runner.go:195] Run: which crictl
	I1204 21:07:32.598243   71124 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1204 21:07:32.598276   71124 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:07:32.598321   71124 ssh_runner.go:195] Run: which crictl
	I1204 21:07:32.623468   71124 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1204 21:07:32.623511   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:07:32.623522   71124 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:07:32.623561   71124 ssh_runner.go:195] Run: which crictl
	I1204 21:07:32.623569   71124 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1204 21:07:32.623600   71124 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1204 21:07:32.623609   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:07:32.623630   71124 ssh_runner.go:195] Run: which crictl
	I1204 21:07:32.623623   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10
	I1204 21:07:32.623511   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:07:32.623657   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:07:32.720574   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:07:32.739257   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:07:32.739295   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:07:32.739344   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:07:32.739393   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1204 21:07:32.739424   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:07:32.739501   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10
	I1204 21:07:32.789308   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:07:32.876071   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10
	I1204 21:07:32.876081   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:07:32.906987   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:07:32.907023   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:07:32.907050   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:07:32.907163   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1204 21:07:32.960642   71124 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1204 21:07:32.960772   71124 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1204 21:07:32.967765   71124 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1204 21:07:32.967851   71124 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1204 21:07:32.967895   71124 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I1204 21:07:32.967986   71124 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10
	I1204 21:07:33.021320   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1204 21:07:33.027548   71124 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1204 21:07:33.027577   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:07:33.027609   71124 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1204 21:07:33.027632   71124 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1204 21:07:33.027638   71124 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.11.3: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.11.3': No such file or directory
	I1204 21:07:33.027669   71124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 --> /var/lib/minikube/images/coredns_v1.11.3 (18571264 bytes)
	I1204 21:07:33.027690   71124 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1204 21:07:33.027696   71124 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.31.2: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.31.2': No such file or directory
	I1204 21:07:33.027709   71124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 --> /var/lib/minikube/images/kube-scheduler_v1.31.2 (20112896 bytes)
	I1204 21:07:33.027749   71124 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10: stat -c "%s %y" /var/lib/minikube/images/pause_3.10: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10': No such file or directory
	I1204 21:07:33.027762   71124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 --> /var/lib/minikube/images/pause_3.10 (321024 bytes)
	I1204 21:07:33.127291   71124 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1204 21:07:33.127367   71124 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.31.2: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.31.2': No such file or directory
	I1204 21:07:33.127415   71124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 --> /var/lib/minikube/images/kube-controller-manager_v1.31.2 (26157056 bytes)
	I1204 21:07:33.127309   71124 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1204 21:07:33.127431   71124 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.31.2: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.31.2': No such file or directory
	I1204 21:07:33.127452   71124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 --> /var/lib/minikube/images/kube-apiserver_v1.31.2 (27981824 bytes)
	I1204 21:07:33.127463   71124 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1204 21:07:33.127545   71124 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1204 21:07:33.164482   71124 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10
	I1204 21:07:33.164532   71124 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10
	I1204 21:07:33.228772   71124 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:07:33.242499   71124 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.15-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.15-0': No such file or directory
	I1204 21:07:33.242550   71124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 --> /var/lib/minikube/images/etcd_3.5.15-0 (56918528 bytes)
	I1204 21:07:33.242554   71124 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.31.2: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.31.2': No such file or directory
	I1204 21:07:33.242583   71124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 --> /var/lib/minikube/images/kube-proxy_v1.31.2 (30228480 bytes)
	I1204 21:07:33.629094   71124 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 from cache
	I1204 21:07:33.629131   71124 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1204 21:07:33.629232   71124 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:07:33.629313   71124 ssh_runner.go:195] Run: which crictl
	I1204 21:07:33.693975   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:07:33.795230   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:07:33.863790   71124 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:07:33.919193   71124 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1204 21:07:33.919266   71124 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1204 21:07:33.951248   71124 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1204 21:07:33.951356   71124 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1204 21:07:31.982117   59256 pod_ready.go:103] pod "kube-proxy-dbc82" in "kube-system" namespace has status "Ready":"False"
	I1204 21:07:33.983468   59256 pod_ready.go:103] pod "kube-proxy-dbc82" in "kube-system" namespace has status "Ready":"False"
	I1204 21:07:36.484958   59256 pod_ready.go:103] pod "kube-proxy-dbc82" in "kube-system" namespace has status "Ready":"False"
	I1204 21:07:32.347922   72678 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 21:07:32.348035   72678 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:07:32.348072   72678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:07:32.363698   72678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40877
	I1204 21:07:32.364094   72678 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:07:32.364666   72678 main.go:141] libmachine: Using API Version  1
	I1204 21:07:32.364693   72678 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:07:32.365010   72678 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:07:32.365218   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetMachineName
	I1204 21:07:32.365377   72678 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:07:32.365522   72678 start.go:159] libmachine.API.Create for "embed-certs-566991" (driver="kvm2")
	I1204 21:07:32.365553   72678 client.go:168] LocalClient.Create starting
	I1204 21:07:32.365583   72678 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem
	I1204 21:07:32.365649   72678 main.go:141] libmachine: Decoding PEM data...
	I1204 21:07:32.365677   72678 main.go:141] libmachine: Parsing certificate...
	I1204 21:07:32.365735   72678 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem
	I1204 21:07:32.365765   72678 main.go:141] libmachine: Decoding PEM data...
	I1204 21:07:32.365786   72678 main.go:141] libmachine: Parsing certificate...
	I1204 21:07:32.365810   72678 main.go:141] libmachine: Running pre-create checks...
	I1204 21:07:32.365822   72678 main.go:141] libmachine: (embed-certs-566991) Calling .PreCreateCheck
	I1204 21:07:32.366166   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetConfigRaw
	I1204 21:07:32.366581   72678 main.go:141] libmachine: Creating machine...
	I1204 21:07:32.366597   72678 main.go:141] libmachine: (embed-certs-566991) Calling .Create
	I1204 21:07:32.366707   72678 main.go:141] libmachine: (embed-certs-566991) Creating KVM machine...
	I1204 21:07:32.367974   72678 main.go:141] libmachine: (embed-certs-566991) DBG | found existing default KVM network
	I1204 21:07:32.369822   72678 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:07:32.369671   72701 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012ffb0}
	I1204 21:07:32.369845   72678 main.go:141] libmachine: (embed-certs-566991) DBG | created network xml: 
	I1204 21:07:32.369858   72678 main.go:141] libmachine: (embed-certs-566991) DBG | <network>
	I1204 21:07:32.369870   72678 main.go:141] libmachine: (embed-certs-566991) DBG |   <name>mk-embed-certs-566991</name>
	I1204 21:07:32.369882   72678 main.go:141] libmachine: (embed-certs-566991) DBG |   <dns enable='no'/>
	I1204 21:07:32.369886   72678 main.go:141] libmachine: (embed-certs-566991) DBG |   
	I1204 21:07:32.369893   72678 main.go:141] libmachine: (embed-certs-566991) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1204 21:07:32.369898   72678 main.go:141] libmachine: (embed-certs-566991) DBG |     <dhcp>
	I1204 21:07:32.369908   72678 main.go:141] libmachine: (embed-certs-566991) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1204 21:07:32.369918   72678 main.go:141] libmachine: (embed-certs-566991) DBG |     </dhcp>
	I1204 21:07:32.369948   72678 main.go:141] libmachine: (embed-certs-566991) DBG |   </ip>
	I1204 21:07:32.369963   72678 main.go:141] libmachine: (embed-certs-566991) DBG |   
	I1204 21:07:32.369968   72678 main.go:141] libmachine: (embed-certs-566991) DBG | </network>
	I1204 21:07:32.369972   72678 main.go:141] libmachine: (embed-certs-566991) DBG | 
	I1204 21:07:32.375270   72678 main.go:141] libmachine: (embed-certs-566991) DBG | trying to create private KVM network mk-embed-certs-566991 192.168.39.0/24...
	I1204 21:07:32.448755   72678 main.go:141] libmachine: (embed-certs-566991) DBG | private KVM network mk-embed-certs-566991 192.168.39.0/24 created
	I1204 21:07:32.448810   72678 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:07:32.448694   72701 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 21:07:32.448835   72678 main.go:141] libmachine: (embed-certs-566991) Setting up store path in /home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991 ...
	I1204 21:07:32.448851   72678 main.go:141] libmachine: (embed-certs-566991) Building disk image from file:///home/jenkins/minikube-integration/19985-10581/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1204 21:07:32.448876   72678 main.go:141] libmachine: (embed-certs-566991) Downloading /home/jenkins/minikube-integration/19985-10581/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19985-10581/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1204 21:07:32.696970   72678 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:07:32.696810   72701 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa...
	I1204 21:07:32.817894   72678 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:07:32.817683   72701 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/embed-certs-566991.rawdisk...
	I1204 21:07:32.817928   72678 main.go:141] libmachine: (embed-certs-566991) DBG | Writing magic tar header
	I1204 21:07:32.817966   72678 main.go:141] libmachine: (embed-certs-566991) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991 (perms=drwx------)
	I1204 21:07:32.817986   72678 main.go:141] libmachine: (embed-certs-566991) DBG | Writing SSH key tar header
	I1204 21:07:32.817997   72678 main.go:141] libmachine: (embed-certs-566991) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube/machines (perms=drwxr-xr-x)
	I1204 21:07:32.818013   72678 main.go:141] libmachine: (embed-certs-566991) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube (perms=drwxr-xr-x)
	I1204 21:07:32.818024   72678 main.go:141] libmachine: (embed-certs-566991) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581 (perms=drwxrwxr-x)
	I1204 21:07:32.818041   72678 main.go:141] libmachine: (embed-certs-566991) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1204 21:07:32.818052   72678 main.go:141] libmachine: (embed-certs-566991) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1204 21:07:32.818062   72678 main.go:141] libmachine: (embed-certs-566991) Creating domain...
	I1204 21:07:32.818088   72678 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:07:32.817799   72701 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991 ...
	I1204 21:07:32.818108   72678 main.go:141] libmachine: (embed-certs-566991) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991
	I1204 21:07:32.818118   72678 main.go:141] libmachine: (embed-certs-566991) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube/machines
	I1204 21:07:32.818130   72678 main.go:141] libmachine: (embed-certs-566991) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 21:07:32.818142   72678 main.go:141] libmachine: (embed-certs-566991) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581
	I1204 21:07:32.818152   72678 main.go:141] libmachine: (embed-certs-566991) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1204 21:07:32.818164   72678 main.go:141] libmachine: (embed-certs-566991) DBG | Checking permissions on dir: /home/jenkins
	I1204 21:07:32.818172   72678 main.go:141] libmachine: (embed-certs-566991) DBG | Checking permissions on dir: /home
	I1204 21:07:32.818181   72678 main.go:141] libmachine: (embed-certs-566991) DBG | Skipping /home - not owner
	I1204 21:07:32.819548   72678 main.go:141] libmachine: (embed-certs-566991) define libvirt domain using xml: 
	I1204 21:07:32.819574   72678 main.go:141] libmachine: (embed-certs-566991) <domain type='kvm'>
	I1204 21:07:32.819605   72678 main.go:141] libmachine: (embed-certs-566991)   <name>embed-certs-566991</name>
	I1204 21:07:32.819629   72678 main.go:141] libmachine: (embed-certs-566991)   <memory unit='MiB'>2200</memory>
	I1204 21:07:32.819639   72678 main.go:141] libmachine: (embed-certs-566991)   <vcpu>2</vcpu>
	I1204 21:07:32.819646   72678 main.go:141] libmachine: (embed-certs-566991)   <features>
	I1204 21:07:32.819659   72678 main.go:141] libmachine: (embed-certs-566991)     <acpi/>
	I1204 21:07:32.819667   72678 main.go:141] libmachine: (embed-certs-566991)     <apic/>
	I1204 21:07:32.819675   72678 main.go:141] libmachine: (embed-certs-566991)     <pae/>
	I1204 21:07:32.819692   72678 main.go:141] libmachine: (embed-certs-566991)     
	I1204 21:07:32.819705   72678 main.go:141] libmachine: (embed-certs-566991)   </features>
	I1204 21:07:32.819713   72678 main.go:141] libmachine: (embed-certs-566991)   <cpu mode='host-passthrough'>
	I1204 21:07:32.819741   72678 main.go:141] libmachine: (embed-certs-566991)   
	I1204 21:07:32.819763   72678 main.go:141] libmachine: (embed-certs-566991)   </cpu>
	I1204 21:07:32.819775   72678 main.go:141] libmachine: (embed-certs-566991)   <os>
	I1204 21:07:32.819786   72678 main.go:141] libmachine: (embed-certs-566991)     <type>hvm</type>
	I1204 21:07:32.819796   72678 main.go:141] libmachine: (embed-certs-566991)     <boot dev='cdrom'/>
	I1204 21:07:32.819803   72678 main.go:141] libmachine: (embed-certs-566991)     <boot dev='hd'/>
	I1204 21:07:32.819815   72678 main.go:141] libmachine: (embed-certs-566991)     <bootmenu enable='no'/>
	I1204 21:07:32.819825   72678 main.go:141] libmachine: (embed-certs-566991)   </os>
	I1204 21:07:32.819834   72678 main.go:141] libmachine: (embed-certs-566991)   <devices>
	I1204 21:07:32.819846   72678 main.go:141] libmachine: (embed-certs-566991)     <disk type='file' device='cdrom'>
	I1204 21:07:32.819863   72678 main.go:141] libmachine: (embed-certs-566991)       <source file='/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/boot2docker.iso'/>
	I1204 21:07:32.819883   72678 main.go:141] libmachine: (embed-certs-566991)       <target dev='hdc' bus='scsi'/>
	I1204 21:07:32.819895   72678 main.go:141] libmachine: (embed-certs-566991)       <readonly/>
	I1204 21:07:32.819902   72678 main.go:141] libmachine: (embed-certs-566991)     </disk>
	I1204 21:07:32.819915   72678 main.go:141] libmachine: (embed-certs-566991)     <disk type='file' device='disk'>
	I1204 21:07:32.819928   72678 main.go:141] libmachine: (embed-certs-566991)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1204 21:07:32.819944   72678 main.go:141] libmachine: (embed-certs-566991)       <source file='/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/embed-certs-566991.rawdisk'/>
	I1204 21:07:32.819954   72678 main.go:141] libmachine: (embed-certs-566991)       <target dev='hda' bus='virtio'/>
	I1204 21:07:32.819971   72678 main.go:141] libmachine: (embed-certs-566991)     </disk>
	I1204 21:07:32.819988   72678 main.go:141] libmachine: (embed-certs-566991)     <interface type='network'>
	I1204 21:07:32.820010   72678 main.go:141] libmachine: (embed-certs-566991)       <source network='mk-embed-certs-566991'/>
	I1204 21:07:32.820025   72678 main.go:141] libmachine: (embed-certs-566991)       <model type='virtio'/>
	I1204 21:07:32.820037   72678 main.go:141] libmachine: (embed-certs-566991)     </interface>
	I1204 21:07:32.820045   72678 main.go:141] libmachine: (embed-certs-566991)     <interface type='network'>
	I1204 21:07:32.820055   72678 main.go:141] libmachine: (embed-certs-566991)       <source network='default'/>
	I1204 21:07:32.820065   72678 main.go:141] libmachine: (embed-certs-566991)       <model type='virtio'/>
	I1204 21:07:32.820072   72678 main.go:141] libmachine: (embed-certs-566991)     </interface>
	I1204 21:07:32.820081   72678 main.go:141] libmachine: (embed-certs-566991)     <serial type='pty'>
	I1204 21:07:32.820090   72678 main.go:141] libmachine: (embed-certs-566991)       <target port='0'/>
	I1204 21:07:32.820100   72678 main.go:141] libmachine: (embed-certs-566991)     </serial>
	I1204 21:07:32.820109   72678 main.go:141] libmachine: (embed-certs-566991)     <console type='pty'>
	I1204 21:07:32.820127   72678 main.go:141] libmachine: (embed-certs-566991)       <target type='serial' port='0'/>
	I1204 21:07:32.820135   72678 main.go:141] libmachine: (embed-certs-566991)     </console>
	I1204 21:07:32.820141   72678 main.go:141] libmachine: (embed-certs-566991)     <rng model='virtio'>
	I1204 21:07:32.820150   72678 main.go:141] libmachine: (embed-certs-566991)       <backend model='random'>/dev/random</backend>
	I1204 21:07:32.820159   72678 main.go:141] libmachine: (embed-certs-566991)     </rng>
	I1204 21:07:32.820167   72678 main.go:141] libmachine: (embed-certs-566991)     
	I1204 21:07:32.820176   72678 main.go:141] libmachine: (embed-certs-566991)     
	I1204 21:07:32.820184   72678 main.go:141] libmachine: (embed-certs-566991)   </devices>
	I1204 21:07:32.820199   72678 main.go:141] libmachine: (embed-certs-566991) </domain>
	I1204 21:07:32.820223   72678 main.go:141] libmachine: (embed-certs-566991) 
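	
	The domain XML above is handed to libvirt as-is, so if the DHCP wait that follows needs debugging, the created objects can be inspected directly on the host with stock virsh commands. A sketch, assuming the qemu:///system URI used elsewhere in this log; the domain and network names are taken from the XML:
	
	# inspect the libvirt objects defined from the XML above
	virsh --connect qemu:///system net-dumpxml mk-embed-certs-566991   # private network with DHCP range .2-.253
	virsh --connect qemu:///system dumpxml embed-certs-566991          # full domain definition as libvirt stored it
	virsh --connect qemu:///system domiflist embed-certs-566991        # interface MACs to match against DHCP leases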
	I1204 21:07:32.824986   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:01:44:20 in network default
	I1204 21:07:32.825583   72678 main.go:141] libmachine: (embed-certs-566991) Ensuring networks are active...
	I1204 21:07:32.825605   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:32.826327   72678 main.go:141] libmachine: (embed-certs-566991) Ensuring network default is active
	I1204 21:07:32.826686   72678 main.go:141] libmachine: (embed-certs-566991) Ensuring network mk-embed-certs-566991 is active
	I1204 21:07:32.827234   72678 main.go:141] libmachine: (embed-certs-566991) Getting domain xml...
	I1204 21:07:32.827980   72678 main.go:141] libmachine: (embed-certs-566991) Creating domain...
	I1204 21:07:34.265308   72678 main.go:141] libmachine: (embed-certs-566991) Waiting to get IP...
	I1204 21:07:34.266035   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:34.266516   72678 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:07:34.266536   72678 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:07:34.266487   72701 retry.go:31] will retry after 187.54513ms: waiting for machine to come up
	I1204 21:07:34.456171   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:34.456813   72678 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:07:34.456841   72678 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:07:34.456772   72701 retry.go:31] will retry after 265.685765ms: waiting for machine to come up
	I1204 21:07:34.724233   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:34.724780   72678 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:07:34.724821   72678 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:07:34.724747   72701 retry.go:31] will retry after 454.103385ms: waiting for machine to come up
	I1204 21:07:35.180435   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:35.181028   72678 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:07:35.181060   72678 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:07:35.180955   72701 retry.go:31] will retry after 516.483472ms: waiting for machine to come up
	I1204 21:07:35.700245   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:35.700831   72678 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:07:35.700872   72678 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:07:35.700795   72701 retry.go:31] will retry after 472.973695ms: waiting for machine to come up
	I1204 21:07:36.175669   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:36.176290   72678 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:07:36.176344   72678 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:07:36.176250   72701 retry.go:31] will retry after 661.57145ms: waiting for machine to come up
	I1204 21:07:36.839157   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:36.839774   72678 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:07:36.839817   72678 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:07:36.839720   72701 retry.go:31] will retry after 1.143272503s: waiting for machine to come up
	I1204 21:07:35.341378   69222 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1204 21:07:35.341860   69222 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:07:35.342156   69222 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
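	
	The kubelet-check failure above is the installer polling the kubelet health endpoint quoted in the message; the same probe can be run by hand from inside the affected guest. A sketch only: <profile> is a placeholder for whichever profile this kubeadm run belongs to, which this excerpt does not identify.
	
	# manual version of the probe quoted above; <profile> is a placeholder, not taken from the log
	minikube ssh -p <profile> -- curl -sSL http://localhost:10248/healthz
	minikube ssh -p <profile> -- sudo journalctl -u kubelet --no-pager | tail -n 50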
	I1204 21:07:36.299101   71124 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.379808646s)
	I1204 21:07:36.299138   71124 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1204 21:07:36.299173   71124 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1204 21:07:36.299107   71124 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.347725717s)
	I1204 21:07:36.299268   71124 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1204 21:07:36.299228   71124 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1204 21:07:36.299296   71124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1204 21:07:38.482454   71124 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (2.183125029s)
	I1204 21:07:38.482482   71124 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1204 21:07:38.482517   71124 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1204 21:07:38.482586   71124 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1204 21:07:38.981888   59256 pod_ready.go:103] pod "kube-proxy-dbc82" in "kube-system" namespace has status "Ready":"False"
	I1204 21:07:41.482472   59256 pod_ready.go:103] pod "kube-proxy-dbc82" in "kube-system" namespace has status "Ready":"False"
	I1204 21:07:37.984732   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:37.985320   72678 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:07:37.985354   72678 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:07:37.985275   72701 retry.go:31] will retry after 1.37596792s: waiting for machine to come up
	I1204 21:07:39.362607   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:39.363398   72678 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:07:39.363425   72678 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:07:39.363344   72701 retry.go:31] will retry after 1.78102973s: waiting for machine to come up
	I1204 21:07:41.146454   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:41.146959   72678 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:07:41.146995   72678 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:07:41.146930   72701 retry.go:31] will retry after 2.214770481s: waiting for machine to come up
	I1204 21:07:40.342053   69222 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:07:40.342315   69222 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:07:41.653736   71124 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (3.171118725s)
	I1204 21:07:41.653763   71124 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1204 21:07:41.653791   71124 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1204 21:07:41.653845   71124 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1204 21:07:43.721839   71124 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.067964037s)
	I1204 21:07:43.721873   71124 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1204 21:07:43.721904   71124 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1204 21:07:43.721964   71124 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1204 21:07:43.484758   59256 pod_ready.go:103] pod "kube-proxy-dbc82" in "kube-system" namespace has status "Ready":"False"
	I1204 21:07:45.984747   59256 pod_ready.go:103] pod "kube-proxy-dbc82" in "kube-system" namespace has status "Ready":"False"
	I1204 21:07:43.363242   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:43.363776   72678 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:07:43.363800   72678 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:07:43.363739   72701 retry.go:31] will retry after 2.236559271s: waiting for machine to come up
	I1204 21:07:45.603149   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:45.603670   72678 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:07:45.603698   72678 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:07:45.603615   72701 retry.go:31] will retry after 3.480575899s: waiting for machine to come up
	I1204 21:07:45.810467   71124 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.088473048s)
	I1204 21:07:45.810501   71124 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1204 21:07:45.810537   71124 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1204 21:07:45.810592   71124 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1204 21:07:49.313452   71124 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.502829566s)
	I1204 21:07:49.313497   71124 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1204 21:07:49.313535   71124 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1204 21:07:49.313594   71124 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1204 21:07:50.252374   71124 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1204 21:07:50.252416   71124 cache_images.go:123] Successfully loaded all cached images
	I1204 21:07:50.252423   71124 cache_images.go:92] duration metric: took 17.962155884s to LoadCachedImages
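The sequence above (a podman load of each tarball under /var/lib/minikube/images, each followed by "Transferred and loaded ... from cache") is how the cached images reach the CRI-O image store before kubeadm runs; the whole pass took about 18s here. A minimal sketch of that loop, assuming only the commands and paths shown in the log; the helper name is illustrative and this is not minikube's actual implementation:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// loadCachedImages replays the step logged above: each cached image tarball
	// is loaded into the image store with `podman load`, and the per-image
	// duration is reported much like ssh_runner's "Completed" lines.
	func loadCachedImages(tarballs []string) error {
		for _, t := range tarballs {
			start := time.Now()
			out, err := exec.Command("sudo", "podman", "load", "-i", t).CombinedOutput()
			if err != nil {
				return fmt.Errorf("podman load %s: %v\n%s", t, err, out)
			}
			fmt.Printf("loaded %s in %s\n", t, time.Since(start))
		}
		return nil
	}

	func main() {
		imgs := []string{
			"/var/lib/minikube/images/kube-apiserver_v1.31.2",
			"/var/lib/minikube/images/kube-proxy_v1.31.2",
			"/var/lib/minikube/images/etcd_3.5.15-0",
		}
		if err := loadCachedImages(imgs); err != nil {
			fmt.Println(err)
		}
	}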
	I1204 21:07:50.252438   71124 kubeadm.go:934] updating node { 192.168.61.174 8443 v1.31.2 crio true true} ...
	I1204 21:07:50.252543   71124 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-534766 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-534766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 21:07:50.252621   71124 ssh_runner.go:195] Run: crio config
	I1204 21:07:50.299202   71124 cni.go:84] Creating CNI manager for ""
	I1204 21:07:50.299226   71124 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:07:50.299239   71124 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 21:07:50.299272   71124 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.174 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-534766 NodeName:no-preload-534766 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 21:07:50.299436   71124 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-534766"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.174"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.174"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1204 21:07:50.299503   71124 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 21:07:50.309119   71124 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1204 21:07:50.309177   71124 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1204 21:07:50.317882   71124 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1204 21:07:50.317887   71124 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1204 21:07:50.317887   71124 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
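The three "Not caching binary" lines above show the kubelet, kubectl and kubeadm v1.31.2 binaries being fetched from dl.k8s.io, each verified against its published .sha256 file. A minimal sketch of that download-and-verify step, assuming only the URL pattern shown in the log; the function names are illustrative, not minikube's:

	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"strings"
	)

	// fetchAndVerify downloads a binary and checks it against the published
	// SHA-256 digest, as the checksum=file:...sha256 URLs in the log imply.
	func fetchAndVerify(binURL, sumURL string) ([]byte, error) {
		bin, err := get(binURL)
		if err != nil {
			return nil, err
		}
		sum, err := get(sumURL)
		if err != nil {
			return nil, err
		}
		fields := strings.Fields(string(sum))
		if len(fields) == 0 {
			return nil, fmt.Errorf("empty checksum file %s", sumURL)
		}
		want := fields[0] // the digest is the first field of the .sha256 file
		got := sha256.Sum256(bin)
		if hex.EncodeToString(got[:]) != want {
			return nil, fmt.Errorf("checksum mismatch: got %x want %s", got, want)
		}
		return bin, nil
	}

	func get(url string) ([]byte, error) {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		return io.ReadAll(resp.Body)
	}

	func main() {
		base := "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm"
		if _, err := fetchAndVerify(base, base+".sha256"); err != nil {
			fmt.Println(err)
		}
	}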
	I1204 21:07:50.317933   71124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:07:50.317964   71124 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1204 21:07:50.318012   71124 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1204 21:07:50.326955   71124 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1204 21:07:50.326978   71124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1204 21:07:50.326986   71124 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1204 21:07:50.327000   71124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1204 21:07:50.346249   71124 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1204 21:07:50.374215   71124 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1204 21:07:50.374268   71124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1204 21:07:48.482881   59256 pod_ready.go:103] pod "kube-proxy-dbc82" in "kube-system" namespace has status "Ready":"False"
	I1204 21:07:50.483797   59256 pod_ready.go:103] pod "kube-proxy-dbc82" in "kube-system" namespace has status "Ready":"False"
	I1204 21:07:49.086984   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:49.087480   72678 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:07:49.087515   72678 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:07:49.087455   72701 retry.go:31] will retry after 4.339629661s: waiting for machine to come up
	I1204 21:07:50.340993   69222 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:07:50.341244   69222 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:07:51.013398   71124 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 21:07:51.022174   71124 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1204 21:07:51.037247   71124 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 21:07:51.051846   71124 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1204 21:07:51.066590   71124 ssh_runner.go:195] Run: grep 192.168.61.174	control-plane.minikube.internal$ /etc/hosts
	I1204 21:07:51.070007   71124 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
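The bash one-liner above makes the /etc/hosts update idempotent: it filters out any existing line ending in a tab plus control-plane.minikube.internal and appends a fresh mapping to 192.168.61.174. The same logic as a small Go sketch, illustrative only and not minikube's code:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry mirrors the rewrite above: drop any line that already
	// maps the host name (matching the tab-separated form the log uses), then
	// append the fresh mapping and write the file back.
	func ensureHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+host) {
				continue // stale entry, re-added below
			}
			kept = append(kept, line)
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.61.174", "control-plane.minikube.internal"); err != nil {
			fmt.Println(err)
		}
	}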
	I1204 21:07:51.080925   71124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:07:51.208176   71124 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:07:51.226640   71124 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766 for IP: 192.168.61.174
	I1204 21:07:51.226665   71124 certs.go:194] generating shared ca certs ...
	I1204 21:07:51.226686   71124 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:07:51.226862   71124 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 21:07:51.226930   71124 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 21:07:51.226945   71124 certs.go:256] generating profile certs ...
	I1204 21:07:51.227027   71124 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/client.key
	I1204 21:07:51.227046   71124 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/client.crt with IP's: []
	I1204 21:07:51.479401   71124 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/client.crt ...
	I1204 21:07:51.479446   71124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/client.crt: {Name:mkbe81a31e9e5da764b1cfb4f53ad7c67fd65db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:07:51.479652   71124 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/client.key ...
	I1204 21:07:51.479667   71124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/client.key: {Name:mk7319044e22c497ff66a13a403c8664c77accf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:07:51.479778   71124 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/apiserver.key.dbe51058
	I1204 21:07:51.479801   71124 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/apiserver.crt.dbe51058 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.174]
	I1204 21:07:51.661031   71124 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/apiserver.crt.dbe51058 ...
	I1204 21:07:51.661058   71124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/apiserver.crt.dbe51058: {Name:mk5798be682082b116b588db848be2f29f6dbb0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:07:51.661250   71124 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/apiserver.key.dbe51058 ...
	I1204 21:07:51.661277   71124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/apiserver.key.dbe51058: {Name:mka586f5662a3d6038b77f39e21ce05c3b5155a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:07:51.661391   71124 certs.go:381] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/apiserver.crt.dbe51058 -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/apiserver.crt
	I1204 21:07:51.661525   71124 certs.go:385] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/apiserver.key.dbe51058 -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/apiserver.key
	I1204 21:07:51.661615   71124 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/proxy-client.key
	I1204 21:07:51.661636   71124 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/proxy-client.crt with IP's: []
	I1204 21:07:51.755745   71124 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/proxy-client.crt ...
	I1204 21:07:51.755769   71124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/proxy-client.crt: {Name:mk225e8ac907a0feed5a13a1e17fde3e1f0bb7d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:07:51.755930   71124 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/proxy-client.key ...
	I1204 21:07:51.755952   71124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/proxy-client.key: {Name:mk15b6dfbb39e32e2ef3a927c680b599c043b8b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
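The apiserver serving certificate generated above carries the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.174], that is, the in-cluster service IP, loopback and the node IP, and is signed with the minikubeCA key. A self-contained sketch of issuing a certificate with those IP SANs (self-signed here for brevity, whereas minikube signs with its CA; error handling elided):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// The IP SANs from the crypto.go line above.
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.61.174"),
			},
		}
		// Self-signed for illustration; minikube signs with the minikubeCA key instead.
		der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}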
	I1204 21:07:51.756162   71124 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 21:07:51.756215   71124 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 21:07:51.756230   71124 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 21:07:51.756259   71124 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 21:07:51.756286   71124 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 21:07:51.756328   71124 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 21:07:51.756379   71124 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:07:51.756983   71124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 21:07:51.780146   71124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 21:07:51.803093   71124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 21:07:51.824025   71124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 21:07:51.844884   71124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1204 21:07:51.865816   71124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1204 21:07:51.886448   71124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 21:07:51.909749   71124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 21:07:51.939145   71124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 21:07:51.960052   71124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 21:07:51.981998   71124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 21:07:52.003624   71124 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 21:07:52.018645   71124 ssh_runner.go:195] Run: openssl version
	I1204 21:07:52.024109   71124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 21:07:52.033956   71124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 21:07:52.038114   71124 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 21:07:52.038171   71124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 21:07:52.043574   71124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 21:07:52.055000   71124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 21:07:52.066073   71124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:07:52.070271   71124 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:07:52.070319   71124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:07:52.075616   71124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 21:07:52.086686   71124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 21:07:52.097655   71124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 21:07:52.101844   71124 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 21:07:52.101888   71124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 21:07:52.107298   71124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 21:07:52.118243   71124 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 21:07:52.122043   71124 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1204 21:07:52.122093   71124 kubeadm.go:392] StartCluster: {Name:no-preload-534766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-534766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.174 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:07:52.122176   71124 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 21:07:52.122215   71124 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:07:52.160870   71124 cri.go:89] found id: ""
	I1204 21:07:52.160939   71124 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 21:07:52.171644   71124 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:07:52.181884   71124 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:07:52.191851   71124 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:07:52.191868   71124 kubeadm.go:157] found existing configuration files:
	
	I1204 21:07:52.191902   71124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:07:52.201357   71124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:07:52.201394   71124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:07:52.211036   71124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:07:52.220393   71124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:07:52.220428   71124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:07:52.230114   71124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:07:52.239405   71124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:07:52.239450   71124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:07:52.250007   71124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:07:52.260371   71124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:07:52.260407   71124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
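The block above is the stale-config check: each kubeconfig under /etc/kubernetes is grepped for https://control-plane.minikube.internal:8443 and removed if the endpoint is absent; on this fresh node the files simply do not exist yet, so both the grep and the rm hit missing files. A rough Go equivalent of that cleanup, illustrative only and not minikube's code:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// cleanStaleKubeconfigs mirrors the checks logged above: a config that does
	// not point at the expected control-plane endpoint is removed so kubeadm can
	// regenerate it; missing files are treated the same as stale ones.
	func cleanStaleKubeconfigs(endpoint string, paths []string) {
		for _, p := range paths {
			data, err := os.ReadFile(p)
			if err != nil || !strings.Contains(string(data), endpoint) {
				os.Remove(p) // ignore errors: a missing file is fine, as in the log
				fmt.Println("removed stale or missing config:", p)
			}
		}
	}

	func main() {
		cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}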
	I1204 21:07:52.271483   71124 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 21:07:52.434457   71124 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 21:07:52.982843   59256 pod_ready.go:103] pod "kube-proxy-dbc82" in "kube-system" namespace has status "Ready":"False"
	I1204 21:07:55.481525   59256 pod_ready.go:103] pod "kube-proxy-dbc82" in "kube-system" namespace has status "Ready":"False"
	I1204 21:07:53.428465   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:53.429016   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has current primary IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:53.429042   72678 main.go:141] libmachine: (embed-certs-566991) Found IP for machine: 192.168.39.82
	I1204 21:07:53.429055   72678 main.go:141] libmachine: (embed-certs-566991) Reserving static IP address...
	I1204 21:07:53.429402   72678 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find host DHCP lease matching {name: "embed-certs-566991", mac: "52:54:00:98:21:6f", ip: "192.168.39.82"} in network mk-embed-certs-566991
	I1204 21:07:53.505112   72678 main.go:141] libmachine: (embed-certs-566991) DBG | Getting to WaitForSSH function...
	I1204 21:07:53.505140   72678 main.go:141] libmachine: (embed-certs-566991) Reserved static IP address: 192.168.39.82
	I1204 21:07:53.505151   72678 main.go:141] libmachine: (embed-certs-566991) Waiting for SSH to be available...
	I1204 21:07:53.507624   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:53.507962   72678 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:07:47 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:minikube Clientid:01:52:54:00:98:21:6f}
	I1204 21:07:53.507992   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:53.508260   72678 main.go:141] libmachine: (embed-certs-566991) DBG | Using SSH client type: external
	I1204 21:07:53.508286   72678 main.go:141] libmachine: (embed-certs-566991) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa (-rw-------)
	I1204 21:07:53.508325   72678 main.go:141] libmachine: (embed-certs-566991) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.82 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 21:07:53.508337   72678 main.go:141] libmachine: (embed-certs-566991) DBG | About to run SSH command:
	I1204 21:07:53.508364   72678 main.go:141] libmachine: (embed-certs-566991) DBG | exit 0
	I1204 21:07:53.635681   72678 main.go:141] libmachine: (embed-certs-566991) DBG | SSH cmd err, output: <nil>: 
	I1204 21:07:53.635958   72678 main.go:141] libmachine: (embed-certs-566991) KVM machine creation complete!
	I1204 21:07:53.636287   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetConfigRaw
	I1204 21:07:53.636838   72678 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:07:53.637018   72678 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:07:53.637179   72678 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1204 21:07:53.637194   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetState
	I1204 21:07:53.638668   72678 main.go:141] libmachine: Detecting operating system of created instance...
	I1204 21:07:53.638679   72678 main.go:141] libmachine: Waiting for SSH to be available...
	I1204 21:07:53.638687   72678 main.go:141] libmachine: Getting to WaitForSSH function...
	I1204 21:07:53.638693   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:07:53.640789   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:53.641137   72678 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:07:47 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:07:53.641169   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:53.641265   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:07:53.641442   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:07:53.641603   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:07:53.641748   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:07:53.641905   72678 main.go:141] libmachine: Using SSH client type: native
	I1204 21:07:53.642141   72678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1204 21:07:53.642165   72678 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1204 21:07:53.746627   72678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 21:07:53.746656   72678 main.go:141] libmachine: Detecting the provisioner...
	I1204 21:07:53.746667   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:07:53.749446   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:53.749830   72678 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:07:47 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:07:53.749859   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:53.750001   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:07:53.750239   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:07:53.750389   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:07:53.750540   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:07:53.750705   72678 main.go:141] libmachine: Using SSH client type: native
	I1204 21:07:53.750914   72678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1204 21:07:53.750931   72678 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1204 21:07:53.859467   72678 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1204 21:07:53.859536   72678 main.go:141] libmachine: found compatible host: buildroot
	I1204 21:07:53.859544   72678 main.go:141] libmachine: Provisioning with buildroot...
	I1204 21:07:53.859554   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetMachineName
	I1204 21:07:53.859819   72678 buildroot.go:166] provisioning hostname "embed-certs-566991"
	I1204 21:07:53.859848   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetMachineName
	I1204 21:07:53.860030   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:07:53.862726   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:53.863128   72678 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:07:47 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:07:53.863156   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:53.863305   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:07:53.863489   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:07:53.863645   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:07:53.863762   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:07:53.863911   72678 main.go:141] libmachine: Using SSH client type: native
	I1204 21:07:53.864114   72678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1204 21:07:53.864128   72678 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-566991 && echo "embed-certs-566991" | sudo tee /etc/hostname
	I1204 21:07:53.986418   72678 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-566991
	
	I1204 21:07:53.986449   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:07:53.989296   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:53.989680   72678 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:07:47 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:07:53.989710   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:53.989838   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:07:53.990019   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:07:53.990193   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:07:53.990355   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:07:53.990505   72678 main.go:141] libmachine: Using SSH client type: native
	I1204 21:07:53.990660   72678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1204 21:07:53.990676   72678 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-566991' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-566991/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-566991' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 21:07:54.108629   72678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 21:07:54.108658   72678 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 21:07:54.108683   72678 buildroot.go:174] setting up certificates
	I1204 21:07:54.108695   72678 provision.go:84] configureAuth start
	I1204 21:07:54.108709   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetMachineName
	I1204 21:07:54.108987   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetIP
	I1204 21:07:54.111741   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:54.112066   72678 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:07:47 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:07:54.112093   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:54.112254   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:07:54.114385   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:54.114714   72678 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:07:47 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:07:54.114740   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:54.114841   72678 provision.go:143] copyHostCerts
	I1204 21:07:54.114915   72678 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 21:07:54.114927   72678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 21:07:54.115002   72678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 21:07:54.115109   72678 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 21:07:54.115122   72678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 21:07:54.115151   72678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 21:07:54.115222   72678 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 21:07:54.115232   72678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 21:07:54.115256   72678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 21:07:54.115342   72678 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.embed-certs-566991 san=[127.0.0.1 192.168.39.82 embed-certs-566991 localhost minikube]
	I1204 21:07:54.638161   72678 provision.go:177] copyRemoteCerts
	I1204 21:07:54.638244   72678 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 21:07:54.638270   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:07:54.641133   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:54.641549   72678 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:07:47 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:07:54.641586   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:54.641874   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:07:54.642154   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:07:54.642373   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:07:54.642571   72678 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:07:54.734022   72678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 21:07:54.757616   72678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1204 21:07:54.783521   72678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 21:07:54.806397   72678 provision.go:87] duration metric: took 697.687725ms to configureAuth
	I1204 21:07:54.806421   72678 buildroot.go:189] setting minikube options for container-runtime
	I1204 21:07:54.806568   72678 config.go:182] Loaded profile config "embed-certs-566991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:07:54.806674   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:07:54.809088   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:54.809457   72678 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:07:47 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:07:54.809492   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:54.809670   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:07:54.809894   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:07:54.810063   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:07:54.810228   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:07:54.810371   72678 main.go:141] libmachine: Using SSH client type: native
	I1204 21:07:54.810569   72678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1204 21:07:54.810590   72678 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 21:07:55.032455   72678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 21:07:55.032484   72678 main.go:141] libmachine: Checking connection to Docker...
	I1204 21:07:55.032492   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetURL
	I1204 21:07:55.033731   72678 main.go:141] libmachine: (embed-certs-566991) DBG | Using libvirt version 6000000
	I1204 21:07:55.035963   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:55.036432   72678 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:07:47 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:07:55.036464   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:55.036655   72678 main.go:141] libmachine: Docker is up and running!
	I1204 21:07:55.036672   72678 main.go:141] libmachine: Reticulating splines...
	I1204 21:07:55.036680   72678 client.go:171] duration metric: took 22.671119314s to LocalClient.Create
	I1204 21:07:55.036708   72678 start.go:167] duration metric: took 22.671187588s to libmachine.API.Create "embed-certs-566991"
	I1204 21:07:55.036720   72678 start.go:293] postStartSetup for "embed-certs-566991" (driver="kvm2")
	I1204 21:07:55.036734   72678 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 21:07:55.036754   72678 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:07:55.036973   72678 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 21:07:55.037004   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:07:55.039423   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:55.039802   72678 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:07:47 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:07:55.039828   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:55.039972   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:07:55.040157   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:07:55.040351   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:07:55.040492   72678 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:07:55.128469   72678 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 21:07:55.132396   72678 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 21:07:55.132420   72678 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 21:07:55.132489   72678 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 21:07:55.132579   72678 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 21:07:55.132677   72678 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 21:07:55.143703   72678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:07:55.168173   72678 start.go:296] duration metric: took 131.441291ms for postStartSetup
	I1204 21:07:55.168218   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetConfigRaw
	I1204 21:07:55.168871   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetIP
	I1204 21:07:55.171423   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:55.171792   72678 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:07:47 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:07:55.171820   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:55.172044   72678 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/config.json ...
	I1204 21:07:55.172195   72678 start.go:128] duration metric: took 22.825936049s to createHost
	I1204 21:07:55.172220   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:07:55.174299   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:55.174600   72678 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:07:47 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:07:55.174630   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:55.174765   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:07:55.174931   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:07:55.175076   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:07:55.175212   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:07:55.175391   72678 main.go:141] libmachine: Using SSH client type: native
	I1204 21:07:55.175560   72678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1204 21:07:55.175573   72678 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 21:07:55.284235   72678 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733346475.253087952
	
	I1204 21:07:55.284263   72678 fix.go:216] guest clock: 1733346475.253087952
	I1204 21:07:55.284271   72678 fix.go:229] Guest: 2024-12-04 21:07:55.253087952 +0000 UTC Remote: 2024-12-04 21:07:55.172208227 +0000 UTC m=+22.938792705 (delta=80.879725ms)
	I1204 21:07:55.284307   72678 fix.go:200] guest clock delta is within tolerance: 80.879725ms
	I1204 21:07:55.284313   72678 start.go:83] releasing machines lock for "embed-certs-566991", held for 22.93815022s
	I1204 21:07:55.284331   72678 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:07:55.284585   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetIP
	I1204 21:07:55.287505   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:55.287883   72678 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:07:47 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:07:55.287912   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:55.288072   72678 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:07:55.288611   72678 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:07:55.288792   72678 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:07:55.288885   72678 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 21:07:55.288927   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:07:55.289059   72678 ssh_runner.go:195] Run: cat /version.json
	I1204 21:07:55.289087   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:07:55.291824   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:55.291985   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:55.292289   72678 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:07:47 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:07:55.292313   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:55.292489   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:07:55.292492   72678 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:07:47 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:07:55.292533   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:55.292636   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:07:55.292717   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:07:55.292787   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:07:55.292853   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:07:55.292913   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:07:55.292978   72678 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:07:55.293062   72678 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:07:55.396099   72678 ssh_runner.go:195] Run: systemctl --version
	I1204 21:07:55.402972   72678 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 21:07:55.562658   72678 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 21:07:55.568474   72678 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 21:07:55.568540   72678 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 21:07:55.584647   72678 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 21:07:55.584673   72678 start.go:495] detecting cgroup driver to use...
	I1204 21:07:55.584740   72678 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 21:07:55.600252   72678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 21:07:55.613743   72678 docker.go:217] disabling cri-docker service (if available) ...
	I1204 21:07:55.613786   72678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 21:07:55.626509   72678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 21:07:55.639200   72678 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 21:07:55.778462   72678 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 21:07:55.945455   72678 docker.go:233] disabling docker service ...
	I1204 21:07:55.945573   72678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 21:07:55.961181   72678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 21:07:55.975238   72678 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 21:07:56.093217   72678 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 21:07:56.199474   72678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 21:07:56.213257   72678 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 21:07:56.229865   72678 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 21:07:56.229928   72678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:07:56.239401   72678 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 21:07:56.239471   72678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:07:56.248741   72678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:07:56.257662   72678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:07:56.266909   72678 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 21:07:56.276574   72678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:07:56.286881   72678 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:07:56.302337   72678 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:07:56.311648   72678 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 21:07:56.320482   72678 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 21:07:56.320533   72678 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 21:07:56.331964   72678 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 21:07:56.340439   72678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:07:56.458189   72678 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 21:07:56.544431   72678 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 21:07:56.544512   72678 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 21:07:56.548969   72678 start.go:563] Will wait 60s for crictl version
	I1204 21:07:56.549030   72678 ssh_runner.go:195] Run: which crictl
	I1204 21:07:56.552420   72678 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 21:07:56.596847   72678 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 21:07:56.596931   72678 ssh_runner.go:195] Run: crio --version
	I1204 21:07:56.623175   72678 ssh_runner.go:195] Run: crio --version
	I1204 21:07:56.653217   72678 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 21:07:56.654549   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetIP
	I1204 21:07:56.657357   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:56.657800   72678 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:07:47 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:07:56.657830   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:07:56.658045   72678 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1204 21:07:56.662600   72678 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:07:56.675067   72678 kubeadm.go:883] updating cluster {Name:embed-certs-566991 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-566991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.82 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 21:07:56.675194   72678 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 21:07:56.675243   72678 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:07:56.705386   72678 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1204 21:07:56.705456   72678 ssh_runner.go:195] Run: which lz4
	I1204 21:07:56.708972   72678 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1204 21:07:56.712557   72678 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1204 21:07:56.712589   72678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1204 21:07:57.486051   59256 pod_ready.go:103] pod "kube-proxy-dbc82" in "kube-system" namespace has status "Ready":"False"
	I1204 21:07:59.982920   59256 pod_ready.go:103] pod "kube-proxy-dbc82" in "kube-system" namespace has status "Ready":"False"
	I1204 21:07:58.031687   72678 crio.go:462] duration metric: took 1.322755748s to copy over tarball
	I1204 21:07:58.031816   72678 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1204 21:08:00.221821   72678 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.189973077s)
	I1204 21:08:00.221852   72678 crio.go:469] duration metric: took 2.190088595s to extract the tarball
	I1204 21:08:00.221861   72678 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1204 21:08:00.258969   72678 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:08:00.309369   72678 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 21:08:00.309392   72678 cache_images.go:84] Images are preloaded, skipping loading
	I1204 21:08:00.309401   72678 kubeadm.go:934] updating node { 192.168.39.82 8443 v1.31.2 crio true true} ...
	I1204 21:08:00.309506   72678 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-566991 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.82
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-566991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 21:08:00.309583   72678 ssh_runner.go:195] Run: crio config
	I1204 21:08:00.364859   72678 cni.go:84] Creating CNI manager for ""
	I1204 21:08:00.364884   72678 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:08:00.364894   72678 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 21:08:00.364913   72678 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.82 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-566991 NodeName:embed-certs-566991 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.82"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.82 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 21:08:00.365045   72678 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.82
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-566991"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.82"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.82"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1204 21:08:00.365107   72678 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 21:08:00.378637   72678 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 21:08:00.378701   72678 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 21:08:00.390059   72678 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1204 21:08:00.407238   72678 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 21:08:00.425098   72678 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1204 21:08:00.440975   72678 ssh_runner.go:195] Run: grep 192.168.39.82	control-plane.minikube.internal$ /etc/hosts
	I1204 21:08:00.444968   72678 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.82	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:08:00.458103   72678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:08:00.593263   72678 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:08:00.610519   72678 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991 for IP: 192.168.39.82
	I1204 21:08:00.610605   72678 certs.go:194] generating shared ca certs ...
	I1204 21:08:00.610636   72678 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:08:00.610828   72678 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 21:08:00.610907   72678 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 21:08:00.610925   72678 certs.go:256] generating profile certs ...
	I1204 21:08:00.611007   72678 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/client.key
	I1204 21:08:00.611034   72678 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/client.crt with IP's: []
	I1204 21:08:00.829426   72678 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/client.crt ...
	I1204 21:08:00.829463   72678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/client.crt: {Name:mkcde9722eb617fad816565ed778f23f201f6fba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:08:00.829687   72678 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/client.key ...
	I1204 21:08:00.829710   72678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/client.key: {Name:mked532e86df206ae013a08249dd6d7514903c59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:08:00.829852   72678 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/apiserver.key.ba71006c
	I1204 21:08:00.829869   72678 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/apiserver.crt.ba71006c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.82]
	I1204 21:08:01.094625   72678 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/apiserver.crt.ba71006c ...
	I1204 21:08:01.094659   72678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/apiserver.crt.ba71006c: {Name:mk368ae9053be3a68b8c5ccbbe266243b17fe381 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:08:01.094859   72678 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/apiserver.key.ba71006c ...
	I1204 21:08:01.094872   72678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/apiserver.key.ba71006c: {Name:mk98e5dedb9e0242cb590a2906638cd85caab4bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:08:01.094948   72678 certs.go:381] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/apiserver.crt.ba71006c -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/apiserver.crt
	I1204 21:08:01.095016   72678 certs.go:385] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/apiserver.key.ba71006c -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/apiserver.key
	I1204 21:08:01.095067   72678 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/proxy-client.key
	I1204 21:08:01.095082   72678 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/proxy-client.crt with IP's: []
	I1204 21:08:01.406626   72678 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/proxy-client.crt ...
	I1204 21:08:01.406654   72678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/proxy-client.crt: {Name:mk45bc05002f35125be02906e3a69c60af6aa69f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:08:01.406807   72678 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/proxy-client.key ...
	I1204 21:08:01.406819   72678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/proxy-client.key: {Name:mkedcc3254b23883b0b9169eed87c2dd55a2f463 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:08:01.406997   72678 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 21:08:01.407035   72678 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 21:08:01.407042   72678 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 21:08:01.407065   72678 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 21:08:01.407088   72678 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 21:08:01.407109   72678 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 21:08:01.407172   72678 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:08:01.407742   72678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 21:08:01.436386   72678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 21:08:01.483134   72678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 21:08:01.514729   72678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 21:08:01.537462   72678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1204 21:08:01.559233   72678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 21:08:01.581383   72678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 21:08:01.603979   72678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 21:08:01.625787   72678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 21:08:01.648024   72678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 21:08:01.669938   72678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 21:08:01.691584   72678 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 21:08:01.706149   72678 ssh_runner.go:195] Run: openssl version
	I1204 21:08:01.711595   72678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 21:08:01.721106   72678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:08:01.725176   72678 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:08:01.725236   72678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:08:01.730590   72678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 21:08:01.740950   72678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 21:08:01.751485   72678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 21:08:01.755757   72678 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 21:08:01.755813   72678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 21:08:01.761084   72678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 21:08:01.771275   72678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 21:08:01.781543   72678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 21:08:01.785774   72678 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 21:08:01.785844   72678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 21:08:01.791134   72678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 21:08:01.801528   72678 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 21:08:01.805262   72678 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1204 21:08:01.805340   72678 kubeadm.go:392] StartCluster: {Name:embed-certs-566991 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-566991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.82 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:08:01.805421   72678 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 21:08:01.805468   72678 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:08:01.842287   72678 cri.go:89] found id: ""
	I1204 21:08:01.842356   72678 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 21:08:01.852300   72678 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:08:01.862647   72678 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:08:01.872102   72678 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:08:01.872126   72678 kubeadm.go:157] found existing configuration files:
	
	I1204 21:08:01.872172   72678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:08:01.880788   72678 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:08:01.880840   72678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:08:01.890064   72678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:08:01.899108   72678 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:08:01.899166   72678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:08:01.908183   72678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:08:01.917796   72678 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:08:01.917866   72678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:08:01.927037   72678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:08:01.936099   72678 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:08:01.936163   72678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:08:01.944827   72678 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 21:08:02.152043   72678 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 21:08:05.806658   71124 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1204 21:08:05.806723   71124 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 21:08:05.806808   71124 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 21:08:05.806957   71124 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 21:08:05.807053   71124 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1204 21:08:05.807107   71124 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 21:08:05.808766   71124 out.go:235]   - Generating certificates and keys ...
	I1204 21:08:05.808862   71124 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 21:08:05.808951   71124 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 21:08:05.809047   71124 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1204 21:08:05.809148   71124 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1204 21:08:05.809259   71124 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1204 21:08:05.809337   71124 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1204 21:08:05.809416   71124 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1204 21:08:05.809574   71124 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-534766] and IPs [192.168.61.174 127.0.0.1 ::1]
	I1204 21:08:05.809627   71124 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1204 21:08:05.809762   71124 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-534766] and IPs [192.168.61.174 127.0.0.1 ::1]
	I1204 21:08:05.809842   71124 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1204 21:08:05.809897   71124 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1204 21:08:05.809936   71124 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1204 21:08:05.809984   71124 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 21:08:05.810027   71124 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 21:08:05.810080   71124 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1204 21:08:05.810150   71124 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 21:08:05.810241   71124 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 21:08:05.810321   71124 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 21:08:05.810432   71124 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 21:08:05.810515   71124 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 21:08:05.812020   71124 out.go:235]   - Booting up control plane ...
	I1204 21:08:05.812111   71124 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 21:08:05.812217   71124 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 21:08:05.812310   71124 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 21:08:05.812417   71124 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 21:08:05.812491   71124 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 21:08:05.812530   71124 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 21:08:05.812637   71124 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1204 21:08:05.812751   71124 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1204 21:08:05.812812   71124 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002328443s
	I1204 21:08:05.812877   71124 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1204 21:08:05.812957   71124 kubeadm.go:310] [api-check] The API server is healthy after 7.501948033s
	I1204 21:08:05.813084   71124 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1204 21:08:05.813249   71124 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1204 21:08:05.813340   71124 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1204 21:08:05.813518   71124 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-534766 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1204 21:08:05.813572   71124 kubeadm.go:310] [bootstrap-token] Using token: 5sdsjw.bvoxkeqlpemcqy5p
	I1204 21:08:05.814883   71124 out.go:235]   - Configuring RBAC rules ...
	I1204 21:08:05.814980   71124 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1204 21:08:05.815065   71124 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1204 21:08:05.815192   71124 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1204 21:08:05.815317   71124 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1204 21:08:05.815443   71124 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1204 21:08:05.815526   71124 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1204 21:08:05.815636   71124 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1204 21:08:05.815700   71124 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1204 21:08:05.815748   71124 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1204 21:08:05.815754   71124 kubeadm.go:310] 
	I1204 21:08:05.815812   71124 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1204 21:08:05.815821   71124 kubeadm.go:310] 
	I1204 21:08:05.815913   71124 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1204 21:08:05.815929   71124 kubeadm.go:310] 
	I1204 21:08:05.815969   71124 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1204 21:08:05.816046   71124 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1204 21:08:05.816118   71124 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1204 21:08:05.816144   71124 kubeadm.go:310] 
	I1204 21:08:05.816205   71124 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1204 21:08:05.816213   71124 kubeadm.go:310] 
	I1204 21:08:05.816264   71124 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1204 21:08:05.816271   71124 kubeadm.go:310] 
	I1204 21:08:05.816333   71124 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1204 21:08:05.816431   71124 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1204 21:08:05.816525   71124 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1204 21:08:05.816534   71124 kubeadm.go:310] 
	I1204 21:08:05.816631   71124 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1204 21:08:05.816741   71124 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1204 21:08:05.816755   71124 kubeadm.go:310] 
	I1204 21:08:05.816873   71124 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5sdsjw.bvoxkeqlpemcqy5p \
	I1204 21:08:05.817016   71124 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 \
	I1204 21:08:05.817049   71124 kubeadm.go:310] 	--control-plane 
	I1204 21:08:05.817058   71124 kubeadm.go:310] 
	I1204 21:08:05.817153   71124 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1204 21:08:05.817161   71124 kubeadm.go:310] 
	I1204 21:08:05.817237   71124 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5sdsjw.bvoxkeqlpemcqy5p \
	I1204 21:08:05.817333   71124 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 
	I1204 21:08:05.817343   71124 cni.go:84] Creating CNI manager for ""
	I1204 21:08:05.817348   71124 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:08:05.818866   71124 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 21:08:02.307695   59256 pod_ready.go:103] pod "kube-proxy-dbc82" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:04.483754   59256 pod_ready.go:103] pod "kube-proxy-dbc82" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:05.820114   71124 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 21:08:05.830079   71124 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1204 21:08:05.852330   71124 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 21:08:05.852413   71124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:05.852508   71124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-534766 minikube.k8s.io/updated_at=2024_12_04T21_08_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59 minikube.k8s.io/name=no-preload-534766 minikube.k8s.io/primary=true
	I1204 21:08:05.890450   71124 ops.go:34] apiserver oom_adj: -16
	I1204 21:08:05.974841   71124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:06.475480   71124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:06.975296   71124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:07.475531   71124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:07.975657   71124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:08.475843   71124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:08.975210   71124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:09.475478   71124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:09.975481   71124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:10.101313   71124 kubeadm.go:1113] duration metric: took 4.248969972s to wait for elevateKubeSystemPrivileges
	I1204 21:08:10.101355   71124 kubeadm.go:394] duration metric: took 17.979262665s to StartCluster
	I1204 21:08:10.101378   71124 settings.go:142] acquiring lock: {Name:mk51df5708ef0b8fe125ead566b8d3e857234e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:08:10.101479   71124 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:08:10.102864   71124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/kubeconfig: {Name:mk338cb7deb77a607d0c199d94a556bdfd19bef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:08:10.103109   71124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1204 21:08:10.103120   71124 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.174 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 21:08:10.103211   71124 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 21:08:10.103313   71124 addons.go:69] Setting storage-provisioner=true in profile "no-preload-534766"
	I1204 21:08:10.103323   71124 addons.go:69] Setting default-storageclass=true in profile "no-preload-534766"
	I1204 21:08:10.103352   71124 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-534766"
	I1204 21:08:10.103328   71124 config.go:182] Loaded profile config "no-preload-534766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:08:10.103332   71124 addons.go:234] Setting addon storage-provisioner=true in "no-preload-534766"
	I1204 21:08:10.103446   71124 host.go:66] Checking if "no-preload-534766" exists ...
	I1204 21:08:10.103885   71124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:08:10.103877   71124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:08:10.103934   71124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:08:10.103948   71124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:08:10.104983   71124 out.go:177] * Verifying Kubernetes components...
	I1204 21:08:10.106480   71124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:08:10.123785   71124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38099
	I1204 21:08:10.123883   71124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43359
	I1204 21:08:10.124234   71124 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:08:10.124374   71124 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:08:10.124861   71124 main.go:141] libmachine: Using API Version  1
	I1204 21:08:10.124882   71124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:08:10.125030   71124 main.go:141] libmachine: Using API Version  1
	I1204 21:08:10.125056   71124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:08:10.125341   71124 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:08:10.125532   71124 main.go:141] libmachine: (no-preload-534766) Calling .GetState
	I1204 21:08:10.125585   71124 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:08:10.126142   71124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:08:10.126182   71124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:08:10.129193   71124 addons.go:234] Setting addon default-storageclass=true in "no-preload-534766"
	I1204 21:08:10.129238   71124 host.go:66] Checking if "no-preload-534766" exists ...
	I1204 21:08:10.129588   71124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:08:10.129625   71124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:08:10.147526   71124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38905
	I1204 21:08:10.148155   71124 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:08:10.148810   71124 main.go:141] libmachine: Using API Version  1
	I1204 21:08:10.148836   71124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:08:10.149351   71124 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:08:10.149615   71124 main.go:141] libmachine: (no-preload-534766) Calling .GetState
	I1204 21:08:10.149869   71124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36949
	I1204 21:08:10.150446   71124 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:08:10.150915   71124 main.go:141] libmachine: Using API Version  1
	I1204 21:08:10.150937   71124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:08:10.151260   71124 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:08:10.151885   71124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:08:10.151928   71124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:08:10.152627   71124 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:08:10.154482   71124 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:08:10.155995   71124 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:08:10.156019   71124 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 21:08:10.156039   71124 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:08:10.159786   71124 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:08:10.160295   71124 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:07:21 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:08:10.160318   71124 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:08:10.160502   71124 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:08:10.160663   71124 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:08:10.160762   71124 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:08:10.160853   71124 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:08:10.170522   71124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34791
	I1204 21:08:10.171135   71124 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:08:10.171748   71124 main.go:141] libmachine: Using API Version  1
	I1204 21:08:10.171777   71124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:08:10.172306   71124 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:08:10.172734   71124 main.go:141] libmachine: (no-preload-534766) Calling .GetState
	I1204 21:08:10.174732   71124 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:08:10.174990   71124 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 21:08:10.175009   71124 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 21:08:10.175027   71124 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:08:10.181811   71124 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:08:10.182352   71124 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:07:21 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:08:10.182375   71124 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:08:10.182524   71124 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:08:10.182650   71124 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:08:10.182745   71124 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:08:10.182842   71124 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:08:10.375812   71124 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:08:10.375873   71124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1204 21:08:10.533947   71124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:08:10.544659   71124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 21:08:06.983355   59256 pod_ready.go:103] pod "kube-proxy-dbc82" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:09.481478   59256 pod_ready.go:103] pod "kube-proxy-dbc82" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:10.975489   59256 pod_ready.go:82] duration metric: took 4m0.000079111s for pod "kube-proxy-dbc82" in "kube-system" namespace to be "Ready" ...
	E1204 21:08:10.975526   59256 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "kube-proxy-dbc82" in "kube-system" namespace to be "Ready" (will not retry!)
	I1204 21:08:10.975573   59256 pod_ready.go:39] duration metric: took 4m8.550625089s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:08:10.975619   59256 kubeadm.go:597] duration metric: took 4m16.380476362s to restartPrimaryControlPlane
	W1204 21:08:10.975675   59256 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1204 21:08:10.975709   59256 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1204 21:08:11.312337   71124 node_ready.go:35] waiting up to 6m0s for node "no-preload-534766" to be "Ready" ...
	I1204 21:08:11.312836   71124 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1204 21:08:11.358308   71124 node_ready.go:49] node "no-preload-534766" has status "Ready":"True"
	I1204 21:08:11.358336   71124 node_ready.go:38] duration metric: took 45.964875ms for node "no-preload-534766" to be "Ready" ...
	I1204 21:08:11.358349   71124 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:08:11.388209   71124 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-g7zhj" in "kube-system" namespace to be "Ready" ...
	I1204 21:08:11.730639   71124 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.196645778s)
	I1204 21:08:11.730680   71124 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.18598582s)
	I1204 21:08:11.730713   71124 main.go:141] libmachine: Making call to close driver server
	I1204 21:08:11.730717   71124 main.go:141] libmachine: Making call to close driver server
	I1204 21:08:11.730728   71124 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:08:11.730729   71124 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:08:11.731059   71124 main.go:141] libmachine: (no-preload-534766) DBG | Closing plugin on server side
	I1204 21:08:11.731085   71124 main.go:141] libmachine: (no-preload-534766) DBG | Closing plugin on server side
	I1204 21:08:11.731124   71124 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:08:11.731138   71124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:08:11.731156   71124 main.go:141] libmachine: Making call to close driver server
	I1204 21:08:11.731166   71124 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:08:11.731177   71124 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:08:11.731189   71124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:08:11.731197   71124 main.go:141] libmachine: Making call to close driver server
	I1204 21:08:11.731207   71124 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:08:11.731508   71124 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:08:11.731522   71124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:08:11.731616   71124 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:08:11.731629   71124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:08:11.731646   71124 main.go:141] libmachine: (no-preload-534766) DBG | Closing plugin on server side
	I1204 21:08:11.765720   71124 main.go:141] libmachine: Making call to close driver server
	I1204 21:08:11.765747   71124 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:08:11.766098   71124 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:08:11.766117   71124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:08:11.767695   71124 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1204 21:08:11.970711   72678 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1204 21:08:11.970779   72678 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 21:08:11.970869   72678 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 21:08:11.970973   72678 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 21:08:11.971050   72678 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1204 21:08:11.971103   72678 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 21:08:11.972789   72678 out.go:235]   - Generating certificates and keys ...
	I1204 21:08:11.972883   72678 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 21:08:11.972975   72678 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 21:08:11.973053   72678 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1204 21:08:11.973145   72678 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1204 21:08:11.973229   72678 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1204 21:08:11.973290   72678 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1204 21:08:11.973372   72678 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1204 21:08:11.973524   72678 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-566991 localhost] and IPs [192.168.39.82 127.0.0.1 ::1]
	I1204 21:08:11.973603   72678 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1204 21:08:11.973776   72678 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-566991 localhost] and IPs [192.168.39.82 127.0.0.1 ::1]
	I1204 21:08:11.973870   72678 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1204 21:08:11.973983   72678 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1204 21:08:11.974053   72678 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1204 21:08:11.974136   72678 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 21:08:11.974188   72678 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 21:08:11.974280   72678 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1204 21:08:11.974357   72678 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 21:08:11.974448   72678 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 21:08:11.974528   72678 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 21:08:11.974624   72678 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 21:08:11.974722   72678 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 21:08:11.976999   72678 out.go:235]   - Booting up control plane ...
	I1204 21:08:11.977120   72678 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 21:08:11.977230   72678 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 21:08:11.977318   72678 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 21:08:11.977449   72678 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 21:08:11.977566   72678 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 21:08:11.977625   72678 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 21:08:11.977788   72678 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1204 21:08:11.977923   72678 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1204 21:08:11.978002   72678 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 510.157034ms
	I1204 21:08:11.978094   72678 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1204 21:08:11.978169   72678 kubeadm.go:310] [api-check] The API server is healthy after 5.001257785s
	I1204 21:08:11.978306   72678 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1204 21:08:11.978469   72678 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1204 21:08:11.978540   72678 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1204 21:08:11.978690   72678 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-566991 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1204 21:08:11.978735   72678 kubeadm.go:310] [bootstrap-token] Using token: gnzm5j.1tnplqkzht748ruw
	I1204 21:08:11.980124   72678 out.go:235]   - Configuring RBAC rules ...
	I1204 21:08:11.980237   72678 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1204 21:08:11.980320   72678 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1204 21:08:11.980463   72678 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1204 21:08:11.980662   72678 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1204 21:08:11.980798   72678 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1204 21:08:11.980950   72678 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1204 21:08:11.981116   72678 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1204 21:08:11.981165   72678 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1204 21:08:11.981225   72678 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1204 21:08:11.981243   72678 kubeadm.go:310] 
	I1204 21:08:11.981331   72678 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1204 21:08:11.981349   72678 kubeadm.go:310] 
	I1204 21:08:11.981480   72678 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1204 21:08:11.981491   72678 kubeadm.go:310] 
	I1204 21:08:11.981512   72678 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1204 21:08:11.981560   72678 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1204 21:08:11.981618   72678 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1204 21:08:11.981627   72678 kubeadm.go:310] 
	I1204 21:08:11.981700   72678 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1204 21:08:11.981709   72678 kubeadm.go:310] 
	I1204 21:08:11.981780   72678 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1204 21:08:11.981788   72678 kubeadm.go:310] 
	I1204 21:08:11.981864   72678 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1204 21:08:11.981969   72678 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1204 21:08:11.982061   72678 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1204 21:08:11.982073   72678 kubeadm.go:310] 
	I1204 21:08:11.982208   72678 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1204 21:08:11.982304   72678 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1204 21:08:11.982321   72678 kubeadm.go:310] 
	I1204 21:08:11.982425   72678 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token gnzm5j.1tnplqkzht748ruw \
	I1204 21:08:11.982565   72678 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 \
	I1204 21:08:11.982598   72678 kubeadm.go:310] 	--control-plane 
	I1204 21:08:11.982615   72678 kubeadm.go:310] 
	I1204 21:08:11.982753   72678 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1204 21:08:11.982773   72678 kubeadm.go:310] 
	I1204 21:08:11.982887   72678 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token gnzm5j.1tnplqkzht748ruw \
	I1204 21:08:11.983045   72678 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 
	I1204 21:08:11.983059   72678 cni.go:84] Creating CNI manager for ""
	I1204 21:08:11.983067   72678 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:08:11.984514   72678 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 21:08:11.985605   72678 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 21:08:11.997506   72678 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1204 21:08:12.020287   72678 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 21:08:12.020382   72678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:12.020478   72678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-566991 minikube.k8s.io/updated_at=2024_12_04T21_08_12_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59 minikube.k8s.io/name=embed-certs-566991 minikube.k8s.io/primary=true
	I1204 21:08:10.340409   69222 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:08:10.340653   69222 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:08:11.769095   71124 addons.go:510] duration metric: took 1.665888351s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1204 21:08:11.829492   71124 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-534766" context rescaled to 1 replicas
	I1204 21:08:13.395202   71124 pod_ready.go:103] pod "coredns-7c65d6cfc9-g7zhj" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:12.282472   72678 ops.go:34] apiserver oom_adj: -16
	I1204 21:08:12.282643   72678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:12.782677   72678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:13.283401   72678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:13.783449   72678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:14.283601   72678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:14.783053   72678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:15.283662   72678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:15.783430   72678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:16.282965   72678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:16.378013   72678 kubeadm.go:1113] duration metric: took 4.357680671s to wait for elevateKubeSystemPrivileges
	I1204 21:08:16.378093   72678 kubeadm.go:394] duration metric: took 14.572753854s to StartCluster
	I1204 21:08:16.378117   72678 settings.go:142] acquiring lock: {Name:mk51df5708ef0b8fe125ead566b8d3e857234e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:08:16.378206   72678 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:08:16.380404   72678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/kubeconfig: {Name:mk338cb7deb77a607d0c199d94a556bdfd19bef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:08:16.380708   72678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1204 21:08:16.380704   72678 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.82 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 21:08:16.380732   72678 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 21:08:16.380819   72678 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-566991"
	I1204 21:08:16.380887   72678 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-566991"
	I1204 21:08:16.380931   72678 host.go:66] Checking if "embed-certs-566991" exists ...
	I1204 21:08:16.380939   72678 config.go:182] Loaded profile config "embed-certs-566991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:08:16.380830   72678 addons.go:69] Setting default-storageclass=true in profile "embed-certs-566991"
	I1204 21:08:16.381008   72678 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-566991"
	I1204 21:08:16.381419   72678 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:08:16.381474   72678 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:08:16.381489   72678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:08:16.381525   72678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:08:16.383297   72678 out.go:177] * Verifying Kubernetes components...
	I1204 21:08:16.384666   72678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:08:16.399427   72678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41739
	I1204 21:08:16.399666   72678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38205
	I1204 21:08:16.399962   72678 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:08:16.400084   72678 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:08:16.400579   72678 main.go:141] libmachine: Using API Version  1
	I1204 21:08:16.400607   72678 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:08:16.400700   72678 main.go:141] libmachine: Using API Version  1
	I1204 21:08:16.400722   72678 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:08:16.400950   72678 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:08:16.401121   72678 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:08:16.401299   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetState
	I1204 21:08:16.401552   72678 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:08:16.401598   72678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:08:16.405211   72678 addons.go:234] Setting addon default-storageclass=true in "embed-certs-566991"
	I1204 21:08:16.405292   72678 host.go:66] Checking if "embed-certs-566991" exists ...
	I1204 21:08:16.406038   72678 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:08:16.406122   72678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:08:16.418432   72678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43541
	I1204 21:08:16.419079   72678 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:08:16.419821   72678 main.go:141] libmachine: Using API Version  1
	I1204 21:08:16.419841   72678 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:08:16.420212   72678 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:08:16.420435   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetState
	I1204 21:08:16.422352   72678 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:08:16.425112   72678 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:08:16.426069   72678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44709
	I1204 21:08:16.426340   72678 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:08:16.426356   72678 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 21:08:16.426373   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:08:16.426450   72678 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:08:16.427255   72678 main.go:141] libmachine: Using API Version  1
	I1204 21:08:16.427283   72678 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:08:16.427748   72678 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:08:16.428381   72678 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:08:16.428424   72678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:08:16.429483   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:08:16.430011   72678 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:07:47 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:08:16.430036   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:08:16.430185   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:08:16.430362   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:08:16.430553   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:08:16.430711   72678 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:08:16.445952   72678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36803
	I1204 21:08:16.446471   72678 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:08:16.446968   72678 main.go:141] libmachine: Using API Version  1
	I1204 21:08:16.446991   72678 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:08:16.447416   72678 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:08:16.447784   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetState
	I1204 21:08:16.449656   72678 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:08:16.449846   72678 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 21:08:16.449868   72678 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 21:08:16.449886   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:08:16.452557   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:08:16.452849   72678 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:07:47 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:08:16.452867   72678 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:08:16.452998   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:08:16.453117   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:08:16.453207   72678 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:08:16.453292   72678 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:08:16.679677   72678 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:08:16.679743   72678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1204 21:08:16.800987   72678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 21:08:16.916178   72678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:08:17.238797   72678 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1204 21:08:17.238994   72678 main.go:141] libmachine: Making call to close driver server
	I1204 21:08:17.239021   72678 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:08:17.239400   72678 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:08:17.239421   72678 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:08:17.239425   72678 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:08:17.239442   72678 main.go:141] libmachine: Making call to close driver server
	I1204 21:08:17.239458   72678 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:08:17.240115   72678 node_ready.go:35] waiting up to 6m0s for node "embed-certs-566991" to be "Ready" ...
	I1204 21:08:17.240414   72678 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:08:17.240426   72678 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:08:17.240442   72678 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:08:17.256226   72678 node_ready.go:49] node "embed-certs-566991" has status "Ready":"True"
	I1204 21:08:17.256249   72678 node_ready.go:38] duration metric: took 16.113392ms for node "embed-certs-566991" to be "Ready" ...
	I1204 21:08:17.256258   72678 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:08:17.259096   72678 main.go:141] libmachine: Making call to close driver server
	I1204 21:08:17.259124   72678 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:08:17.259451   72678 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:08:17.259474   72678 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:08:17.259515   72678 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:08:17.271740   72678 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace to be "Ready" ...
	I1204 21:08:17.757039   72678 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-566991" context rescaled to 1 replicas
	I1204 21:08:17.808303   72678 main.go:141] libmachine: Making call to close driver server
	I1204 21:08:17.808336   72678 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:08:17.808628   72678 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:08:17.808669   72678 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:08:17.808676   72678 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:08:17.808685   72678 main.go:141] libmachine: Making call to close driver server
	I1204 21:08:17.808693   72678 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:08:17.808938   72678 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:08:17.808975   72678 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:08:17.808989   72678 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:08:17.810921   72678 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1204 21:08:15.894264   71124 pod_ready.go:103] pod "coredns-7c65d6cfc9-g7zhj" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:17.897138   71124 pod_ready.go:103] pod "coredns-7c65d6cfc9-g7zhj" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:20.395536   71124 pod_ready.go:103] pod "coredns-7c65d6cfc9-g7zhj" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:17.812308   72678 addons.go:510] duration metric: took 1.431577708s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1204 21:08:19.277994   72678 pod_ready.go:103] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:21.794982   72678 pod_ready.go:103] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:22.391369   71124 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-g7zhj" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-g7zhj" not found
	I1204 21:08:22.391429   71124 pod_ready.go:82] duration metric: took 11.003188512s for pod "coredns-7c65d6cfc9-g7zhj" in "kube-system" namespace to be "Ready" ...
	E1204 21:08:22.391441   71124 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-g7zhj" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-g7zhj" not found
	I1204 21:08:22.391448   71124 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace to be "Ready" ...
	I1204 21:08:24.397257   71124 pod_ready.go:103] pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:24.277468   72678 pod_ready.go:103] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:26.778854   72678 pod_ready.go:103] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:27.188853   59256 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (16.213112718s)
	I1204 21:08:27.188938   59256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:08:27.203898   59256 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:08:27.213582   59256 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:08:27.223300   59256 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:08:27.223321   59256 kubeadm.go:157] found existing configuration files:
	
	I1204 21:08:27.223358   59256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:08:27.232202   59256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:08:27.232276   59256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:08:27.241363   59256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:08:27.250252   59256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:08:27.250303   59256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:08:27.259133   59256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:08:27.268175   59256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:08:27.268239   59256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:08:27.277670   59256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:08:27.286990   59256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:08:27.287048   59256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:08:27.296403   59256 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 21:08:27.340569   59256 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1204 21:08:27.340625   59256 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 21:08:27.445878   59256 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 21:08:27.446054   59256 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 21:08:27.446202   59256 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1204 21:08:27.454808   59256 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 21:08:27.456687   59256 out.go:235]   - Generating certificates and keys ...
	I1204 21:08:27.456793   59256 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 21:08:27.456884   59256 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 21:08:27.457023   59256 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1204 21:08:27.457128   59256 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1204 21:08:27.457263   59256 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1204 21:08:27.457352   59256 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1204 21:08:27.457451   59256 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1204 21:08:27.457536   59256 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1204 21:08:27.457668   59256 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1204 21:08:27.457771   59256 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1204 21:08:27.457825   59256 kubeadm.go:310] [certs] Using the existing "sa" key
	I1204 21:08:27.457915   59256 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 21:08:27.671215   59256 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 21:08:27.897750   59256 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1204 21:08:28.005069   59256 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 21:08:28.099087   59256 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 21:08:28.311253   59256 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 21:08:28.311880   59256 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 21:08:28.314578   59256 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 21:08:26.397414   71124 pod_ready.go:103] pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:28.399521   71124 pod_ready.go:103] pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:28.316533   59256 out.go:235]   - Booting up control plane ...
	I1204 21:08:28.316662   59256 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 21:08:28.316780   59256 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 21:08:28.317037   59256 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 21:08:28.335526   59256 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 21:08:28.341982   59256 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 21:08:28.342046   59256 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 21:08:28.481518   59256 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1204 21:08:28.481663   59256 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1204 21:08:29.483448   59256 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001974582s
	I1204 21:08:29.483539   59256 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1204 21:08:29.277617   72678 pod_ready.go:103] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:31.279586   72678 pod_ready.go:103] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:33.986377   59256 kubeadm.go:310] [api-check] The API server is healthy after 4.502987113s
	I1204 21:08:34.005118   59256 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1204 21:08:34.023455   59256 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1204 21:08:34.054044   59256 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1204 21:08:34.054233   59256 kubeadm.go:310] [mark-control-plane] Marking the node pause-998149 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1204 21:08:34.066313   59256 kubeadm.go:310] [bootstrap-token] Using token: xc4tcq.dtygxz0gdfvy2txn
	I1204 21:08:34.067829   59256 out.go:235]   - Configuring RBAC rules ...
	I1204 21:08:34.067956   59256 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1204 21:08:34.073198   59256 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1204 21:08:34.083524   59256 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1204 21:08:34.087117   59256 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1204 21:08:34.092501   59256 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1204 21:08:34.097860   59256 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1204 21:08:34.395239   59256 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1204 21:08:34.820061   59256 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1204 21:08:35.395146   59256 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1204 21:08:35.395190   59256 kubeadm.go:310] 
	I1204 21:08:35.395284   59256 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1204 21:08:35.395297   59256 kubeadm.go:310] 
	I1204 21:08:35.395425   59256 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1204 21:08:35.395443   59256 kubeadm.go:310] 
	I1204 21:08:35.395493   59256 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1204 21:08:35.395588   59256 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1204 21:08:35.395670   59256 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1204 21:08:35.395685   59256 kubeadm.go:310] 
	I1204 21:08:35.395764   59256 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1204 21:08:35.395774   59256 kubeadm.go:310] 
	I1204 21:08:35.395857   59256 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1204 21:08:35.395874   59256 kubeadm.go:310] 
	I1204 21:08:35.395942   59256 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1204 21:08:35.396052   59256 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1204 21:08:35.396160   59256 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1204 21:08:35.396172   59256 kubeadm.go:310] 
	I1204 21:08:35.396287   59256 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1204 21:08:35.396385   59256 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1204 21:08:35.396396   59256 kubeadm.go:310] 
	I1204 21:08:35.396502   59256 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xc4tcq.dtygxz0gdfvy2txn \
	I1204 21:08:35.396626   59256 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 \
	I1204 21:08:35.396662   59256 kubeadm.go:310] 	--control-plane 
	I1204 21:08:35.396670   59256 kubeadm.go:310] 
	I1204 21:08:35.396773   59256 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1204 21:08:35.396785   59256 kubeadm.go:310] 
	I1204 21:08:35.396888   59256 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xc4tcq.dtygxz0gdfvy2txn \
	I1204 21:08:35.397034   59256 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 
	I1204 21:08:35.398051   59256 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 21:08:35.398135   59256 cni.go:84] Creating CNI manager for ""
	I1204 21:08:35.398152   59256 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:08:35.399886   59256 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 21:08:30.898268   71124 pod_ready.go:103] pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:33.398208   71124 pod_ready.go:103] pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:35.398254   71124 pod_ready.go:103] pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:35.401133   59256 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 21:08:35.412531   59256 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1204 21:08:35.431751   59256 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 21:08:35.431842   59256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:35.431849   59256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes pause-998149 minikube.k8s.io/updated_at=2024_12_04T21_08_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59 minikube.k8s.io/name=pause-998149 minikube.k8s.io/primary=true
	I1204 21:08:35.567045   59256 ops.go:34] apiserver oom_adj: -16
	I1204 21:08:35.585450   59256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:36.085573   59256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:33.778383   72678 pod_ready.go:103] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:36.276959   72678 pod_ready.go:103] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:36.585830   59256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:37.085688   59256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:37.585500   59256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:38.086326   59256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:38.586132   59256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:39.086141   59256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:39.585741   59256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:08:39.690656   59256 kubeadm.go:1113] duration metric: took 4.258882564s to wait for elevateKubeSystemPrivileges
	I1204 21:08:39.690699   59256 kubeadm.go:394] duration metric: took 4m45.203022063s to StartCluster
	I1204 21:08:39.690716   59256 settings.go:142] acquiring lock: {Name:mk51df5708ef0b8fe125ead566b8d3e857234e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:08:39.690831   59256 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:08:39.692398   59256 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/kubeconfig: {Name:mk338cb7deb77a607d0c199d94a556bdfd19bef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:08:39.692660   59256 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.167 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 21:08:39.692727   59256 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 21:08:39.692909   59256 config.go:182] Loaded profile config "pause-998149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:08:39.694779   59256 out.go:177] * Verifying Kubernetes components...
	I1204 21:08:39.694782   59256 out.go:177] * Enabled addons: 
	I1204 21:08:37.398430   71124 pod_ready.go:103] pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:39.400020   71124 pod_ready.go:103] pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:39.696270   59256 addons.go:510] duration metric: took 3.552911ms for enable addons: enabled=[]
	I1204 21:08:39.696294   59256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:08:39.857232   59256 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:08:39.877674   59256 node_ready.go:35] waiting up to 6m0s for node "pause-998149" to be "Ready" ...
	I1204 21:08:39.898413   59256 node_ready.go:49] node "pause-998149" has status "Ready":"True"
	I1204 21:08:39.898437   59256 node_ready.go:38] duration metric: took 20.724831ms for node "pause-998149" to be "Ready" ...
	I1204 21:08:39.898446   59256 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:08:39.908341   59256 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-998149" in "kube-system" namespace to be "Ready" ...
	I1204 21:08:38.278077   72678 pod_ready.go:103] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:40.278513   72678 pod_ready.go:103] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:41.898995   71124 pod_ready.go:103] pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:44.400027   71124 pod_ready.go:103] pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:41.916228   59256 pod_ready.go:103] pod "etcd-pause-998149" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:44.416909   59256 pod_ready.go:103] pod "etcd-pause-998149" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:45.920521   59256 pod_ready.go:93] pod "etcd-pause-998149" in "kube-system" namespace has status "Ready":"True"
	I1204 21:08:45.920546   59256 pod_ready.go:82] duration metric: took 6.012180216s for pod "etcd-pause-998149" in "kube-system" namespace to be "Ready" ...
	I1204 21:08:45.920557   59256 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-998149" in "kube-system" namespace to be "Ready" ...
	I1204 21:08:42.778432   72678 pod_ready.go:103] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:44.779362   72678 pod_ready.go:103] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:47.926658   59256 pod_ready.go:103] pod "kube-apiserver-pause-998149" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:48.427851   59256 pod_ready.go:93] pod "kube-apiserver-pause-998149" in "kube-system" namespace has status "Ready":"True"
	I1204 21:08:48.427877   59256 pod_ready.go:82] duration metric: took 2.507313013s for pod "kube-apiserver-pause-998149" in "kube-system" namespace to be "Ready" ...
	I1204 21:08:48.427889   59256 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-998149" in "kube-system" namespace to be "Ready" ...
	I1204 21:08:48.934599   59256 pod_ready.go:93] pod "kube-controller-manager-pause-998149" in "kube-system" namespace has status "Ready":"True"
	I1204 21:08:48.934624   59256 pod_ready.go:82] duration metric: took 506.730073ms for pod "kube-controller-manager-pause-998149" in "kube-system" namespace to be "Ready" ...
	I1204 21:08:48.934635   59256 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7pttk" in "kube-system" namespace to be "Ready" ...
	I1204 21:08:48.939665   59256 pod_ready.go:93] pod "kube-proxy-7pttk" in "kube-system" namespace has status "Ready":"True"
	I1204 21:08:48.939688   59256 pod_ready.go:82] duration metric: took 5.04711ms for pod "kube-proxy-7pttk" in "kube-system" namespace to be "Ready" ...
	I1204 21:08:48.939697   59256 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-998149" in "kube-system" namespace to be "Ready" ...
	I1204 21:08:48.943862   59256 pod_ready.go:93] pod "kube-scheduler-pause-998149" in "kube-system" namespace has status "Ready":"True"
	I1204 21:08:48.943882   59256 pod_ready.go:82] duration metric: took 4.179617ms for pod "kube-scheduler-pause-998149" in "kube-system" namespace to be "Ready" ...
	I1204 21:08:48.943889   59256 pod_ready.go:39] duration metric: took 9.045432923s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:08:48.943905   59256 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:08:48.943951   59256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:08:48.958885   59256 api_server.go:72] duration metric: took 9.266187782s to wait for apiserver process to appear ...
	I1204 21:08:48.958916   59256 api_server.go:88] waiting for apiserver healthz status ...
	I1204 21:08:48.958939   59256 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I1204 21:08:48.963978   59256 api_server.go:279] https://192.168.50.167:8443/healthz returned 200:
	ok
	I1204 21:08:48.965158   59256 api_server.go:141] control plane version: v1.31.2
	I1204 21:08:48.965186   59256 api_server.go:131] duration metric: took 6.261921ms to wait for apiserver health ...
	I1204 21:08:48.965197   59256 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 21:08:48.973273   59256 system_pods.go:59] 7 kube-system pods found
	I1204 21:08:48.973310   59256 system_pods.go:61] "coredns-7c65d6cfc9-26bcn" [c5953763-c59b-4ca3-9c1f-fc0dbfb8d3f6] Running
	I1204 21:08:48.973319   59256 system_pods.go:61] "coredns-7c65d6cfc9-kfdvp" [935bb8b3-28ee-47d5-a525-b5bc7d882a63] Running
	I1204 21:08:48.973325   59256 system_pods.go:61] "etcd-pause-998149" [e836a3f5-4def-4413-a47c-d61eb18454e0] Running
	I1204 21:08:48.973330   59256 system_pods.go:61] "kube-apiserver-pause-998149" [51be41ea-85c5-4ef3-b980-d29c0ff81e50] Running
	I1204 21:08:48.973336   59256 system_pods.go:61] "kube-controller-manager-pause-998149" [10333131-5aab-4acb-8cf6-474a76909b71] Running
	I1204 21:08:48.973341   59256 system_pods.go:61] "kube-proxy-7pttk" [b9aa9037-580d-4c10-ba44-1e3925516a2a] Running
	I1204 21:08:48.973345   59256 system_pods.go:61] "kube-scheduler-pause-998149" [2eb295d4-abd3-4fd9-a972-f67b8887580b] Running
	I1204 21:08:48.973353   59256 system_pods.go:74] duration metric: took 8.149225ms to wait for pod list to return data ...
	I1204 21:08:48.973368   59256 default_sa.go:34] waiting for default service account to be created ...
	I1204 21:08:48.976388   59256 default_sa.go:45] found service account: "default"
	I1204 21:08:48.976410   59256 default_sa.go:55] duration metric: took 3.03664ms for default service account to be created ...
	I1204 21:08:48.976418   59256 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 21:08:49.027461   59256 system_pods.go:86] 7 kube-system pods found
	I1204 21:08:49.027495   59256 system_pods.go:89] "coredns-7c65d6cfc9-26bcn" [c5953763-c59b-4ca3-9c1f-fc0dbfb8d3f6] Running
	I1204 21:08:49.027502   59256 system_pods.go:89] "coredns-7c65d6cfc9-kfdvp" [935bb8b3-28ee-47d5-a525-b5bc7d882a63] Running
	I1204 21:08:49.027506   59256 system_pods.go:89] "etcd-pause-998149" [e836a3f5-4def-4413-a47c-d61eb18454e0] Running
	I1204 21:08:49.027512   59256 system_pods.go:89] "kube-apiserver-pause-998149" [51be41ea-85c5-4ef3-b980-d29c0ff81e50] Running
	I1204 21:08:49.027517   59256 system_pods.go:89] "kube-controller-manager-pause-998149" [10333131-5aab-4acb-8cf6-474a76909b71] Running
	I1204 21:08:49.027523   59256 system_pods.go:89] "kube-proxy-7pttk" [b9aa9037-580d-4c10-ba44-1e3925516a2a] Running
	I1204 21:08:49.027529   59256 system_pods.go:89] "kube-scheduler-pause-998149" [2eb295d4-abd3-4fd9-a972-f67b8887580b] Running
	I1204 21:08:49.027538   59256 system_pods.go:126] duration metric: took 51.113088ms to wait for k8s-apps to be running ...
	I1204 21:08:49.027550   59256 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 21:08:49.027606   59256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:08:49.042605   59256 system_svc.go:56] duration metric: took 15.043872ms WaitForService to wait for kubelet
	I1204 21:08:49.042644   59256 kubeadm.go:582] duration metric: took 9.349953022s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 21:08:49.042670   59256 node_conditions.go:102] verifying NodePressure condition ...
	I1204 21:08:49.225365   59256 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 21:08:49.225400   59256 node_conditions.go:123] node cpu capacity is 2
	I1204 21:08:49.225414   59256 node_conditions.go:105] duration metric: took 182.738924ms to run NodePressure ...
	I1204 21:08:49.225433   59256 start.go:241] waiting for startup goroutines ...
	I1204 21:08:49.225443   59256 start.go:246] waiting for cluster config update ...
	I1204 21:08:49.225454   59256 start.go:255] writing updated cluster config ...
	I1204 21:08:49.225788   59256 ssh_runner.go:195] Run: rm -f paused
	I1204 21:08:49.274311   59256 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1204 21:08:49.277409   59256 out.go:177] * Done! kubectl is now configured to use "pause-998149" cluster and "default" namespace by default
	W1204 21:08:49.287265   59256 root.go:91] failed to log command end to audit: failed to find a log row with id equals to ce586688-68db-401e-9e1f-efff798fec8c
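	The pod_ready.go lines above show the start sequence polling each system-critical pod until its Ready condition reports True (roughly every 500ms–2s, up to the 6m0s budget). As an illustrative sketch only, and not minikube's actual implementation, a check of that kind could be written with client-go as below; the kubeconfig path and pod name are placeholders taken from this log.

	// Illustrative sketch: poll a pod's Ready condition with client-go.
	// Assumes a reachable cluster via the given kubeconfig; names are placeholders.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute) // mirrors the 6m0s wait in the log
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-pause-998149", metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						fmt.Println("pod is Ready")
						return
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for Ready")
	}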
	I1204 21:08:46.897623   71124 pod_ready.go:103] pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace has status "Ready":"False"
	I1204 21:08:48.898747   71124 pod_ready.go:93] pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace has status "Ready":"True"
	I1204 21:08:48.898774   71124 pod_ready.go:82] duration metric: took 26.507316624s for pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace to be "Ready" ...
	I1204 21:08:48.898787   71124 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:08:48.903721   71124 pod_ready.go:93] pod "etcd-no-preload-534766" in "kube-system" namespace has status "Ready":"True"
	I1204 21:08:48.903743   71124 pod_ready.go:82] duration metric: took 4.94778ms for pod "etcd-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:08:48.903756   71124 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:08:48.908430   71124 pod_ready.go:93] pod "kube-apiserver-no-preload-534766" in "kube-system" namespace has status "Ready":"True"
	I1204 21:08:48.908458   71124 pod_ready.go:82] duration metric: took 4.694446ms for pod "kube-apiserver-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:08:48.908473   71124 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:08:48.912423   71124 pod_ready.go:93] pod "kube-controller-manager-no-preload-534766" in "kube-system" namespace has status "Ready":"True"
	I1204 21:08:48.912457   71124 pod_ready.go:82] duration metric: took 3.974182ms for pod "kube-controller-manager-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:08:48.912470   71124 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zb946" in "kube-system" namespace to be "Ready" ...
	I1204 21:08:48.916313   71124 pod_ready.go:93] pod "kube-proxy-zb946" in "kube-system" namespace has status "Ready":"True"
	I1204 21:08:48.916334   71124 pod_ready.go:82] duration metric: took 3.857127ms for pod "kube-proxy-zb946" in "kube-system" namespace to be "Ready" ...
	I1204 21:08:48.916345   71124 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:08:49.295344   71124 pod_ready.go:93] pod "kube-scheduler-no-preload-534766" in "kube-system" namespace has status "Ready":"True"
	I1204 21:08:49.295422   71124 pod_ready.go:82] duration metric: took 379.065775ms for pod "kube-scheduler-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:08:49.295446   71124 pod_ready.go:39] duration metric: took 37.937080502s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:08:49.295475   71124 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:08:49.295552   71124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:08:49.316403   71124 api_server.go:72] duration metric: took 39.213250399s to wait for apiserver process to appear ...
	I1204 21:08:49.316433   71124 api_server.go:88] waiting for apiserver healthz status ...
	I1204 21:08:49.316456   71124 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I1204 21:08:49.320903   71124 api_server.go:279] https://192.168.61.174:8443/healthz returned 200:
	ok
	I1204 21:08:49.322224   71124 api_server.go:141] control plane version: v1.31.2
	I1204 21:08:49.322250   71124 api_server.go:131] duration metric: took 5.80934ms to wait for apiserver health ...
	I1204 21:08:49.322260   71124 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 21:08:49.500736   71124 system_pods.go:59] 7 kube-system pods found
	I1204 21:08:49.500779   71124 system_pods.go:61] "coredns-7c65d6cfc9-kz2h6" [cf1cadfd-b230-48e0-8b3a-e082fed911a8] Running
	I1204 21:08:49.500788   71124 system_pods.go:61] "etcd-no-preload-534766" [4150ee73-7ae8-40c0-a259-87375d6e809c] Running
	I1204 21:08:49.500795   71124 system_pods.go:61] "kube-apiserver-no-preload-534766" [28c85f04-e634-48d2-a996-a1cb3ffb18cf] Running
	I1204 21:08:49.500802   71124 system_pods.go:61] "kube-controller-manager-no-preload-534766" [237872b9-1c2a-4c3e-b26a-d2581d08c936] Running
	I1204 21:08:49.500814   71124 system_pods.go:61] "kube-proxy-zb946" [871adaff-d1f6-4f8a-a7db-ec3f861bd9e3] Running
	I1204 21:08:49.500820   71124 system_pods.go:61] "kube-scheduler-no-preload-534766" [b00444c4-8f8e-4c76-a74f-9a57c91cb10d] Running
	I1204 21:08:49.500826   71124 system_pods.go:61] "storage-provisioner" [062f6e56-6b2d-4ac4-acfd-881ff5171396] Running
	I1204 21:08:49.500833   71124 system_pods.go:74] duration metric: took 178.566601ms to wait for pod list to return data ...
	I1204 21:08:49.500848   71124 default_sa.go:34] waiting for default service account to be created ...
	I1204 21:08:49.695896   71124 default_sa.go:45] found service account: "default"
	I1204 21:08:49.695924   71124 default_sa.go:55] duration metric: took 195.068467ms for default service account to be created ...
	I1204 21:08:49.695935   71124 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 21:08:49.897789   71124 system_pods.go:86] 7 kube-system pods found
	I1204 21:08:49.897819   71124 system_pods.go:89] "coredns-7c65d6cfc9-kz2h6" [cf1cadfd-b230-48e0-8b3a-e082fed911a8] Running
	I1204 21:08:49.897824   71124 system_pods.go:89] "etcd-no-preload-534766" [4150ee73-7ae8-40c0-a259-87375d6e809c] Running
	I1204 21:08:49.897829   71124 system_pods.go:89] "kube-apiserver-no-preload-534766" [28c85f04-e634-48d2-a996-a1cb3ffb18cf] Running
	I1204 21:08:49.897832   71124 system_pods.go:89] "kube-controller-manager-no-preload-534766" [237872b9-1c2a-4c3e-b26a-d2581d08c936] Running
	I1204 21:08:49.897836   71124 system_pods.go:89] "kube-proxy-zb946" [871adaff-d1f6-4f8a-a7db-ec3f861bd9e3] Running
	I1204 21:08:49.897840   71124 system_pods.go:89] "kube-scheduler-no-preload-534766" [b00444c4-8f8e-4c76-a74f-9a57c91cb10d] Running
	I1204 21:08:49.897844   71124 system_pods.go:89] "storage-provisioner" [062f6e56-6b2d-4ac4-acfd-881ff5171396] Running
	I1204 21:08:49.897850   71124 system_pods.go:126] duration metric: took 201.909039ms to wait for k8s-apps to be running ...
	I1204 21:08:49.897857   71124 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 21:08:49.897913   71124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:08:49.914406   71124 system_svc.go:56] duration metric: took 16.540815ms WaitForService to wait for kubelet
	I1204 21:08:49.914434   71124 kubeadm.go:582] duration metric: took 39.81128426s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 21:08:49.914459   71124 node_conditions.go:102] verifying NodePressure condition ...
	I1204 21:08:50.095540   71124 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 21:08:50.095572   71124 node_conditions.go:123] node cpu capacity is 2
	I1204 21:08:50.095587   71124 node_conditions.go:105] duration metric: took 181.122604ms to run NodePressure ...
	I1204 21:08:50.095601   71124 start.go:241] waiting for startup goroutines ...
	I1204 21:08:50.095611   71124 start.go:246] waiting for cluster config update ...
	I1204 21:08:50.095623   71124 start.go:255] writing updated cluster config ...
	I1204 21:08:50.095958   71124 ssh_runner.go:195] Run: rm -f paused
	I1204 21:08:50.148313   71124 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1204 21:08:50.150504   71124 out.go:177] * Done! kubectl is now configured to use "no-preload-534766" cluster and "default" namespace by default
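	The api_server.go lines record the health probe each start performs before declaring success: an HTTPS GET against https://<node-ip>:8443/healthz that must return 200 with body "ok". A minimal sketch of such a probe follows, for illustration only; the address is copied from the log above, and TLS verification is skipped here for brevity, whereas the real check uses the cluster's CA-backed client configuration.

	// Illustrative sketch: probe the apiserver /healthz endpoint.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // brevity only
		}
		resp, err := client.Get("https://192.168.61.174:8443/healthz") // address taken from the log above
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("status=%d body=%q\n", resp.StatusCode, string(body)) // expect 200 and "ok"
	}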
	
	
	==> CRI-O <==
	Dec 04 21:08:51 pause-998149 crio[2728]: time="2024-12-04 21:08:51.861730694Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733346531861707522,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8ea3dd01-2330-4ecf-a680-4b681df85b1a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:08:51 pause-998149 crio[2728]: time="2024-12-04 21:08:51.862781211Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5341df3f-aad5-49ef-817f-668d62372f66 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:08:51 pause-998149 crio[2728]: time="2024-12-04 21:08:51.862857020Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5341df3f-aad5-49ef-817f-668d62372f66 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:08:51 pause-998149 crio[2728]: time="2024-12-04 21:08:51.863047637Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3fc6609ca27763570f8ad2da7d746c3d800ac2515b1be31fd14eb994bd3b65ab,PodSandboxId:b698035ef5a94c25406609f3a6952c08ebf31a158f08a0cd9d9279d08576204e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733346521593738135,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7pttk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9aa9037-580d-4c10-ba44-1e3925516a2a,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e112e39dfc622e08869aaa2756f0b607c3333955840fd4092aea1f6c1007a84e,PodSandboxId:24ba88c8cf40cdedfa7df0ef11e747823b1fc5d80303ae4928e48d1a31fb7357,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733346521115530306,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-26bcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5953763-c59b-4ca3-9c1f-fc0dbfb8d3f6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:205acfde19589c730fdee388bd6aa489f23d896e3e1e055f59bb7417dfadbcd2,PodSandboxId:23cc6ff77c824e2e2adb9f4faee61ee1fecd0218d9e6621f66dc6e745341d90a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733346521011282642,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kfdvp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 935bb8b3-28ee-47d5-a525-b5b
c7d882a63,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb28c2641a67b3f413988577a36458abd1e8bf77142c0c9fc2364e8db36c6575,PodSandboxId:fcbf293fe00ee41e6c08819272fcbc02d5e91bc68fc0444303b28f12fc18c861,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733346509712968162,L
abels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-998149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b5cadafa3999482c9c56cd0d243530b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c9becd8f0cc465da0169f24117ca9b03d91d8a4e9e2746053330520e8a1d0d7,PodSandboxId:61aa6a222b05f43e34965821b57c11a5a855ce2495f6bbc1b3223df7d248cf7f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733346509713854546,Labels:map[string]s
tring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-998149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92270b16e848466f547773d21b0a2052,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46cc096ccdfdb089a2f1623d27802b3d52290bff0c58ee639e7bbfeae9795497,PodSandboxId:990ab76226072515366431e87ab70657c8b5bd44f4e8fbd8b5b3776ad4a74e8e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733346509676176448,Labels:map[string]string{io.
kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-998149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7420b9e88cd9b0b347ca4677458e1eb3,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53022095ca4eb2fe9e14b6a57600ca675b121bfcb657b57891a14209084ad442,PodSandboxId:b85dcb73b65939f0d297f9bc02b07b3b9a58180b73507dbb2510cf1ad66d3cf4,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733346509650309427,Labels:map[string]string{io.kubernetes
.container.name: etcd,io.kubernetes.pod.name: etcd-pause-998149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3d8c9424ffc9fb9d2107a63eadf4f32,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebbf0704fb917629adae45d88b595b95bada24ec4c33263876e5738c9467a723,PodSandboxId:955e5ffba3992e686e8c4b005cfb9a0fbcd1b7f3a41328f34caa27fed4e51c34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733346237116731688,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kube
rnetes.pod.name: kube-apiserver-pause-998149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92270b16e848466f547773d21b0a2052,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5341df3f-aad5-49ef-817f-668d62372f66 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:08:51 pause-998149 crio[2728]: time="2024-12-04 21:08:51.908863084Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=75d12a6b-57bc-4ef3-bbb2-b3e55a528e2f name=/runtime.v1.RuntimeService/Version
	Dec 04 21:08:51 pause-998149 crio[2728]: time="2024-12-04 21:08:51.908958606Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=75d12a6b-57bc-4ef3-bbb2-b3e55a528e2f name=/runtime.v1.RuntimeService/Version
	Dec 04 21:08:51 pause-998149 crio[2728]: time="2024-12-04 21:08:51.910329786Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f81b7a96-d6cd-42a9-b5db-f74b93f796b3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:08:51 pause-998149 crio[2728]: time="2024-12-04 21:08:51.910874558Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733346531910841990,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f81b7a96-d6cd-42a9-b5db-f74b93f796b3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:08:51 pause-998149 crio[2728]: time="2024-12-04 21:08:51.911700408Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=96c9d049-eafe-462b-ae11-9e52a68cae75 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:08:51 pause-998149 crio[2728]: time="2024-12-04 21:08:51.911812450Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=96c9d049-eafe-462b-ae11-9e52a68cae75 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:08:51 pause-998149 crio[2728]: time="2024-12-04 21:08:51.912101841Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3fc6609ca27763570f8ad2da7d746c3d800ac2515b1be31fd14eb994bd3b65ab,PodSandboxId:b698035ef5a94c25406609f3a6952c08ebf31a158f08a0cd9d9279d08576204e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733346521593738135,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7pttk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9aa9037-580d-4c10-ba44-1e3925516a2a,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e112e39dfc622e08869aaa2756f0b607c3333955840fd4092aea1f6c1007a84e,PodSandboxId:24ba88c8cf40cdedfa7df0ef11e747823b1fc5d80303ae4928e48d1a31fb7357,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733346521115530306,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-26bcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5953763-c59b-4ca3-9c1f-fc0dbfb8d3f6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:205acfde19589c730fdee388bd6aa489f23d896e3e1e055f59bb7417dfadbcd2,PodSandboxId:23cc6ff77c824e2e2adb9f4faee61ee1fecd0218d9e6621f66dc6e745341d90a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733346521011282642,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kfdvp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 935bb8b3-28ee-47d5-a525-b5b
c7d882a63,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb28c2641a67b3f413988577a36458abd1e8bf77142c0c9fc2364e8db36c6575,PodSandboxId:fcbf293fe00ee41e6c08819272fcbc02d5e91bc68fc0444303b28f12fc18c861,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733346509712968162,L
abels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-998149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b5cadafa3999482c9c56cd0d243530b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c9becd8f0cc465da0169f24117ca9b03d91d8a4e9e2746053330520e8a1d0d7,PodSandboxId:61aa6a222b05f43e34965821b57c11a5a855ce2495f6bbc1b3223df7d248cf7f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733346509713854546,Labels:map[string]s
tring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-998149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92270b16e848466f547773d21b0a2052,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46cc096ccdfdb089a2f1623d27802b3d52290bff0c58ee639e7bbfeae9795497,PodSandboxId:990ab76226072515366431e87ab70657c8b5bd44f4e8fbd8b5b3776ad4a74e8e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733346509676176448,Labels:map[string]string{io.
kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-998149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7420b9e88cd9b0b347ca4677458e1eb3,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53022095ca4eb2fe9e14b6a57600ca675b121bfcb657b57891a14209084ad442,PodSandboxId:b85dcb73b65939f0d297f9bc02b07b3b9a58180b73507dbb2510cf1ad66d3cf4,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733346509650309427,Labels:map[string]string{io.kubernetes
.container.name: etcd,io.kubernetes.pod.name: etcd-pause-998149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3d8c9424ffc9fb9d2107a63eadf4f32,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebbf0704fb917629adae45d88b595b95bada24ec4c33263876e5738c9467a723,PodSandboxId:955e5ffba3992e686e8c4b005cfb9a0fbcd1b7f3a41328f34caa27fed4e51c34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733346237116731688,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kube
rnetes.pod.name: kube-apiserver-pause-998149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92270b16e848466f547773d21b0a2052,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=96c9d049-eafe-462b-ae11-9e52a68cae75 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:08:51 pause-998149 crio[2728]: time="2024-12-04 21:08:51.951106942Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eb6be69f-dc08-4afc-a52f-c071fc6bcad1 name=/runtime.v1.RuntimeService/Version
	Dec 04 21:08:51 pause-998149 crio[2728]: time="2024-12-04 21:08:51.951250020Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eb6be69f-dc08-4afc-a52f-c071fc6bcad1 name=/runtime.v1.RuntimeService/Version
	Dec 04 21:08:51 pause-998149 crio[2728]: time="2024-12-04 21:08:51.952708360Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f3b2e0b9-f969-4551-ba76-6f9ad9ad9bd7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:08:51 pause-998149 crio[2728]: time="2024-12-04 21:08:51.953174270Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733346531953145068,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f3b2e0b9-f969-4551-ba76-6f9ad9ad9bd7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:08:51 pause-998149 crio[2728]: time="2024-12-04 21:08:51.953755740Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=50b56794-bc01-4539-b197-00f02a67c8e2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:08:51 pause-998149 crio[2728]: time="2024-12-04 21:08:51.953825326Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=50b56794-bc01-4539-b197-00f02a67c8e2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:08:51 pause-998149 crio[2728]: time="2024-12-04 21:08:51.954077172Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3fc6609ca27763570f8ad2da7d746c3d800ac2515b1be31fd14eb994bd3b65ab,PodSandboxId:b698035ef5a94c25406609f3a6952c08ebf31a158f08a0cd9d9279d08576204e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733346521593738135,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7pttk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9aa9037-580d-4c10-ba44-1e3925516a2a,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e112e39dfc622e08869aaa2756f0b607c3333955840fd4092aea1f6c1007a84e,PodSandboxId:24ba88c8cf40cdedfa7df0ef11e747823b1fc5d80303ae4928e48d1a31fb7357,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733346521115530306,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-26bcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5953763-c59b-4ca3-9c1f-fc0dbfb8d3f6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:205acfde19589c730fdee388bd6aa489f23d896e3e1e055f59bb7417dfadbcd2,PodSandboxId:23cc6ff77c824e2e2adb9f4faee61ee1fecd0218d9e6621f66dc6e745341d90a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733346521011282642,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kfdvp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 935bb8b3-28ee-47d5-a525-b5b
c7d882a63,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb28c2641a67b3f413988577a36458abd1e8bf77142c0c9fc2364e8db36c6575,PodSandboxId:fcbf293fe00ee41e6c08819272fcbc02d5e91bc68fc0444303b28f12fc18c861,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733346509712968162,L
abels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-998149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b5cadafa3999482c9c56cd0d243530b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c9becd8f0cc465da0169f24117ca9b03d91d8a4e9e2746053330520e8a1d0d7,PodSandboxId:61aa6a222b05f43e34965821b57c11a5a855ce2495f6bbc1b3223df7d248cf7f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733346509713854546,Labels:map[string]s
tring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-998149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92270b16e848466f547773d21b0a2052,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46cc096ccdfdb089a2f1623d27802b3d52290bff0c58ee639e7bbfeae9795497,PodSandboxId:990ab76226072515366431e87ab70657c8b5bd44f4e8fbd8b5b3776ad4a74e8e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733346509676176448,Labels:map[string]string{io.
kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-998149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7420b9e88cd9b0b347ca4677458e1eb3,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53022095ca4eb2fe9e14b6a57600ca675b121bfcb657b57891a14209084ad442,PodSandboxId:b85dcb73b65939f0d297f9bc02b07b3b9a58180b73507dbb2510cf1ad66d3cf4,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733346509650309427,Labels:map[string]string{io.kubernetes
.container.name: etcd,io.kubernetes.pod.name: etcd-pause-998149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3d8c9424ffc9fb9d2107a63eadf4f32,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebbf0704fb917629adae45d88b595b95bada24ec4c33263876e5738c9467a723,PodSandboxId:955e5ffba3992e686e8c4b005cfb9a0fbcd1b7f3a41328f34caa27fed4e51c34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733346237116731688,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kube
rnetes.pod.name: kube-apiserver-pause-998149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92270b16e848466f547773d21b0a2052,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=50b56794-bc01-4539-b197-00f02a67c8e2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:08:51 pause-998149 crio[2728]: time="2024-12-04 21:08:51.992320889Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=47c30952-c365-48d1-8857-fcd59aaef306 name=/runtime.v1.RuntimeService/Version
	Dec 04 21:08:51 pause-998149 crio[2728]: time="2024-12-04 21:08:51.992397513Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=47c30952-c365-48d1-8857-fcd59aaef306 name=/runtime.v1.RuntimeService/Version
	Dec 04 21:08:51 pause-998149 crio[2728]: time="2024-12-04 21:08:51.995056311Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0468a3ec-541b-4f28-80d4-f15a1027b147 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:08:51 pause-998149 crio[2728]: time="2024-12-04 21:08:51.995521574Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733346531995489425,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0468a3ec-541b-4f28-80d4-f15a1027b147 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:08:51 pause-998149 crio[2728]: time="2024-12-04 21:08:51.997249340Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2b2d7013-4fa4-4793-8801-48abe83ebdbc name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:08:51 pause-998149 crio[2728]: time="2024-12-04 21:08:51.997306231Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2b2d7013-4fa4-4793-8801-48abe83ebdbc name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:08:51 pause-998149 crio[2728]: time="2024-12-04 21:08:51.997479619Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3fc6609ca27763570f8ad2da7d746c3d800ac2515b1be31fd14eb994bd3b65ab,PodSandboxId:b698035ef5a94c25406609f3a6952c08ebf31a158f08a0cd9d9279d08576204e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733346521593738135,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7pttk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9aa9037-580d-4c10-ba44-1e3925516a2a,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e112e39dfc622e08869aaa2756f0b607c3333955840fd4092aea1f6c1007a84e,PodSandboxId:24ba88c8cf40cdedfa7df0ef11e747823b1fc5d80303ae4928e48d1a31fb7357,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733346521115530306,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-26bcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5953763-c59b-4ca3-9c1f-fc0dbfb8d3f6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:205acfde19589c730fdee388bd6aa489f23d896e3e1e055f59bb7417dfadbcd2,PodSandboxId:23cc6ff77c824e2e2adb9f4faee61ee1fecd0218d9e6621f66dc6e745341d90a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733346521011282642,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kfdvp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 935bb8b3-28ee-47d5-a525-b5b
c7d882a63,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb28c2641a67b3f413988577a36458abd1e8bf77142c0c9fc2364e8db36c6575,PodSandboxId:fcbf293fe00ee41e6c08819272fcbc02d5e91bc68fc0444303b28f12fc18c861,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733346509712968162,L
abels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-998149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b5cadafa3999482c9c56cd0d243530b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c9becd8f0cc465da0169f24117ca9b03d91d8a4e9e2746053330520e8a1d0d7,PodSandboxId:61aa6a222b05f43e34965821b57c11a5a855ce2495f6bbc1b3223df7d248cf7f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733346509713854546,Labels:map[string]s
tring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-998149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92270b16e848466f547773d21b0a2052,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46cc096ccdfdb089a2f1623d27802b3d52290bff0c58ee639e7bbfeae9795497,PodSandboxId:990ab76226072515366431e87ab70657c8b5bd44f4e8fbd8b5b3776ad4a74e8e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733346509676176448,Labels:map[string]string{io.
kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-998149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7420b9e88cd9b0b347ca4677458e1eb3,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53022095ca4eb2fe9e14b6a57600ca675b121bfcb657b57891a14209084ad442,PodSandboxId:b85dcb73b65939f0d297f9bc02b07b3b9a58180b73507dbb2510cf1ad66d3cf4,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733346509650309427,Labels:map[string]string{io.kubernetes
.container.name: etcd,io.kubernetes.pod.name: etcd-pause-998149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3d8c9424ffc9fb9d2107a63eadf4f32,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebbf0704fb917629adae45d88b595b95bada24ec4c33263876e5738c9467a723,PodSandboxId:955e5ffba3992e686e8c4b005cfb9a0fbcd1b7f3a41328f34caa27fed4e51c34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733346237116731688,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kube
rnetes.pod.name: kube-apiserver-pause-998149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92270b16e848466f547773d21b0a2052,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2b2d7013-4fa4-4793-8801-48abe83ebdbc name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3fc6609ca2776       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   10 seconds ago      Running             kube-proxy                0                   b698035ef5a94       kube-proxy-7pttk
	e112e39dfc622       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   10 seconds ago      Running             coredns                   0                   24ba88c8cf40c       coredns-7c65d6cfc9-26bcn
	205acfde19589       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   11 seconds ago      Running             coredns                   0                   23cc6ff77c824       coredns-7c65d6cfc9-kfdvp
	6c9becd8f0cc4       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   22 seconds ago      Running             kube-apiserver            3                   61aa6a222b05f       kube-apiserver-pause-998149
	bb28c2641a67b       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   22 seconds ago      Running             kube-scheduler            3                   fcbf293fe00ee       kube-scheduler-pause-998149
	46cc096ccdfdb       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   22 seconds ago      Running             kube-controller-manager   3                   990ab76226072       kube-controller-manager-pause-998149
	53022095ca4eb       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   22 seconds ago      Running             etcd                      3                   b85dcb73b6593       etcd-pause-998149
	ebbf0704fb917       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   4 minutes ago       Exited              kube-apiserver            2                   955e5ffba3992       kube-apiserver-pause-998149
	
	
	==> coredns [205acfde19589c730fdee388bd6aa489f23d896e3e1e055f59bb7417dfadbcd2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [e112e39dfc622e08869aaa2756f0b607c3333955840fd4092aea1f6c1007a84e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               pause-998149
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-998149
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59
	                    minikube.k8s.io/name=pause-998149
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_04T21_08_35_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 21:08:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-998149
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 21:08:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 21:08:45 +0000   Wed, 04 Dec 2024 21:08:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 21:08:45 +0000   Wed, 04 Dec 2024 21:08:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 21:08:45 +0000   Wed, 04 Dec 2024 21:08:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 21:08:45 +0000   Wed, 04 Dec 2024 21:08:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.167
	  Hostname:    pause-998149
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 f85cb28c4cff47e68afdc5807112b2db
	  System UUID:                f85cb28c-4cff-47e6-8afd-c5807112b2db
	  Boot ID:                    4186bd33-7707-490f-85b7-576317de36f8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-26bcn                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12s
	  kube-system                 coredns-7c65d6cfc9-kfdvp                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12s
	  kube-system                 etcd-pause-998149                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         18s
	  kube-system                 kube-apiserver-pause-998149             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18s
	  kube-system                 kube-controller-manager-pause-998149    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18s
	  kube-system                 kube-proxy-7pttk                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 kube-scheduler-pause-998149             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (12%)  340Mi (17%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 10s   kube-proxy       
	  Normal  Starting                 18s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  18s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  18s   kubelet          Node pause-998149 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18s   kubelet          Node pause-998149 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18s   kubelet          Node pause-998149 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13s   node-controller  Node pause-998149 event: Registered Node pause-998149 in Controller
	
	
	==> dmesg <==
	[  +0.105855] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.243118] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +3.889594] systemd-fstab-generator[747]: Ignoring "noauto" option for root device
	[  +3.821156] systemd-fstab-generator[875]: Ignoring "noauto" option for root device
	[  +0.059696] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.973866] systemd-fstab-generator[1210]: Ignoring "noauto" option for root device
	[  +0.074340] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.786369] systemd-fstab-generator[1343]: Ignoring "noauto" option for root device
	[  +0.793522] kauditd_printk_skb: 43 callbacks suppressed
	[Dec 4 21:02] kauditd_printk_skb: 49 callbacks suppressed
	[  +0.996985] systemd-fstab-generator[2461]: Ignoring "noauto" option for root device
	[  +0.220561] systemd-fstab-generator[2473]: Ignoring "noauto" option for root device
	[  +0.284199] systemd-fstab-generator[2524]: Ignoring "noauto" option for root device
	[  +0.197846] systemd-fstab-generator[2557]: Ignoring "noauto" option for root device
	[  +0.473800] systemd-fstab-generator[2589]: Ignoring "noauto" option for root device
	[Dec 4 21:03] systemd-fstab-generator[2843]: Ignoring "noauto" option for root device
	[  +0.097288] kauditd_printk_skb: 174 callbacks suppressed
	[  +2.195757] systemd-fstab-generator[2965]: Ignoring "noauto" option for root device
	[Dec 4 21:04] kauditd_printk_skb: 84 callbacks suppressed
	[ +54.873498] kauditd_printk_skb: 18 callbacks suppressed
	[Dec 4 21:08] systemd-fstab-generator[4052]: Ignoring "noauto" option for root device
	[  +6.051388] systemd-fstab-generator[4378]: Ignoring "noauto" option for root device
	[  +0.087613] kauditd_printk_skb: 68 callbacks suppressed
	[  +5.218614] systemd-fstab-generator[4488]: Ignoring "noauto" option for root device
	[  +0.094707] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [53022095ca4eb2fe9e14b6a57600ca675b121bfcb657b57891a14209084ad442] <==
	{"level":"info","ts":"2024-12-04T21:08:29.952851Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-04T21:08:29.953526Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"7664cfa1ff0dacf1","initial-advertise-peer-urls":["https://192.168.50.167:2380"],"listen-peer-urls":["https://192.168.50.167:2380"],"advertise-client-urls":["https://192.168.50.167:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.167:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-04T21:08:29.953584Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-04T21:08:29.953710Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.167:2380"}
	{"level":"info","ts":"2024-12-04T21:08:29.953739Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.167:2380"}
	{"level":"info","ts":"2024-12-04T21:08:30.591242Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7664cfa1ff0dacf1 is starting a new election at term 1"}
	{"level":"info","ts":"2024-12-04T21:08:30.591340Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7664cfa1ff0dacf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-12-04T21:08:30.591386Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7664cfa1ff0dacf1 received MsgPreVoteResp from 7664cfa1ff0dacf1 at term 1"}
	{"level":"info","ts":"2024-12-04T21:08:30.591421Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7664cfa1ff0dacf1 became candidate at term 2"}
	{"level":"info","ts":"2024-12-04T21:08:30.591445Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7664cfa1ff0dacf1 received MsgVoteResp from 7664cfa1ff0dacf1 at term 2"}
	{"level":"info","ts":"2024-12-04T21:08:30.591472Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7664cfa1ff0dacf1 became leader at term 2"}
	{"level":"info","ts":"2024-12-04T21:08:30.591497Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7664cfa1ff0dacf1 elected leader 7664cfa1ff0dacf1 at term 2"}
	{"level":"info","ts":"2024-12-04T21:08:30.595403Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7664cfa1ff0dacf1","local-member-attributes":"{Name:pause-998149 ClientURLs:[https://192.168.50.167:2379]}","request-path":"/0/members/7664cfa1ff0dacf1/attributes","cluster-id":"d7e89ab1d6ffbfaa","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-04T21:08:30.595520Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-04T21:08:30.595566Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-04T21:08:30.599220Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-04T21:08:30.599255Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-04T21:08:30.595591Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-04T21:08:30.601443Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d7e89ab1d6ffbfaa","local-member-id":"7664cfa1ff0dacf1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-04T21:08:30.601563Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-04T21:08:30.601608Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-04T21:08:30.601914Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-04T21:08:30.604389Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-04T21:08:30.605080Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.167:2379"}
	{"level":"info","ts":"2024-12-04T21:08:30.607746Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 21:08:52 up 7 min,  0 users,  load average: 1.31, 0.45, 0.19
	Linux pause-998149 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6c9becd8f0cc465da0169f24117ca9b03d91d8a4e9e2746053330520e8a1d0d7] <==
	I1204 21:08:32.377326       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1204 21:08:32.377819       1 controller.go:615] quota admission added evaluator for: namespaces
	E1204 21:08:32.381742       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1204 21:08:32.387208       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1204 21:08:32.387305       1 aggregator.go:171] initial CRD sync complete...
	I1204 21:08:32.387315       1 autoregister_controller.go:144] Starting autoregister controller
	I1204 21:08:32.387329       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1204 21:08:32.387336       1 cache.go:39] Caches are synced for autoregister controller
	I1204 21:08:32.399753       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1204 21:08:32.586119       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1204 21:08:33.183243       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1204 21:08:33.189114       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1204 21:08:33.189125       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1204 21:08:33.739299       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1204 21:08:33.793371       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1204 21:08:33.886646       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1204 21:08:33.893690       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.50.167]
	I1204 21:08:33.894716       1 controller.go:615] quota admission added evaluator for: endpoints
	I1204 21:08:33.900547       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1204 21:08:34.281791       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1204 21:08:34.776704       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1204 21:08:34.789333       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1204 21:08:34.800622       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1204 21:08:39.661819       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1204 21:08:39.789132       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-apiserver [ebbf0704fb917629adae45d88b595b95bada24ec4c33263876e5738c9467a723] <==
	I1204 21:08:16.695805       1 crdregistration_controller.go:145] Shutting down crd-autoregister controller
	I1204 21:08:16.695868       1 apiservice_controller.go:134] Shutting down APIServiceRegistrationController
	I1204 21:08:16.695889       1 remote_available_controller.go:427] Shutting down RemoteAvailability controller
	I1204 21:08:16.695917       1 system_namespaces_controller.go:76] Shutting down system namespaces controller
	I1204 21:08:16.695965       1 apf_controller.go:389] Shutting down API Priority and Fairness config worker
	I1204 21:08:16.696038       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I1204 21:08:16.696066       1 customresource_discovery_controller.go:328] Shutting down DiscoveryController
	I1204 21:08:16.696357       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I1204 21:08:16.696611       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1204 21:08:16.696864       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1204 21:08:16.696984       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I1204 21:08:16.697030       1 controller.go:84] Shutting down OpenAPI AggregationController
	I1204 21:08:16.697278       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I1204 21:08:16.697397       1 secure_serving.go:258] Stopped listening on [::]:8443
	I1204 21:08:16.697430       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1204 21:08:16.695409       1 apiapproval_controller.go:201] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I1204 21:08:16.697788       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I1204 21:08:16.695453       1 nonstructuralschema_controller.go:207] Shutting down NonStructuralSchemaConditionController
	I1204 21:08:16.698074       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1204 21:08:16.698531       1 controller.go:157] Shutting down quota evaluator
	I1204 21:08:16.698670       1 controller.go:176] quota evaluator worker shutdown
	I1204 21:08:16.699289       1 controller.go:176] quota evaluator worker shutdown
	I1204 21:08:16.699517       1 controller.go:176] quota evaluator worker shutdown
	I1204 21:08:16.699511       1 controller.go:176] quota evaluator worker shutdown
	I1204 21:08:16.699664       1 controller.go:176] quota evaluator worker shutdown
	
	
	==> kube-controller-manager [46cc096ccdfdb089a2f1623d27802b3d52290bff0c58ee639e7bbfeae9795497] <==
	I1204 21:08:39.040598       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="pause-998149"
	I1204 21:08:39.095520       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1204 21:08:39.133398       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1204 21:08:39.151405       1 shared_informer.go:320] Caches are synced for resource quota
	I1204 21:08:39.178835       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I1204 21:08:39.231447       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I1204 21:08:39.235165       1 shared_informer.go:320] Caches are synced for resource quota
	I1204 21:08:39.281139       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I1204 21:08:39.281276       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1204 21:08:39.281375       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I1204 21:08:39.281289       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I1204 21:08:39.661129       1 shared_informer.go:320] Caches are synced for garbage collector
	I1204 21:08:39.720228       1 shared_informer.go:320] Caches are synced for garbage collector
	I1204 21:08:39.720354       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1204 21:08:39.880217       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="pause-998149"
	I1204 21:08:40.175255       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="366.553866ms"
	I1204 21:08:40.221443       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="45.99712ms"
	I1204 21:08:40.221544       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="50.55µs"
	I1204 21:08:41.809848       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="62.025µs"
	I1204 21:08:41.856955       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="45.947µs"
	I1204 21:08:43.189717       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="18.235724ms"
	I1204 21:08:43.190874       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="40.997µs"
	I1204 21:08:44.340041       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="10.885547ms"
	I1204 21:08:44.340105       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="42.056µs"
	I1204 21:08:45.273879       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="pause-998149"
	
	
	==> kube-proxy [3fc6609ca27763570f8ad2da7d746c3d800ac2515b1be31fd14eb994bd3b65ab] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1204 21:08:41.800468       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1204 21:08:41.831402       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.167"]
	E1204 21:08:41.831615       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1204 21:08:41.893545       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1204 21:08:41.893603       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1204 21:08:41.893649       1 server_linux.go:169] "Using iptables Proxier"
	I1204 21:08:41.896042       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1204 21:08:41.896432       1 server.go:483] "Version info" version="v1.31.2"
	I1204 21:08:41.896454       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 21:08:41.897946       1 config.go:199] "Starting service config controller"
	I1204 21:08:41.898001       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1204 21:08:41.898063       1 config.go:105] "Starting endpoint slice config controller"
	I1204 21:08:41.898079       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1204 21:08:41.900456       1 config.go:328] "Starting node config controller"
	I1204 21:08:41.900597       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1204 21:08:41.998796       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1204 21:08:41.998928       1 shared_informer.go:320] Caches are synced for service config
	I1204 21:08:42.000678       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [bb28c2641a67b3f413988577a36458abd1e8bf77142c0c9fc2364e8db36c6575] <==
	E1204 21:08:32.329546       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 21:08:32.327918       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1204 21:08:32.329597       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 21:08:32.327943       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1204 21:08:32.329650       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 21:08:32.328011       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1204 21:08:32.329702       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	E1204 21:08:32.329022       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 21:08:33.168677       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1204 21:08:33.168713       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 21:08:33.200983       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1204 21:08:33.201084       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 21:08:33.283491       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1204 21:08:33.283741       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1204 21:08:33.336277       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1204 21:08:33.336445       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 21:08:33.373267       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1204 21:08:33.373418       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 21:08:33.415390       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1204 21:08:33.415539       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 21:08:33.460437       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1204 21:08:33.460530       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 21:08:33.604498       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1204 21:08:33.604878       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1204 21:08:35.314218       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 04 21:08:35 pause-998149 kubelet[4385]: I1204 21:08:35.843718    4385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-998149" podStartSLOduration=1.843700901 podStartE2EDuration="1.843700901s" podCreationTimestamp="2024-12-04 21:08:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-04 21:08:35.829562068 +0000 UTC m=+1.278621107" watchObservedRunningTime="2024-12-04 21:08:35.843700901 +0000 UTC m=+1.292759940"
	Dec 04 21:08:39 pause-998149 kubelet[4385]: W1204 21:08:39.716596    4385 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:pause-998149" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'pause-998149' and this object
	Dec 04 21:08:39 pause-998149 kubelet[4385]: E1204 21:08:39.716648    4385 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-998149\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-998149' and this object" logger="UnhandledError"
	Dec 04 21:08:39 pause-998149 kubelet[4385]: W1204 21:08:39.716717    4385 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:pause-998149" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'pause-998149' and this object
	Dec 04 21:08:39 pause-998149 kubelet[4385]: E1204 21:08:39.716727    4385 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-998149\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-998149' and this object" logger="UnhandledError"
	Dec 04 21:08:39 pause-998149 kubelet[4385]: I1204 21:08:39.725038    4385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t2g4\" (UniqueName: \"kubernetes.io/projected/b9aa9037-580d-4c10-ba44-1e3925516a2a-kube-api-access-9t2g4\") pod \"kube-proxy-7pttk\" (UID: \"b9aa9037-580d-4c10-ba44-1e3925516a2a\") " pod="kube-system/kube-proxy-7pttk"
	Dec 04 21:08:39 pause-998149 kubelet[4385]: I1204 21:08:39.725083    4385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b9aa9037-580d-4c10-ba44-1e3925516a2a-xtables-lock\") pod \"kube-proxy-7pttk\" (UID: \"b9aa9037-580d-4c10-ba44-1e3925516a2a\") " pod="kube-system/kube-proxy-7pttk"
	Dec 04 21:08:39 pause-998149 kubelet[4385]: I1204 21:08:39.725100    4385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b9aa9037-580d-4c10-ba44-1e3925516a2a-kube-proxy\") pod \"kube-proxy-7pttk\" (UID: \"b9aa9037-580d-4c10-ba44-1e3925516a2a\") " pod="kube-system/kube-proxy-7pttk"
	Dec 04 21:08:39 pause-998149 kubelet[4385]: I1204 21:08:39.725120    4385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9aa9037-580d-4c10-ba44-1e3925516a2a-lib-modules\") pod \"kube-proxy-7pttk\" (UID: \"b9aa9037-580d-4c10-ba44-1e3925516a2a\") " pod="kube-system/kube-proxy-7pttk"
	Dec 04 21:08:40 pause-998149 kubelet[4385]: I1204 21:08:40.227821    4385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/935bb8b3-28ee-47d5-a525-b5bc7d882a63-config-volume\") pod \"coredns-7c65d6cfc9-kfdvp\" (UID: \"935bb8b3-28ee-47d5-a525-b5bc7d882a63\") " pod="kube-system/coredns-7c65d6cfc9-kfdvp"
	Dec 04 21:08:40 pause-998149 kubelet[4385]: I1204 21:08:40.227967    4385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5953763-c59b-4ca3-9c1f-fc0dbfb8d3f6-config-volume\") pod \"coredns-7c65d6cfc9-26bcn\" (UID: \"c5953763-c59b-4ca3-9c1f-fc0dbfb8d3f6\") " pod="kube-system/coredns-7c65d6cfc9-26bcn"
	Dec 04 21:08:40 pause-998149 kubelet[4385]: I1204 21:08:40.228047    4385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d2xg\" (UniqueName: \"kubernetes.io/projected/935bb8b3-28ee-47d5-a525-b5bc7d882a63-kube-api-access-7d2xg\") pod \"coredns-7c65d6cfc9-kfdvp\" (UID: \"935bb8b3-28ee-47d5-a525-b5bc7d882a63\") " pod="kube-system/coredns-7c65d6cfc9-kfdvp"
	Dec 04 21:08:40 pause-998149 kubelet[4385]: I1204 21:08:40.228092    4385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8vlv\" (UniqueName: \"kubernetes.io/projected/c5953763-c59b-4ca3-9c1f-fc0dbfb8d3f6-kube-api-access-w8vlv\") pod \"coredns-7c65d6cfc9-26bcn\" (UID: \"c5953763-c59b-4ca3-9c1f-fc0dbfb8d3f6\") " pod="kube-system/coredns-7c65d6cfc9-26bcn"
	Dec 04 21:08:40 pause-998149 kubelet[4385]: I1204 21:08:40.692088    4385 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Dec 04 21:08:40 pause-998149 kubelet[4385]: E1204 21:08:40.827302    4385 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Dec 04 21:08:40 pause-998149 kubelet[4385]: E1204 21:08:40.827484    4385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b9aa9037-580d-4c10-ba44-1e3925516a2a-kube-proxy podName:b9aa9037-580d-4c10-ba44-1e3925516a2a nodeName:}" failed. No retries permitted until 2024-12-04 21:08:41.327446969 +0000 UTC m=+6.776505992 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/b9aa9037-580d-4c10-ba44-1e3925516a2a-kube-proxy") pod "kube-proxy-7pttk" (UID: "b9aa9037-580d-4c10-ba44-1e3925516a2a") : failed to sync configmap cache: timed out waiting for the condition
	Dec 04 21:08:41 pause-998149 kubelet[4385]: I1204 21:08:41.839244    4385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7pttk" podStartSLOduration=2.839218391 podStartE2EDuration="2.839218391s" podCreationTimestamp="2024-12-04 21:08:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-04 21:08:41.837170731 +0000 UTC m=+7.286229771" watchObservedRunningTime="2024-12-04 21:08:41.839218391 +0000 UTC m=+7.288277430"
	Dec 04 21:08:41 pause-998149 kubelet[4385]: I1204 21:08:41.839358    4385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-kfdvp" podStartSLOduration=1.8393524270000001 podStartE2EDuration="1.839352427s" podCreationTimestamp="2024-12-04 21:08:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-04 21:08:41.812432298 +0000 UTC m=+7.261491337" watchObservedRunningTime="2024-12-04 21:08:41.839352427 +0000 UTC m=+7.288411461"
	Dec 04 21:08:41 pause-998149 kubelet[4385]: I1204 21:08:41.856356    4385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-26bcn" podStartSLOduration=1.856336907 podStartE2EDuration="1.856336907s" podCreationTimestamp="2024-12-04 21:08:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-04 21:08:41.856092846 +0000 UTC m=+7.305151889" watchObservedRunningTime="2024-12-04 21:08:41.856336907 +0000 UTC m=+7.305395946"
	Dec 04 21:08:43 pause-998149 kubelet[4385]: I1204 21:08:43.152737    4385 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 04 21:08:44 pause-998149 kubelet[4385]: I1204 21:08:44.312386    4385 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 04 21:08:44 pause-998149 kubelet[4385]: E1204 21:08:44.753825    4385 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733346524753508033,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:08:44 pause-998149 kubelet[4385]: E1204 21:08:44.753851    4385 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733346524753508033,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:08:45 pause-998149 kubelet[4385]: I1204 21:08:45.250169    4385 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 04 21:08:45 pause-998149 kubelet[4385]: I1204 21:08:45.251448    4385 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-998149 -n pause-998149
helpers_test.go:261: (dbg) Run:  kubectl --context pause-998149 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (421.68s)

x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (273.9s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-082859 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-082859 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m33.620932787s)

-- stdout --
	* [old-k8s-version-082859] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19985
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19985-10581/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19985-10581/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-082859" primary control-plane node in "old-k8s-version-082859" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I1204 21:06:15.148418   69222 out.go:345] Setting OutFile to fd 1 ...
	I1204 21:06:15.148559   69222 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 21:06:15.148571   69222 out.go:358] Setting ErrFile to fd 2...
	I1204 21:06:15.148577   69222 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 21:06:15.148765   69222 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19985-10581/.minikube/bin
	I1204 21:06:15.149389   69222 out.go:352] Setting JSON to false
	I1204 21:06:15.150499   69222 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6525,"bootTime":1733339850,"procs":306,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 21:06:15.150605   69222 start.go:139] virtualization: kvm guest
	I1204 21:06:15.152849   69222 out.go:177] * [old-k8s-version-082859] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1204 21:06:15.154173   69222 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 21:06:15.154183   69222 notify.go:220] Checking for updates...
	I1204 21:06:15.156486   69222 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 21:06:15.157712   69222 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:06:15.158930   69222 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 21:06:15.160033   69222 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1204 21:06:15.161154   69222 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 21:06:15.162802   69222 config.go:182] Loaded profile config "bridge-272234": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:06:15.162934   69222 config.go:182] Loaded profile config "calico-272234": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:06:15.163122   69222 config.go:182] Loaded profile config "pause-998149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:06:15.163244   69222 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 21:06:15.208656   69222 out.go:177] * Using the kvm2 driver based on user configuration
	I1204 21:06:15.209840   69222 start.go:297] selected driver: kvm2
	I1204 21:06:15.209933   69222 start.go:901] validating driver "kvm2" against <nil>
	I1204 21:06:15.209967   69222 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 21:06:15.211055   69222 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 21:06:15.211169   69222 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19985-10581/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1204 21:06:15.229735   69222 install.go:137] /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1204 21:06:15.229794   69222 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 21:06:15.230031   69222 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 21:06:15.230063   69222 cni.go:84] Creating CNI manager for ""
	I1204 21:06:15.230107   69222 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:06:15.230115   69222 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1204 21:06:15.230177   69222 start.go:340] cluster config:
	{Name:old-k8s-version-082859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-082859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:06:15.230289   69222 iso.go:125] acquiring lock: {Name:mk5fb0f3f6da76e6cd812291a551e1592ef2c232 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 21:06:15.232062   69222 out.go:177] * Starting "old-k8s-version-082859" primary control-plane node in "old-k8s-version-082859" cluster
	I1204 21:06:15.233389   69222 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1204 21:06:15.233438   69222 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1204 21:06:15.233447   69222 cache.go:56] Caching tarball of preloaded images
	I1204 21:06:15.233526   69222 preload.go:172] Found /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 21:06:15.233536   69222 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1204 21:06:15.233620   69222 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/config.json ...
	I1204 21:06:15.233637   69222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/config.json: {Name:mkf98d4e8a490a5cb8c752599a7eba2dd35a3b72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:06:15.233831   69222 start.go:360] acquireMachinesLock for old-k8s-version-082859: {Name:mkf124e8b45170ae95981b24944344de6899c5b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 21:06:16.845606   69222 start.go:364] duration metric: took 1.611745792s to acquireMachinesLock for "old-k8s-version-082859"
	I1204 21:06:16.845693   69222 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-082859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-082859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 21:06:16.845793   69222 start.go:125] createHost starting for "" (driver="kvm2")
	I1204 21:06:16.847746   69222 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 21:06:16.847909   69222 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:06:16.847970   69222 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:06:16.868511   69222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42709
	I1204 21:06:16.868978   69222 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:06:16.869568   69222 main.go:141] libmachine: Using API Version  1
	I1204 21:06:16.869588   69222 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:06:16.869928   69222 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:06:16.870112   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetMachineName
	I1204 21:06:16.870265   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:06:16.870420   69222 start.go:159] libmachine.API.Create for "old-k8s-version-082859" (driver="kvm2")
	I1204 21:06:16.870451   69222 client.go:168] LocalClient.Create starting
	I1204 21:06:16.870498   69222 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem
	I1204 21:06:16.870537   69222 main.go:141] libmachine: Decoding PEM data...
	I1204 21:06:16.870553   69222 main.go:141] libmachine: Parsing certificate...
	I1204 21:06:16.870601   69222 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem
	I1204 21:06:16.870619   69222 main.go:141] libmachine: Decoding PEM data...
	I1204 21:06:16.870629   69222 main.go:141] libmachine: Parsing certificate...
	I1204 21:06:16.870642   69222 main.go:141] libmachine: Running pre-create checks...
	I1204 21:06:16.870651   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .PreCreateCheck
	I1204 21:06:16.870993   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetConfigRaw
	I1204 21:06:16.871408   69222 main.go:141] libmachine: Creating machine...
	I1204 21:06:16.871424   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .Create
	I1204 21:06:16.871564   69222 main.go:141] libmachine: (old-k8s-version-082859) Creating KVM machine...
	I1204 21:06:16.872705   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | found existing default KVM network
	I1204 21:06:16.874407   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:06:16.874247   69244 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:21:b7:f0} reservation:<nil>}
	I1204 21:06:16.875145   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:06:16.875051   69244 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:f4:e1:8e} reservation:<nil>}
	I1204 21:06:16.875978   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:06:16.875905   69244 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:75:f3:91} reservation:<nil>}
	I1204 21:06:16.877300   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:06:16.877212   69244 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003091b0}
	I1204 21:06:16.877327   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | created network xml: 
	I1204 21:06:16.877339   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | <network>
	I1204 21:06:16.877345   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG |   <name>mk-old-k8s-version-082859</name>
	I1204 21:06:16.877353   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG |   <dns enable='no'/>
	I1204 21:06:16.877360   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG |   
	I1204 21:06:16.877382   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I1204 21:06:16.877391   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG |     <dhcp>
	I1204 21:06:16.877455   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I1204 21:06:16.877476   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG |     </dhcp>
	I1204 21:06:16.877486   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG |   </ip>
	I1204 21:06:16.877496   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG |   
	I1204 21:06:16.877505   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | </network>
	I1204 21:06:16.877515   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | 
	I1204 21:06:16.882610   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | trying to create private KVM network mk-old-k8s-version-082859 192.168.72.0/24...
	I1204 21:06:16.959802   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | private KVM network mk-old-k8s-version-082859 192.168.72.0/24 created
	I1204 21:06:16.959836   69222 main.go:141] libmachine: (old-k8s-version-082859) Setting up store path in /home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859 ...
	I1204 21:06:16.959849   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:06:16.959780   69244 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 21:06:16.959870   69222 main.go:141] libmachine: (old-k8s-version-082859) Building disk image from file:///home/jenkins/minikube-integration/19985-10581/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1204 21:06:16.959938   69222 main.go:141] libmachine: (old-k8s-version-082859) Downloading /home/jenkins/minikube-integration/19985-10581/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19985-10581/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1204 21:06:17.213983   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:06:17.213853   69244 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa...
	I1204 21:06:17.394366   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:06:17.394177   69244 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/old-k8s-version-082859.rawdisk...
	I1204 21:06:17.394415   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | Writing magic tar header
	I1204 21:06:17.394436   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | Writing SSH key tar header
	I1204 21:06:17.394451   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:06:17.394340   69244 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859 ...
	I1204 21:06:17.394511   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859
	I1204 21:06:17.394537   69222 main.go:141] libmachine: (old-k8s-version-082859) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859 (perms=drwx------)
	I1204 21:06:17.394561   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube/machines
	I1204 21:06:17.394569   69222 main.go:141] libmachine: (old-k8s-version-082859) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube/machines (perms=drwxr-xr-x)
	I1204 21:06:17.394580   69222 main.go:141] libmachine: (old-k8s-version-082859) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube (perms=drwxr-xr-x)
	I1204 21:06:17.394603   69222 main.go:141] libmachine: (old-k8s-version-082859) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581 (perms=drwxrwxr-x)
	I1204 21:06:17.394620   69222 main.go:141] libmachine: (old-k8s-version-082859) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1204 21:06:17.394630   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 21:06:17.394644   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581
	I1204 21:06:17.394651   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1204 21:06:17.394656   69222 main.go:141] libmachine: (old-k8s-version-082859) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1204 21:06:17.394665   69222 main.go:141] libmachine: (old-k8s-version-082859) Creating domain...
	I1204 21:06:17.394671   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | Checking permissions on dir: /home/jenkins
	I1204 21:06:17.394676   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | Checking permissions on dir: /home
	I1204 21:06:17.394685   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | Skipping /home - not owner
	I1204 21:06:17.396035   69222 main.go:141] libmachine: (old-k8s-version-082859) define libvirt domain using xml: 
	I1204 21:06:17.396058   69222 main.go:141] libmachine: (old-k8s-version-082859) <domain type='kvm'>
	I1204 21:06:17.396070   69222 main.go:141] libmachine: (old-k8s-version-082859)   <name>old-k8s-version-082859</name>
	I1204 21:06:17.396079   69222 main.go:141] libmachine: (old-k8s-version-082859)   <memory unit='MiB'>2200</memory>
	I1204 21:06:17.396088   69222 main.go:141] libmachine: (old-k8s-version-082859)   <vcpu>2</vcpu>
	I1204 21:06:17.396093   69222 main.go:141] libmachine: (old-k8s-version-082859)   <features>
	I1204 21:06:17.396110   69222 main.go:141] libmachine: (old-k8s-version-082859)     <acpi/>
	I1204 21:06:17.396127   69222 main.go:141] libmachine: (old-k8s-version-082859)     <apic/>
	I1204 21:06:17.396138   69222 main.go:141] libmachine: (old-k8s-version-082859)     <pae/>
	I1204 21:06:17.396145   69222 main.go:141] libmachine: (old-k8s-version-082859)     
	I1204 21:06:17.396153   69222 main.go:141] libmachine: (old-k8s-version-082859)   </features>
	I1204 21:06:17.396164   69222 main.go:141] libmachine: (old-k8s-version-082859)   <cpu mode='host-passthrough'>
	I1204 21:06:17.396176   69222 main.go:141] libmachine: (old-k8s-version-082859)   
	I1204 21:06:17.396188   69222 main.go:141] libmachine: (old-k8s-version-082859)   </cpu>
	I1204 21:06:17.396207   69222 main.go:141] libmachine: (old-k8s-version-082859)   <os>
	I1204 21:06:17.396218   69222 main.go:141] libmachine: (old-k8s-version-082859)     <type>hvm</type>
	I1204 21:06:17.396229   69222 main.go:141] libmachine: (old-k8s-version-082859)     <boot dev='cdrom'/>
	I1204 21:06:17.396240   69222 main.go:141] libmachine: (old-k8s-version-082859)     <boot dev='hd'/>
	I1204 21:06:17.396266   69222 main.go:141] libmachine: (old-k8s-version-082859)     <bootmenu enable='no'/>
	I1204 21:06:17.396288   69222 main.go:141] libmachine: (old-k8s-version-082859)   </os>
	I1204 21:06:17.396311   69222 main.go:141] libmachine: (old-k8s-version-082859)   <devices>
	I1204 21:06:17.396324   69222 main.go:141] libmachine: (old-k8s-version-082859)     <disk type='file' device='cdrom'>
	I1204 21:06:17.396343   69222 main.go:141] libmachine: (old-k8s-version-082859)       <source file='/home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/boot2docker.iso'/>
	I1204 21:06:17.396361   69222 main.go:141] libmachine: (old-k8s-version-082859)       <target dev='hdc' bus='scsi'/>
	I1204 21:06:17.396375   69222 main.go:141] libmachine: (old-k8s-version-082859)       <readonly/>
	I1204 21:06:17.396387   69222 main.go:141] libmachine: (old-k8s-version-082859)     </disk>
	I1204 21:06:17.396409   69222 main.go:141] libmachine: (old-k8s-version-082859)     <disk type='file' device='disk'>
	I1204 21:06:17.396433   69222 main.go:141] libmachine: (old-k8s-version-082859)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1204 21:06:17.396459   69222 main.go:141] libmachine: (old-k8s-version-082859)       <source file='/home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/old-k8s-version-082859.rawdisk'/>
	I1204 21:06:17.396471   69222 main.go:141] libmachine: (old-k8s-version-082859)       <target dev='hda' bus='virtio'/>
	I1204 21:06:17.396482   69222 main.go:141] libmachine: (old-k8s-version-082859)     </disk>
	I1204 21:06:17.396498   69222 main.go:141] libmachine: (old-k8s-version-082859)     <interface type='network'>
	I1204 21:06:17.396511   69222 main.go:141] libmachine: (old-k8s-version-082859)       <source network='mk-old-k8s-version-082859'/>
	I1204 21:06:17.396521   69222 main.go:141] libmachine: (old-k8s-version-082859)       <model type='virtio'/>
	I1204 21:06:17.396532   69222 main.go:141] libmachine: (old-k8s-version-082859)     </interface>
	I1204 21:06:17.396549   69222 main.go:141] libmachine: (old-k8s-version-082859)     <interface type='network'>
	I1204 21:06:17.396576   69222 main.go:141] libmachine: (old-k8s-version-082859)       <source network='default'/>
	I1204 21:06:17.396596   69222 main.go:141] libmachine: (old-k8s-version-082859)       <model type='virtio'/>
	I1204 21:06:17.396611   69222 main.go:141] libmachine: (old-k8s-version-082859)     </interface>
	I1204 21:06:17.396624   69222 main.go:141] libmachine: (old-k8s-version-082859)     <serial type='pty'>
	I1204 21:06:17.396638   69222 main.go:141] libmachine: (old-k8s-version-082859)       <target port='0'/>
	I1204 21:06:17.396650   69222 main.go:141] libmachine: (old-k8s-version-082859)     </serial>
	I1204 21:06:17.396663   69222 main.go:141] libmachine: (old-k8s-version-082859)     <console type='pty'>
	I1204 21:06:17.396681   69222 main.go:141] libmachine: (old-k8s-version-082859)       <target type='serial' port='0'/>
	I1204 21:06:17.396694   69222 main.go:141] libmachine: (old-k8s-version-082859)     </console>
	I1204 21:06:17.396708   69222 main.go:141] libmachine: (old-k8s-version-082859)     <rng model='virtio'>
	I1204 21:06:17.396724   69222 main.go:141] libmachine: (old-k8s-version-082859)       <backend model='random'>/dev/random</backend>
	I1204 21:06:17.396736   69222 main.go:141] libmachine: (old-k8s-version-082859)     </rng>
	I1204 21:06:17.396810   69222 main.go:141] libmachine: (old-k8s-version-082859)     
	I1204 21:06:17.396834   69222 main.go:141] libmachine: (old-k8s-version-082859)     
	I1204 21:06:17.396850   69222 main.go:141] libmachine: (old-k8s-version-082859)   </devices>
	I1204 21:06:17.396860   69222 main.go:141] libmachine: (old-k8s-version-082859) </domain>
	I1204 21:06:17.396871   69222 main.go:141] libmachine: (old-k8s-version-082859) 
	I1204 21:06:17.401155   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:c8:1f:80 in network default
	I1204 21:06:17.401705   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:17.401738   69222 main.go:141] libmachine: (old-k8s-version-082859) Ensuring networks are active...
	I1204 21:06:17.402482   69222 main.go:141] libmachine: (old-k8s-version-082859) Ensuring network default is active
	I1204 21:06:17.402981   69222 main.go:141] libmachine: (old-k8s-version-082859) Ensuring network mk-old-k8s-version-082859 is active
	I1204 21:06:17.403611   69222 main.go:141] libmachine: (old-k8s-version-082859) Getting domain xml...
	I1204 21:06:17.404483   69222 main.go:141] libmachine: (old-k8s-version-082859) Creating domain...
	I1204 21:06:18.914420   69222 main.go:141] libmachine: (old-k8s-version-082859) Waiting to get IP...
	I1204 21:06:18.915608   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:18.916160   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:06:18.916194   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:06:18.916133   69244 retry.go:31] will retry after 280.358891ms: waiting for machine to come up
	I1204 21:06:19.198693   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:19.199257   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:06:19.199284   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:06:19.199230   69244 retry.go:31] will retry after 244.553252ms: waiting for machine to come up
	I1204 21:06:19.445738   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:19.447205   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:06:19.447232   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:06:19.447159   69244 retry.go:31] will retry after 314.85971ms: waiting for machine to come up
	I1204 21:06:19.763835   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:19.764523   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:06:19.764552   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:06:19.764442   69244 retry.go:31] will retry after 470.223585ms: waiting for machine to come up
	I1204 21:06:20.236143   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:20.236905   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:06:20.236922   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:06:20.236839   69244 retry.go:31] will retry after 617.733797ms: waiting for machine to come up
	I1204 21:06:20.856871   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:20.857450   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:06:20.857494   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:06:20.857378   69244 retry.go:31] will retry after 764.086367ms: waiting for machine to come up
	I1204 21:06:21.623003   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:21.623642   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:06:21.623666   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:06:21.623596   69244 retry.go:31] will retry after 726.997451ms: waiting for machine to come up
	I1204 21:06:22.351966   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:22.352479   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:06:22.352513   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:06:22.352441   69244 retry.go:31] will retry after 1.024580161s: waiting for machine to come up
	I1204 21:06:23.378606   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:23.379090   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:06:23.379154   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:06:23.379065   69244 retry.go:31] will retry after 1.636158565s: waiting for machine to come up
	I1204 21:06:25.016568   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:25.017101   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:06:25.017130   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:06:25.017050   69244 retry.go:31] will retry after 2.110180501s: waiting for machine to come up
	I1204 21:06:27.128616   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:27.129257   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:06:27.129285   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:06:27.129164   69244 retry.go:31] will retry after 2.194085767s: waiting for machine to come up
	I1204 21:06:29.324612   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:29.325225   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:06:29.325247   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:06:29.325180   69244 retry.go:31] will retry after 3.572171087s: waiting for machine to come up
	I1204 21:06:32.898910   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:32.899594   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:06:32.899618   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:06:32.899554   69244 retry.go:31] will retry after 3.295391183s: waiting for machine to come up
	I1204 21:06:36.198961   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:36.199455   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:06:36.199483   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:06:36.199408   69244 retry.go:31] will retry after 5.627437002s: waiting for machine to come up
	I1204 21:06:41.828634   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:41.829226   69222 main.go:141] libmachine: (old-k8s-version-082859) Found IP for machine: 192.168.72.180
	I1204 21:06:41.829246   69222 main.go:141] libmachine: (old-k8s-version-082859) Reserving static IP address...
	I1204 21:06:41.829255   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has current primary IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:41.829607   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-082859", mac: "52:54:00:30:6e:ae", ip: "192.168.72.180"} in network mk-old-k8s-version-082859
	I1204 21:06:41.996227   69222 main.go:141] libmachine: (old-k8s-version-082859) Reserved static IP address: 192.168.72.180
	I1204 21:06:41.996255   69222 main.go:141] libmachine: (old-k8s-version-082859) Waiting for SSH to be available...
	I1204 21:06:41.996264   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | Getting to WaitForSSH function...
	I1204 21:06:42.000249   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:42.000754   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:06:33 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:minikube Clientid:01:52:54:00:30:6e:ae}
	I1204 21:06:42.000788   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:42.001084   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | Using SSH client type: external
	I1204 21:06:42.001105   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa (-rw-------)
	I1204 21:06:42.001133   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.180 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 21:06:42.001146   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | About to run SSH command:
	I1204 21:06:42.001172   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | exit 0
	I1204 21:06:42.136045   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | SSH cmd err, output: <nil>: 
	I1204 21:06:42.136282   69222 main.go:141] libmachine: (old-k8s-version-082859) KVM machine creation complete!
	I1204 21:06:42.136687   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetConfigRaw
	I1204 21:06:42.137222   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:06:42.137450   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:06:42.137617   69222 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1204 21:06:42.137631   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetState
	I1204 21:06:42.138822   69222 main.go:141] libmachine: Detecting operating system of created instance...
	I1204 21:06:42.138879   69222 main.go:141] libmachine: Waiting for SSH to be available...
	I1204 21:06:42.138893   69222 main.go:141] libmachine: Getting to WaitForSSH function...
	I1204 21:06:42.138908   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:06:42.141669   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:42.142184   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:06:33 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:06:42.142212   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:42.142358   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:06:42.142528   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:06:42.142701   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:06:42.142871   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:06:42.143032   69222 main.go:141] libmachine: Using SSH client type: native
	I1204 21:06:42.143240   69222 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1204 21:06:42.143253   69222 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1204 21:06:42.250749   69222 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 21:06:42.250777   69222 main.go:141] libmachine: Detecting the provisioner...
	I1204 21:06:42.250788   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:06:42.254239   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:42.254678   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:06:33 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:06:42.254711   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:42.255026   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:06:42.255272   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:06:42.255530   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:06:42.255726   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:06:42.255929   69222 main.go:141] libmachine: Using SSH client type: native
	I1204 21:06:42.256157   69222 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1204 21:06:42.256173   69222 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1204 21:06:42.360132   69222 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1204 21:06:42.360214   69222 main.go:141] libmachine: found compatible host: buildroot
	I1204 21:06:42.360227   69222 main.go:141] libmachine: Provisioning with buildroot...
	I1204 21:06:42.360234   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetMachineName
	I1204 21:06:42.360494   69222 buildroot.go:166] provisioning hostname "old-k8s-version-082859"
	I1204 21:06:42.360521   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetMachineName
	I1204 21:06:42.360720   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:06:42.363426   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:42.363829   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:06:33 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:06:42.363859   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:42.363963   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:06:42.364163   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:06:42.364327   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:06:42.364511   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:06:42.364707   69222 main.go:141] libmachine: Using SSH client type: native
	I1204 21:06:42.364928   69222 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1204 21:06:42.364947   69222 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-082859 && echo "old-k8s-version-082859" | sudo tee /etc/hostname
	I1204 21:06:42.488820   69222 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-082859
	
	I1204 21:06:42.488858   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:06:42.491759   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:42.492102   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:06:33 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:06:42.492132   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:42.492301   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:06:42.492535   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:06:42.492754   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:06:42.492952   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:06:42.493177   69222 main.go:141] libmachine: Using SSH client type: native
	I1204 21:06:42.493398   69222 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1204 21:06:42.493432   69222 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-082859' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-082859/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-082859' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 21:06:42.611396   69222 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 21:06:42.611426   69222 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 21:06:42.611444   69222 buildroot.go:174] setting up certificates
	I1204 21:06:42.611453   69222 provision.go:84] configureAuth start
	I1204 21:06:42.611462   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetMachineName
	I1204 21:06:42.611726   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetIP
	I1204 21:06:42.614499   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:42.614940   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:06:33 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:06:42.614975   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:42.615156   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:06:42.617366   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:42.617672   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:06:33 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:06:42.617692   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:42.617818   69222 provision.go:143] copyHostCerts
	I1204 21:06:42.617870   69222 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 21:06:42.617879   69222 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 21:06:42.617925   69222 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 21:06:42.618015   69222 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 21:06:42.618023   69222 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 21:06:42.618043   69222 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 21:06:42.618110   69222 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 21:06:42.618119   69222 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 21:06:42.618136   69222 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 21:06:42.618190   69222 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-082859 san=[127.0.0.1 192.168.72.180 localhost minikube old-k8s-version-082859]
	I1204 21:06:42.743202   69222 provision.go:177] copyRemoteCerts
	I1204 21:06:42.743285   69222 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 21:06:42.743312   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:06:42.746369   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:42.746784   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:06:33 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:06:42.746822   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:42.746971   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:06:42.747168   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:06:42.747340   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:06:42.747506   69222 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa Username:docker}
	I1204 21:06:42.834490   69222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1204 21:06:42.857086   69222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 21:06:42.879329   69222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 21:06:42.903258   69222 provision.go:87] duration metric: took 291.791177ms to configureAuth
	I1204 21:06:42.903286   69222 buildroot.go:189] setting minikube options for container-runtime
	I1204 21:06:42.903512   69222 config.go:182] Loaded profile config "old-k8s-version-082859": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1204 21:06:42.903602   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:06:42.906848   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:42.907268   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:06:33 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:06:42.907297   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:42.907576   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:06:42.907766   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:06:42.907952   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:06:42.908118   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:06:42.908355   69222 main.go:141] libmachine: Using SSH client type: native
	I1204 21:06:42.908555   69222 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1204 21:06:42.908578   69222 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 21:06:43.130860   69222 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 21:06:43.130896   69222 main.go:141] libmachine: Checking connection to Docker...
	I1204 21:06:43.130908   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetURL
	I1204 21:06:43.132324   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | Using libvirt version 6000000
	I1204 21:06:43.134625   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:43.135077   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:06:33 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:06:43.135125   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:43.135345   69222 main.go:141] libmachine: Docker is up and running!
	I1204 21:06:43.135361   69222 main.go:141] libmachine: Reticulating splines...
	I1204 21:06:43.135367   69222 client.go:171] duration metric: took 26.264905193s to LocalClient.Create
	I1204 21:06:43.135423   69222 start.go:167] duration metric: took 26.265003639s to libmachine.API.Create "old-k8s-version-082859"
	I1204 21:06:43.135438   69222 start.go:293] postStartSetup for "old-k8s-version-082859" (driver="kvm2")
	I1204 21:06:43.135455   69222 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 21:06:43.135477   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:06:43.135729   69222 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 21:06:43.135760   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:06:43.138232   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:43.138525   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:06:33 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:06:43.138548   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:43.138673   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:06:43.138850   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:06:43.139043   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:06:43.139203   69222 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa Username:docker}
	I1204 21:06:43.222488   69222 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 21:06:43.226308   69222 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 21:06:43.226339   69222 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 21:06:43.226416   69222 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 21:06:43.226558   69222 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 21:06:43.226727   69222 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 21:06:43.236722   69222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:06:43.259550   69222 start.go:296] duration metric: took 124.097676ms for postStartSetup
	I1204 21:06:43.259592   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetConfigRaw
	I1204 21:06:43.260246   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetIP
	I1204 21:06:43.262884   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:43.263230   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:06:33 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:06:43.263270   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:43.263503   69222 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/config.json ...
	I1204 21:06:43.263685   69222 start.go:128] duration metric: took 26.417879297s to createHost
	I1204 21:06:43.263712   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:06:43.265967   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:43.266354   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:06:33 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:06:43.266386   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:43.266509   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:06:43.266692   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:06:43.266842   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:06:43.267001   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:06:43.267149   69222 main.go:141] libmachine: Using SSH client type: native
	I1204 21:06:43.267309   69222 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1204 21:06:43.267321   69222 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 21:06:43.367654   69222 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733346403.339057365
	
	I1204 21:06:43.367678   69222 fix.go:216] guest clock: 1733346403.339057365
	I1204 21:06:43.367688   69222 fix.go:229] Guest: 2024-12-04 21:06:43.339057365 +0000 UTC Remote: 2024-12-04 21:06:43.263696632 +0000 UTC m=+28.158047479 (delta=75.360733ms)
	I1204 21:06:43.367709   69222 fix.go:200] guest clock delta is within tolerance: 75.360733ms
	I1204 21:06:43.367715   69222 start.go:83] releasing machines lock for "old-k8s-version-082859", held for 26.522059024s
	I1204 21:06:43.367737   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:06:43.368013   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetIP
	I1204 21:06:43.370664   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:43.371064   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:06:33 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:06:43.371099   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:43.371214   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:06:43.371718   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:06:43.371904   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:06:43.372004   69222 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 21:06:43.372036   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:06:43.372118   69222 ssh_runner.go:195] Run: cat /version.json
	I1204 21:06:43.372136   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:06:43.374740   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:43.375022   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:43.375191   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:06:33 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:06:43.375228   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:43.375349   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:06:43.375463   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:06:33 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:06:43.375496   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:43.375519   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:06:43.375682   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:06:43.375698   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:06:43.375885   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:06:43.375879   69222 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa Username:docker}
	I1204 21:06:43.375983   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:06:43.376147   69222 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa Username:docker}
	I1204 21:06:43.474893   69222 ssh_runner.go:195] Run: systemctl --version
	I1204 21:06:43.481675   69222 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 21:06:43.637790   69222 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 21:06:43.643301   69222 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 21:06:43.643361   69222 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 21:06:43.661405   69222 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 21:06:43.661425   69222 start.go:495] detecting cgroup driver to use...
	I1204 21:06:43.661475   69222 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 21:06:43.675850   69222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 21:06:43.690164   69222 docker.go:217] disabling cri-docker service (if available) ...
	I1204 21:06:43.690216   69222 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 21:06:43.703888   69222 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 21:06:43.715961   69222 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 21:06:43.827666   69222 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 21:06:43.974905   69222 docker.go:233] disabling docker service ...
	I1204 21:06:43.974987   69222 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 21:06:43.989204   69222 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 21:06:44.001945   69222 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 21:06:44.137922   69222 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 21:06:44.261842   69222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 21:06:44.275977   69222 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 21:06:44.294811   69222 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1204 21:06:44.294888   69222 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:06:44.304411   69222 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 21:06:44.304504   69222 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:06:44.315164   69222 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:06:44.325137   69222 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:06:44.334972   69222 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 21:06:44.345555   69222 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 21:06:44.355058   69222 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 21:06:44.355112   69222 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 21:06:44.368844   69222 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 21:06:44.378397   69222 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:06:44.495610   69222 ssh_runner.go:195] Run: sudo systemctl restart crio
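	For reference, the CRI-O runtime setup applied in the commands above boils down to the following shell sequence (a sketch assembled from the exact commands shown in this log; the drop-in path /etc/crio/crio.conf.d/02-crio.conf and the pause image tag are the ones this run uses, not a general recommendation):
	    # point crictl at the CRI-O socket
	    printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
	    # pause image and cgroupfs cgroup manager in the CRI-O drop-in
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	    # bridge/netfilter prerequisites, then reload units and restart the runtime
	    sudo modprobe br_netfilter
	    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	    sudo systemctl daemon-reload && sudo systemctl restart crio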
	I1204 21:06:44.596209   69222 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 21:06:44.596303   69222 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 21:06:44.601295   69222 start.go:563] Will wait 60s for crictl version
	I1204 21:06:44.601359   69222 ssh_runner.go:195] Run: which crictl
	I1204 21:06:44.604857   69222 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 21:06:44.649216   69222 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 21:06:44.649332   69222 ssh_runner.go:195] Run: crio --version
	I1204 21:06:44.677725   69222 ssh_runner.go:195] Run: crio --version
	I1204 21:06:44.707640   69222 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1204 21:06:44.708834   69222 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetIP
	I1204 21:06:44.711767   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:44.712131   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:06:33 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:06:44.712165   69222 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:06:44.712456   69222 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1204 21:06:44.716304   69222 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
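	The /etc/hosts update above uses a strip-then-append pattern so the host.minikube.internal entry stays single-valued across repeated starts; spelled out with the values from this run (the same command as above, only reflowed with comments):
	    {
	      grep -v $'\thost.minikube.internal$' /etc/hosts     # drop any stale entry
	      printf '192.168.72.1\thost.minikube.internal\n'     # re-add the current gateway address
	    } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts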
	I1204 21:06:44.729859   69222 kubeadm.go:883] updating cluster {Name:old-k8s-version-082859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-082859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 21:06:44.729958   69222 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1204 21:06:44.729997   69222 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:06:44.768312   69222 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1204 21:06:44.768374   69222 ssh_runner.go:195] Run: which lz4
	I1204 21:06:44.772422   69222 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1204 21:06:44.776884   69222 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1204 21:06:44.776919   69222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1204 21:06:46.270418   69222 crio.go:462] duration metric: took 1.498014713s to copy over tarball
	I1204 21:06:46.270523   69222 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1204 21:06:49.022179   69222 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.751600689s)
	I1204 21:06:49.022211   69222 crio.go:469] duration metric: took 2.751756363s to extract the tarball
	I1204 21:06:49.022219   69222 ssh_runner.go:146] rm: /preloaded.tar.lz4
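	The preload step above is how the node gets its base images without pulling from a registry: the host-side tarball is copied into the guest as /preloaded.tar.lz4 and unpacked under /var, where the container storage lives. A sketch of the node-side portion, using the exact flags from the log:
	    # unpack the preloaded image tarball into the runtime's storage, then clean up
	    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	    rm /preloaded.tar.lz4
	    # confirm what the runtime now reports
	    sudo crictl images --output json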
	I1204 21:06:49.069191   69222 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:06:49.119895   69222 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1204 21:06:49.119925   69222 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1204 21:06:49.120040   69222 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:06:49.120091   69222 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1204 21:06:49.120129   69222 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1204 21:06:49.120172   69222 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:06:49.120189   69222 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:06:49.120202   69222 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:06:49.120044   69222 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1204 21:06:49.120040   69222 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:06:49.121699   69222 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1204 21:06:49.121714   69222 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1204 21:06:49.121734   69222 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:06:49.121745   69222 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:06:49.121760   69222 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:06:49.121703   69222 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1204 21:06:49.121776   69222 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:06:49.121702   69222 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:06:49.292014   69222 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:06:49.300631   69222 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1204 21:06:49.317617   69222 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:06:49.319887   69222 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1204 21:06:49.329887   69222 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1204 21:06:49.342345   69222 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:06:49.358796   69222 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:06:49.367075   69222 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1204 21:06:49.367144   69222 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:06:49.367195   69222 ssh_runner.go:195] Run: which crictl
	I1204 21:06:49.408913   69222 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1204 21:06:49.408972   69222 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1204 21:06:49.409025   69222 ssh_runner.go:195] Run: which crictl
	I1204 21:06:49.439425   69222 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1204 21:06:49.439486   69222 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:06:49.439560   69222 ssh_runner.go:195] Run: which crictl
	I1204 21:06:49.481081   69222 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1204 21:06:49.481140   69222 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1204 21:06:49.481195   69222 ssh_runner.go:195] Run: which crictl
	I1204 21:06:49.481436   69222 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1204 21:06:49.481469   69222 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1204 21:06:49.481515   69222 ssh_runner.go:195] Run: which crictl
	I1204 21:06:49.494910   69222 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1204 21:06:49.494946   69222 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:06:49.494989   69222 ssh_runner.go:195] Run: which crictl
	I1204 21:06:49.496771   69222 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:06:49.496784   69222 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1204 21:06:49.496814   69222 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:06:49.496833   69222 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1204 21:06:49.496851   69222 ssh_runner.go:195] Run: which crictl
	I1204 21:06:49.496859   69222 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:06:49.496906   69222 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1204 21:06:49.496947   69222 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1204 21:06:49.499513   69222 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:06:49.624941   69222 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:06:49.642727   69222 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1204 21:06:49.642751   69222 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:06:49.642939   69222 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1204 21:06:49.675302   69222 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:06:49.675320   69222 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1204 21:06:49.675407   69222 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:06:49.800438   69222 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:06:49.800504   69222 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:06:49.841025   69222 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1204 21:06:49.841111   69222 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1204 21:06:49.868791   69222 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:06:49.868977   69222 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:06:49.869800   69222 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1204 21:06:49.970696   69222 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:06:49.981719   69222 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1204 21:06:50.008456   69222 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1204 21:06:50.019727   69222 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1204 21:06:50.040516   69222 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1204 21:06:50.045955   69222 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1204 21:06:50.047300   69222 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1204 21:06:50.066331   69222 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1204 21:06:50.128639   69222 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:06:50.277690   69222 cache_images.go:92] duration metric: took 1.157745214s to LoadCachedImages
	W1204 21:06:50.277790   69222 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I1204 21:06:50.277809   69222 kubeadm.go:934] updating node { 192.168.72.180 8443 v1.20.0 crio true true} ...
	I1204 21:06:50.277937   69222 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-082859 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-082859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
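	The kubelet unit fragment above is written out a few steps later as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp lines below). Since this run ultimately fails because the kubelet never answers its health check, the usual first stops for inspecting it on the node would be along these lines (standard systemd/kubelet tooling, not commands taken from this log except the final curl):
	    systemctl cat kubelet                       # unit plus the 10-kubeadm.conf drop-in carrying the flags above
	    systemctl status kubelet --no-pager
	    journalctl -u kubelet --no-pager | tail -n 50
	    curl -sSL http://localhost:10248/healthz    # the same endpoint kubeadm polls during [kubelet-check]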
	I1204 21:06:50.278021   69222 ssh_runner.go:195] Run: crio config
	I1204 21:06:50.334924   69222 cni.go:84] Creating CNI manager for ""
	I1204 21:06:50.334950   69222 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:06:50.334960   69222 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 21:06:50.334983   69222 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.180 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-082859 NodeName:old-k8s-version-082859 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1204 21:06:50.335166   69222 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.180
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-082859"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.180
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.180"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1204 21:06:50.335247   69222 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1204 21:06:50.345393   69222 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 21:06:50.345457   69222 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 21:06:50.354775   69222 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1204 21:06:50.371821   69222 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 21:06:50.390220   69222 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1204 21:06:50.409049   69222 ssh_runner.go:195] Run: grep 192.168.72.180	control-plane.minikube.internal$ /etc/hosts
	I1204 21:06:50.414012   69222 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.180	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:06:50.430507   69222 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:06:50.563463   69222 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:06:50.582049   69222 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859 for IP: 192.168.72.180
	I1204 21:06:50.582078   69222 certs.go:194] generating shared ca certs ...
	I1204 21:06:50.582099   69222 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:06:50.582253   69222 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 21:06:50.582291   69222 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 21:06:50.582301   69222 certs.go:256] generating profile certs ...
	I1204 21:06:50.582350   69222 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/client.key
	I1204 21:06:50.582386   69222 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/client.crt with IP's: []
	I1204 21:06:50.775859   69222 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/client.crt ...
	I1204 21:06:50.775897   69222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/client.crt: {Name:mk4b6f690f016a847b110a76e18f0dc1d7b8a24a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:06:50.776133   69222 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/client.key ...
	I1204 21:06:50.776159   69222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/client.key: {Name:mk3687403502e4e0aa4c2eb38d132dbda93c6a7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:06:50.776307   69222 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/apiserver.key.8d7b2cb2
	I1204 21:06:50.776333   69222 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/apiserver.crt.8d7b2cb2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.180]
	I1204 21:06:50.912015   69222 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/apiserver.crt.8d7b2cb2 ...
	I1204 21:06:50.912044   69222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/apiserver.crt.8d7b2cb2: {Name:mkbe4ae9e02ebcd7eea1fb511860448470582c51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:06:50.939876   69222 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/apiserver.key.8d7b2cb2 ...
	I1204 21:06:50.939910   69222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/apiserver.key.8d7b2cb2: {Name:mkadba9c94ef29aa64ff9ddb830ba3159b6a84a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:06:50.940067   69222 certs.go:381] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/apiserver.crt.8d7b2cb2 -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/apiserver.crt
	I1204 21:06:50.940202   69222 certs.go:385] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/apiserver.key.8d7b2cb2 -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/apiserver.key
	I1204 21:06:50.940310   69222 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/proxy-client.key
	I1204 21:06:50.940338   69222 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/proxy-client.crt with IP's: []
	I1204 21:06:51.019905   69222 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/proxy-client.crt ...
	I1204 21:06:51.019938   69222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/proxy-client.crt: {Name:mk47201a740c5e6cd716a44b061469087b1e1ff1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:06:51.020154   69222 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/proxy-client.key ...
	I1204 21:06:51.020185   69222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/proxy-client.key: {Name:mkee5d8b2b5c37b64dd587a479161269c8adb4e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:06:51.020419   69222 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 21:06:51.020460   69222 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 21:06:51.020478   69222 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 21:06:51.020502   69222 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 21:06:51.020544   69222 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 21:06:51.020573   69222 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 21:06:51.020624   69222 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:06:51.021265   69222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 21:06:51.047737   69222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 21:06:51.070262   69222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 21:06:51.093743   69222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 21:06:51.116959   69222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1204 21:06:51.166984   69222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 21:06:51.189609   69222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 21:06:51.211696   69222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 21:06:51.298596   69222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 21:06:51.323184   69222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 21:06:51.345757   69222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 21:06:51.374481   69222 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 21:06:51.392993   69222 ssh_runner.go:195] Run: openssl version
	I1204 21:06:51.399426   69222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 21:06:51.413149   69222 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 21:06:51.418572   69222 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 21:06:51.418647   69222 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 21:06:51.428419   69222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 21:06:51.444817   69222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 21:06:51.459651   69222 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:06:51.465448   69222 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:06:51.465521   69222 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:06:51.473889   69222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 21:06:51.489586   69222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 21:06:51.507873   69222 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 21:06:51.514292   69222 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 21:06:51.514362   69222 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 21:06:51.520540   69222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
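	The test -L / ln -fs commands above follow OpenSSL's hashed-directory convention: each CA file in /etc/ssl/certs gets a companion symlink named after its subject hash, which is what the earlier openssl x509 -hash calls compute. As a sketch of the pattern for one of the certificates in this run:
	    CERT=/etc/ssl/certs/minikubeCA.pem                  # itself linked from /usr/share/ca-certificates/minikubeCA.pem
	    HASH=$(openssl x509 -hash -noout -in "$CERT")       # b5213941 for this CA, matching the link created above
	    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"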
	I1204 21:06:51.532003   69222 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 21:06:51.535999   69222 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1204 21:06:51.536057   69222 kubeadm.go:392] StartCluster: {Name:old-k8s-version-082859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-082859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:06:51.536119   69222 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 21:06:51.536212   69222 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:06:51.579475   69222 cri.go:89] found id: ""
	I1204 21:06:51.579556   69222 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 21:06:51.589328   69222 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:06:51.598271   69222 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:06:51.607575   69222 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:06:51.607598   69222 kubeadm.go:157] found existing configuration files:
	
	I1204 21:06:51.607644   69222 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:06:51.616033   69222 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:06:51.616083   69222 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:06:51.624710   69222 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:06:51.633048   69222 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:06:51.633119   69222 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:06:51.641777   69222 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:06:51.650255   69222 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:06:51.650290   69222 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:06:51.659456   69222 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:06:51.667485   69222 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:06:51.667543   69222 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:06:51.676832   69222 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 21:06:51.790348   69222 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1204 21:06:51.790524   69222 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 21:06:51.945373   69222 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 21:06:51.945522   69222 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 21:06:51.945673   69222 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1204 21:06:52.130269   69222 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 21:06:52.212375   69222 out.go:235]   - Generating certificates and keys ...
	I1204 21:06:52.212489   69222 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 21:06:52.212579   69222 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 21:06:52.433869   69222 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1204 21:06:52.491821   69222 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1204 21:06:52.563296   69222 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1204 21:06:52.919905   69222 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1204 21:06:53.185061   69222 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1204 21:06:53.185463   69222 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-082859] and IPs [192.168.72.180 127.0.0.1 ::1]
	I1204 21:06:53.325067   69222 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1204 21:06:53.325370   69222 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-082859] and IPs [192.168.72.180 127.0.0.1 ::1]
	I1204 21:06:53.532760   69222 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1204 21:06:53.799210   69222 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1204 21:06:53.987903   69222 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1204 21:06:53.988313   69222 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 21:06:54.369145   69222 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 21:06:54.759695   69222 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 21:06:54.910421   69222 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 21:06:55.171196   69222 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 21:06:55.190958   69222 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 21:06:55.191118   69222 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 21:06:55.191188   69222 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 21:06:55.331366   69222 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 21:06:55.334235   69222 out.go:235]   - Booting up control plane ...
	I1204 21:06:55.334357   69222 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 21:06:55.342055   69222 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 21:06:55.343166   69222 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 21:06:55.344067   69222 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 21:06:55.348403   69222 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1204 21:07:35.341378   69222 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1204 21:07:35.341860   69222 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:07:35.342156   69222 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:07:40.342053   69222 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:07:40.342315   69222 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:07:50.340993   69222 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:07:50.341244   69222 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:08:10.340409   69222 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:08:10.340653   69222 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:08:50.342019   69222 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:08:50.342272   69222 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:08:50.342288   69222 kubeadm.go:310] 
	I1204 21:08:50.342338   69222 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1204 21:08:50.342397   69222 kubeadm.go:310] 		timed out waiting for the condition
	I1204 21:08:50.342407   69222 kubeadm.go:310] 
	I1204 21:08:50.342461   69222 kubeadm.go:310] 	This error is likely caused by:
	I1204 21:08:50.342512   69222 kubeadm.go:310] 		- The kubelet is not running
	I1204 21:08:50.342676   69222 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1204 21:08:50.342701   69222 kubeadm.go:310] 
	I1204 21:08:50.342853   69222 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1204 21:08:50.342908   69222 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1204 21:08:50.342960   69222 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1204 21:08:50.342968   69222 kubeadm.go:310] 
	I1204 21:08:50.343114   69222 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1204 21:08:50.343231   69222 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1204 21:08:50.343242   69222 kubeadm.go:310] 
	I1204 21:08:50.343412   69222 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1204 21:08:50.343533   69222 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1204 21:08:50.343633   69222 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1204 21:08:50.343729   69222 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1204 21:08:50.343741   69222 kubeadm.go:310] 
	I1204 21:08:50.344141   69222 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 21:08:50.344280   69222 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1204 21:08:50.344414   69222 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1204 21:08:50.344545   69222 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-082859] and IPs [192.168.72.180 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-082859] and IPs [192.168.72.180 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-082859] and IPs [192.168.72.180 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-082859] and IPs [192.168.72.180 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1204 21:08:50.344587   69222 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1204 21:08:51.741208   69222 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.396590033s)
	I1204 21:08:51.741291   69222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:08:51.756061   69222 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:08:51.766202   69222 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:08:51.766221   69222 kubeadm.go:157] found existing configuration files:
	
	I1204 21:08:51.766260   69222 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:08:51.776249   69222 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:08:51.776310   69222 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:08:51.789613   69222 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:08:51.801082   69222 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:08:51.801137   69222 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:08:51.813206   69222 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:08:51.822950   69222 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:08:51.823019   69222 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:08:51.832564   69222 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:08:51.841685   69222 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:08:51.841732   69222 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:08:51.853275   69222 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 21:08:51.935638   69222 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1204 21:08:51.935736   69222 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 21:08:52.085211   69222 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 21:08:52.085351   69222 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 21:08:52.085529   69222 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1204 21:08:52.282835   69222 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 21:08:52.284892   69222 out.go:235]   - Generating certificates and keys ...
	I1204 21:08:52.285019   69222 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 21:08:52.285109   69222 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 21:08:52.286371   69222 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1204 21:08:52.286663   69222 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1204 21:08:52.287124   69222 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1204 21:08:52.287552   69222 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1204 21:08:52.288219   69222 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1204 21:08:52.288755   69222 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1204 21:08:52.289431   69222 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1204 21:08:52.290061   69222 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1204 21:08:52.290133   69222 kubeadm.go:310] [certs] Using the existing "sa" key
	I1204 21:08:52.290217   69222 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 21:08:52.412015   69222 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 21:08:52.642300   69222 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 21:08:52.731463   69222 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 21:08:52.918750   69222 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 21:08:52.939876   69222 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 21:08:52.941189   69222 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 21:08:52.941257   69222 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 21:08:53.113741   69222 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 21:08:53.115073   69222 out.go:235]   - Booting up control plane ...
	I1204 21:08:53.115198   69222 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 21:08:53.134630   69222 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 21:08:53.136000   69222 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 21:08:53.136991   69222 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 21:08:53.140436   69222 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1204 21:09:33.143579   69222 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1204 21:09:33.143833   69222 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:09:33.144086   69222 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:09:38.144594   69222 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:09:38.144820   69222 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:09:48.145338   69222 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:09:48.145589   69222 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:10:08.145027   69222 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:10:08.145275   69222 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:10:48.145430   69222 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:10:48.145680   69222 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:10:48.145704   69222 kubeadm.go:310] 
	I1204 21:10:48.145757   69222 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1204 21:10:48.145834   69222 kubeadm.go:310] 		timed out waiting for the condition
	I1204 21:10:48.145851   69222 kubeadm.go:310] 
	I1204 21:10:48.145894   69222 kubeadm.go:310] 	This error is likely caused by:
	I1204 21:10:48.145932   69222 kubeadm.go:310] 		- The kubelet is not running
	I1204 21:10:48.146067   69222 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1204 21:10:48.146095   69222 kubeadm.go:310] 
	I1204 21:10:48.146235   69222 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1204 21:10:48.146287   69222 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1204 21:10:48.146332   69222 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1204 21:10:48.146347   69222 kubeadm.go:310] 
	I1204 21:10:48.146493   69222 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1204 21:10:48.146615   69222 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1204 21:10:48.146626   69222 kubeadm.go:310] 
	I1204 21:10:48.146788   69222 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1204 21:10:48.146912   69222 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1204 21:10:48.147012   69222 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1204 21:10:48.147123   69222 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1204 21:10:48.147134   69222 kubeadm.go:310] 
	I1204 21:10:48.147846   69222 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 21:10:48.147922   69222 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1204 21:10:48.147979   69222 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1204 21:10:48.148058   69222 kubeadm.go:394] duration metric: took 3m56.612005503s to StartCluster
	I1204 21:10:48.148114   69222 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:10:48.148181   69222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:10:48.211992   69222 cri.go:89] found id: ""
	I1204 21:10:48.212019   69222 logs.go:282] 0 containers: []
	W1204 21:10:48.212027   69222 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:10:48.212034   69222 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:10:48.212095   69222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:10:48.243824   69222 cri.go:89] found id: ""
	I1204 21:10:48.243850   69222 logs.go:282] 0 containers: []
	W1204 21:10:48.243858   69222 logs.go:284] No container was found matching "etcd"
	I1204 21:10:48.243864   69222 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:10:48.243913   69222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:10:48.274255   69222 cri.go:89] found id: ""
	I1204 21:10:48.274287   69222 logs.go:282] 0 containers: []
	W1204 21:10:48.274296   69222 logs.go:284] No container was found matching "coredns"
	I1204 21:10:48.274302   69222 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:10:48.274352   69222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:10:48.304084   69222 cri.go:89] found id: ""
	I1204 21:10:48.304119   69222 logs.go:282] 0 containers: []
	W1204 21:10:48.304126   69222 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:10:48.304132   69222 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:10:48.304179   69222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:10:48.335786   69222 cri.go:89] found id: ""
	I1204 21:10:48.335815   69222 logs.go:282] 0 containers: []
	W1204 21:10:48.335822   69222 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:10:48.335829   69222 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:10:48.335882   69222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:10:48.367275   69222 cri.go:89] found id: ""
	I1204 21:10:48.367305   69222 logs.go:282] 0 containers: []
	W1204 21:10:48.367313   69222 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:10:48.367319   69222 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:10:48.367365   69222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:10:48.399155   69222 cri.go:89] found id: ""
	I1204 21:10:48.399187   69222 logs.go:282] 0 containers: []
	W1204 21:10:48.399201   69222 logs.go:284] No container was found matching "kindnet"
	I1204 21:10:48.399212   69222 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:10:48.399222   69222 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:10:48.504270   69222 logs.go:123] Gathering logs for container status ...
	I1204 21:10:48.504312   69222 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:10:48.540719   69222 logs.go:123] Gathering logs for kubelet ...
	I1204 21:10:48.540745   69222 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:10:48.588622   69222 logs.go:123] Gathering logs for dmesg ...
	I1204 21:10:48.588652   69222 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:10:48.601088   69222 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:10:48.601111   69222 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:10:48.710246   69222 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1204 21:10:48.710270   69222 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1204 21:10:48.710334   69222 out.go:270] * 
	* 
	W1204 21:10:48.710397   69222 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1204 21:10:48.710417   69222 out.go:270] * 
	* 
	W1204 21:10:48.711272   69222 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 21:10:48.714337   69222 out.go:201] 
	W1204 21:10:48.715712   69222 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1204 21:10:48.715763   69222 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1204 21:10:48.715788   69222 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1204 21:10:48.717396   69222 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-082859 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-082859 -n old-k8s-version-082859
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-082859 -n old-k8s-version-082859: exit status 6 (225.285226ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1204 21:10:48.982163   74531 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-082859" does not appear in /home/jenkins/minikube-integration/19985-10581/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-082859" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (273.90s)
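
Note: the kubeadm output above already names the next diagnostic steps. A minimal sketch of running them against this profile over SSH (assuming the old-k8s-version-082859 VM is still reachable), followed by a retry with the cgroup-driver setting suggested in the log; other flags from the original start invocation are omitted here for brevity:

	out/minikube-linux-amd64 ssh -p old-k8s-version-082859 -- sudo systemctl status kubelet
	out/minikube-linux-amd64 ssh -p old-k8s-version-082859 -- sudo journalctl -xeu kubelet --no-pager
	out/minikube-linux-amd64 ssh -p old-k8s-version-082859 -- "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	out/minikube-linux-amd64 start -p old-k8s-version-082859 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd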

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-534766 --alsologtostderr -v=3
E1204 21:09:02.692527   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/auto-272234/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-534766 --alsologtostderr -v=3: exit status 82 (2m0.537283046s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-534766"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 21:09:00.624233   73861 out.go:345] Setting OutFile to fd 1 ...
	I1204 21:09:00.624354   73861 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 21:09:00.624365   73861 out.go:358] Setting ErrFile to fd 2...
	I1204 21:09:00.624373   73861 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 21:09:00.624688   73861 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19985-10581/.minikube/bin
	I1204 21:09:00.624978   73861 out.go:352] Setting JSON to false
	I1204 21:09:00.625076   73861 mustload.go:65] Loading cluster: no-preload-534766
	I1204 21:09:00.625602   73861 config.go:182] Loaded profile config "no-preload-534766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:09:00.625701   73861 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/config.json ...
	I1204 21:09:00.625909   73861 mustload.go:65] Loading cluster: no-preload-534766
	I1204 21:09:00.626080   73861 config.go:182] Loaded profile config "no-preload-534766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:09:00.626122   73861 stop.go:39] StopHost: no-preload-534766
	I1204 21:09:00.626714   73861 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:09:00.626774   73861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:09:00.648649   73861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34167
	I1204 21:09:00.649273   73861 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:09:00.650095   73861 main.go:141] libmachine: Using API Version  1
	I1204 21:09:00.650120   73861 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:09:00.650526   73861 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:09:00.653200   73861 out.go:177] * Stopping node "no-preload-534766"  ...
	I1204 21:09:00.654692   73861 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1204 21:09:00.654746   73861 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:09:00.655095   73861 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1204 21:09:00.655125   73861 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:09:00.658965   73861 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:09:00.659451   73861 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:07:21 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:09:00.659488   73861 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:09:00.659695   73861 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:09:00.659915   73861 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:09:00.660083   73861 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:09:00.660285   73861 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:09:00.770558   73861 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1204 21:09:00.834108   73861 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1204 21:09:00.900601   73861 main.go:141] libmachine: Stopping "no-preload-534766"...
	I1204 21:09:00.900638   73861 main.go:141] libmachine: (no-preload-534766) Calling .GetState
	I1204 21:09:00.902368   73861 main.go:141] libmachine: (no-preload-534766) Calling .Stop
	I1204 21:09:00.905822   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 0/120
	I1204 21:09:01.907405   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 1/120
	I1204 21:09:02.908773   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 2/120
	I1204 21:09:03.910577   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 3/120
	I1204 21:09:04.912216   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 4/120
	I1204 21:09:05.914563   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 5/120
	I1204 21:09:06.915982   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 6/120
	I1204 21:09:07.917417   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 7/120
	I1204 21:09:08.919099   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 8/120
	I1204 21:09:09.920755   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 9/120
	I1204 21:09:10.922817   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 10/120
	I1204 21:09:11.924197   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 11/120
	I1204 21:09:12.925628   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 12/120
	I1204 21:09:13.927009   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 13/120
	I1204 21:09:14.928350   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 14/120
	I1204 21:09:15.930286   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 15/120
	I1204 21:09:16.931659   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 16/120
	I1204 21:09:17.933145   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 17/120
	I1204 21:09:18.934542   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 18/120
	I1204 21:09:19.935879   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 19/120
	I1204 21:09:20.938320   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 20/120
	I1204 21:09:21.939891   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 21/120
	I1204 21:09:22.941865   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 22/120
	I1204 21:09:23.943600   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 23/120
	I1204 21:09:24.946147   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 24/120
	I1204 21:09:25.947974   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 25/120
	I1204 21:09:26.949567   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 26/120
	I1204 21:09:27.951328   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 27/120
	I1204 21:09:28.952852   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 28/120
	I1204 21:09:29.954122   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 29/120
	I1204 21:09:30.956308   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 30/120
	I1204 21:09:31.958048   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 31/120
	I1204 21:09:32.959605   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 32/120
	I1204 21:09:33.961152   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 33/120
	I1204 21:09:34.963315   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 34/120
	I1204 21:09:35.965289   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 35/120
	I1204 21:09:36.966723   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 36/120
	I1204 21:09:37.968146   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 37/120
	I1204 21:09:38.969528   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 38/120
	I1204 21:09:39.970701   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 39/120
	I1204 21:09:40.972710   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 40/120
	I1204 21:09:41.974256   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 41/120
	I1204 21:09:42.975726   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 42/120
	I1204 21:09:43.977329   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 43/120
	I1204 21:09:44.979510   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 44/120
	I1204 21:09:45.981834   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 45/120
	I1204 21:09:46.983077   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 46/120
	I1204 21:09:47.984565   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 47/120
	I1204 21:09:48.985845   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 48/120
	I1204 21:09:49.987232   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 49/120
	I1204 21:09:50.989487   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 50/120
	I1204 21:09:51.990741   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 51/120
	I1204 21:09:52.992262   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 52/120
	I1204 21:09:53.993678   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 53/120
	I1204 21:09:54.995207   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 54/120
	I1204 21:09:55.997105   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 55/120
	I1204 21:09:56.998969   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 56/120
	I1204 21:09:58.000267   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 57/120
	I1204 21:09:59.001700   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 58/120
	I1204 21:10:00.002857   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 59/120
	I1204 21:10:01.005075   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 60/120
	I1204 21:10:02.006175   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 61/120
	I1204 21:10:03.007360   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 62/120
	I1204 21:10:04.008408   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 63/120
	I1204 21:10:05.009978   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 64/120
	I1204 21:10:06.011563   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 65/120
	I1204 21:10:07.013241   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 66/120
	I1204 21:10:08.014469   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 67/120
	I1204 21:10:09.015800   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 68/120
	I1204 21:10:10.017703   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 69/120
	I1204 21:10:11.019644   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 70/120
	I1204 21:10:12.021131   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 71/120
	I1204 21:10:13.022502   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 72/120
	I1204 21:10:14.023534   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 73/120
	I1204 21:10:15.024887   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 74/120
	I1204 21:10:16.026773   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 75/120
	I1204 21:10:17.028257   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 76/120
	I1204 21:10:18.029736   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 77/120
	I1204 21:10:19.031210   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 78/120
	I1204 21:10:20.032702   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 79/120
	I1204 21:10:21.034962   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 80/120
	I1204 21:10:22.036336   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 81/120
	I1204 21:10:23.037694   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 82/120
	I1204 21:10:24.039024   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 83/120
	I1204 21:10:25.040523   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 84/120
	I1204 21:10:26.042002   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 85/120
	I1204 21:10:27.043335   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 86/120
	I1204 21:10:28.045350   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 87/120
	I1204 21:10:29.046640   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 88/120
	I1204 21:10:30.047923   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 89/120
	I1204 21:10:31.050251   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 90/120
	I1204 21:10:32.051614   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 91/120
	I1204 21:10:33.052818   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 92/120
	I1204 21:10:34.054176   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 93/120
	I1204 21:10:35.055344   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 94/120
	I1204 21:10:36.056834   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 95/120
	I1204 21:10:37.058234   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 96/120
	I1204 21:10:38.059515   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 97/120
	I1204 21:10:39.060880   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 98/120
	I1204 21:10:40.062155   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 99/120
	I1204 21:10:41.064318   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 100/120
	I1204 21:10:42.065767   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 101/120
	I1204 21:10:43.067165   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 102/120
	I1204 21:10:44.068641   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 103/120
	I1204 21:10:45.069811   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 104/120
	I1204 21:10:46.071609   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 105/120
	I1204 21:10:47.072714   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 106/120
	I1204 21:10:48.074121   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 107/120
	I1204 21:10:49.075849   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 108/120
	I1204 21:10:50.077203   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 109/120
	I1204 21:10:51.079194   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 110/120
	I1204 21:10:52.080582   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 111/120
	I1204 21:10:53.081995   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 112/120
	I1204 21:10:54.083307   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 113/120
	I1204 21:10:55.084666   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 114/120
	I1204 21:10:56.086598   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 115/120
	I1204 21:10:57.088148   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 116/120
	I1204 21:10:58.089489   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 117/120
	I1204 21:10:59.090877   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 118/120
	I1204 21:11:00.092482   73861 main.go:141] libmachine: (no-preload-534766) Waiting for machine to stop 119/120
	I1204 21:11:01.093808   73861 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1204 21:11:01.093866   73861 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1204 21:11:01.095958   73861 out.go:201] 
	W1204 21:11:01.097311   73861 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1204 21:11:01.097327   73861 out.go:270] * 
	* 
	W1204 21:11:01.100045   73861 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 21:11:01.101968   73861 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-534766 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-534766 -n no-preload-534766
E1204 21:11:07.519249   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/enable-default-cni-272234/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-534766 -n no-preload-534766: exit status 3 (18.500013275s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1204 21:11:19.603754   74742 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.174:22: connect: no route to host
	E1204 21:11:19.603770   74742 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.174:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-534766" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.04s)
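
Note: per the suggestion box above, a sketch for collecting the requested log bundle and, as a last resort, powering the VM off at the libvirt level (the driver log above shows the domain is named after the profile, no-preload-534766):

	out/minikube-linux-amd64 logs --file=logs.txt -p no-preload-534766
	sudo virsh list --all
	sudo virsh destroy no-preload-534766   # hard power-off, only if 'minikube stop' keeps timing out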

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-566991 --alsologtostderr -v=3
E1204 21:09:38.216203   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kindnet-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:09:38.222604   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kindnet-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:09:38.234008   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kindnet-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:09:38.255427   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kindnet-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:09:38.296926   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kindnet-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:09:38.378957   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kindnet-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:09:38.540555   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kindnet-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:09:38.862256   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kindnet-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:09:39.504104   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kindnet-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:09:40.786407   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kindnet-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:09:43.347979   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kindnet-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:09:48.470338   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kindnet-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:09:50.951094   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/custom-flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:09:52.902919   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:09:58.711754   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kindnet-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:10:15.225409   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:10:15.231772   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:10:15.243159   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:10:15.264521   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:10:15.305890   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:10:15.387326   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:10:15.548859   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:10:15.870548   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:10:16.512727   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:10:17.794030   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:10:19.193676   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kindnet-272234/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-566991 --alsologtostderr -v=3: exit status 82 (2m0.478112837s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-566991"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 21:09:10.627167   73975 out.go:345] Setting OutFile to fd 1 ...
	I1204 21:09:10.627273   73975 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 21:09:10.627282   73975 out.go:358] Setting ErrFile to fd 2...
	I1204 21:09:10.627286   73975 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 21:09:10.627508   73975 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19985-10581/.minikube/bin
	I1204 21:09:10.627728   73975 out.go:352] Setting JSON to false
	I1204 21:09:10.627795   73975 mustload.go:65] Loading cluster: embed-certs-566991
	I1204 21:09:10.628268   73975 config.go:182] Loaded profile config "embed-certs-566991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:09:10.628373   73975 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/config.json ...
	I1204 21:09:10.628607   73975 mustload.go:65] Loading cluster: embed-certs-566991
	I1204 21:09:10.628764   73975 config.go:182] Loaded profile config "embed-certs-566991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:09:10.628801   73975 stop.go:39] StopHost: embed-certs-566991
	I1204 21:09:10.629167   73975 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:09:10.629209   73975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:09:10.644220   73975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40613
	I1204 21:09:10.644668   73975 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:09:10.645206   73975 main.go:141] libmachine: Using API Version  1
	I1204 21:09:10.645234   73975 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:09:10.645562   73975 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:09:10.647823   73975 out.go:177] * Stopping node "embed-certs-566991"  ...
	I1204 21:09:10.648956   73975 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1204 21:09:10.648987   73975 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:09:10.649220   73975 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1204 21:09:10.649247   73975 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:09:10.651973   73975 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:09:10.652381   73975 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:07:47 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:09:10.652414   73975 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:09:10.652487   73975 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:09:10.652654   73975 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:09:10.652785   73975 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:09:10.652868   73975 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:09:10.740817   73975 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1204 21:09:10.798098   73975 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1204 21:09:10.857393   73975 main.go:141] libmachine: Stopping "embed-certs-566991"...
	I1204 21:09:10.857427   73975 main.go:141] libmachine: (embed-certs-566991) Calling .GetState
	I1204 21:09:10.858985   73975 main.go:141] libmachine: (embed-certs-566991) Calling .Stop
	I1204 21:09:10.862370   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 0/120
	I1204 21:09:11.863976   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 1/120
	I1204 21:09:12.865397   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 2/120
	I1204 21:09:13.866889   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 3/120
	I1204 21:09:14.868219   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 4/120
	I1204 21:09:15.870165   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 5/120
	I1204 21:09:16.871711   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 6/120
	I1204 21:09:17.873314   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 7/120
	I1204 21:09:18.874726   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 8/120
	I1204 21:09:19.876202   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 9/120
	I1204 21:09:20.877826   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 10/120
	I1204 21:09:21.879712   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 11/120
	I1204 21:09:22.881875   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 12/120
	I1204 21:09:23.883171   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 13/120
	I1204 21:09:24.885231   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 14/120
	I1204 21:09:25.886689   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 15/120
	I1204 21:09:26.888228   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 16/120
	I1204 21:09:27.889889   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 17/120
	I1204 21:09:28.891295   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 18/120
	I1204 21:09:29.892862   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 19/120
	I1204 21:09:30.895134   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 20/120
	I1204 21:09:31.896641   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 21/120
	I1204 21:09:32.898271   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 22/120
	I1204 21:09:33.899856   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 23/120
	I1204 21:09:34.901277   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 24/120
	I1204 21:09:35.903446   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 25/120
	I1204 21:09:36.904795   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 26/120
	I1204 21:09:37.906202   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 27/120
	I1204 21:09:38.907757   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 28/120
	I1204 21:09:39.909366   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 29/120
	I1204 21:09:40.910798   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 30/120
	I1204 21:09:41.912108   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 31/120
	I1204 21:09:42.914058   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 32/120
	I1204 21:09:43.915863   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 33/120
	I1204 21:09:44.917397   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 34/120
	I1204 21:09:45.919403   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 35/120
	I1204 21:09:46.920813   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 36/120
	I1204 21:09:47.922396   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 37/120
	I1204 21:09:48.923793   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 38/120
	I1204 21:09:49.925323   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 39/120
	I1204 21:09:50.927751   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 40/120
	I1204 21:09:51.929801   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 41/120
	I1204 21:09:52.931914   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 42/120
	I1204 21:09:53.933647   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 43/120
	I1204 21:09:54.935205   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 44/120
	I1204 21:09:55.937359   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 45/120
	I1204 21:09:56.938782   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 46/120
	I1204 21:09:57.940108   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 47/120
	I1204 21:09:58.941557   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 48/120
	I1204 21:09:59.942973   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 49/120
	I1204 21:10:00.945237   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 50/120
	I1204 21:10:01.946434   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 51/120
	I1204 21:10:02.947931   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 52/120
	I1204 21:10:03.949104   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 53/120
	I1204 21:10:04.950503   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 54/120
	I1204 21:10:05.952332   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 55/120
	I1204 21:10:06.953479   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 56/120
	I1204 21:10:07.954744   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 57/120
	I1204 21:10:08.956160   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 58/120
	I1204 21:10:09.957516   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 59/120
	I1204 21:10:10.959412   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 60/120
	I1204 21:10:11.960756   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 61/120
	I1204 21:10:12.962273   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 62/120
	I1204 21:10:13.963552   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 63/120
	I1204 21:10:14.965045   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 64/120
	I1204 21:10:15.966986   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 65/120
	I1204 21:10:16.968570   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 66/120
	I1204 21:10:17.969905   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 67/120
	I1204 21:10:18.971335   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 68/120
	I1204 21:10:19.972850   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 69/120
	I1204 21:10:20.974931   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 70/120
	I1204 21:10:21.976532   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 71/120
	I1204 21:10:22.977857   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 72/120
	I1204 21:10:23.979560   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 73/120
	I1204 21:10:24.981747   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 74/120
	I1204 21:10:25.983650   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 75/120
	I1204 21:10:26.985188   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 76/120
	I1204 21:10:27.986590   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 77/120
	I1204 21:10:28.988030   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 78/120
	I1204 21:10:29.989400   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 79/120
	I1204 21:10:30.991573   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 80/120
	I1204 21:10:31.992817   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 81/120
	I1204 21:10:32.994281   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 82/120
	I1204 21:10:33.995808   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 83/120
	I1204 21:10:34.997313   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 84/120
	I1204 21:10:35.999628   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 85/120
	I1204 21:10:37.000878   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 86/120
	I1204 21:10:38.002228   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 87/120
	I1204 21:10:39.003841   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 88/120
	I1204 21:10:40.005399   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 89/120
	I1204 21:10:41.007495   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 90/120
	I1204 21:10:42.008951   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 91/120
	I1204 21:10:43.010495   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 92/120
	I1204 21:10:44.011917   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 93/120
	I1204 21:10:45.013305   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 94/120
	I1204 21:10:46.015481   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 95/120
	I1204 21:10:47.016870   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 96/120
	I1204 21:10:48.018237   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 97/120
	I1204 21:10:49.019421   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 98/120
	I1204 21:10:50.020868   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 99/120
	I1204 21:10:51.023038   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 100/120
	I1204 21:10:52.024582   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 101/120
	I1204 21:10:53.026170   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 102/120
	I1204 21:10:54.027704   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 103/120
	I1204 21:10:55.029120   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 104/120
	I1204 21:10:56.030952   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 105/120
	I1204 21:10:57.032324   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 106/120
	I1204 21:10:58.033692   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 107/120
	I1204 21:10:59.034986   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 108/120
	I1204 21:11:00.036408   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 109/120
	I1204 21:11:01.038555   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 110/120
	I1204 21:11:02.040179   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 111/120
	I1204 21:11:03.041670   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 112/120
	I1204 21:11:04.043265   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 113/120
	I1204 21:11:05.044974   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 114/120
	I1204 21:11:06.046813   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 115/120
	I1204 21:11:07.048319   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 116/120
	I1204 21:11:08.049789   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 117/120
	I1204 21:11:09.051312   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 118/120
	I1204 21:11:10.052697   73975 main.go:141] libmachine: (embed-certs-566991) Waiting for machine to stop 119/120
	I1204 21:11:11.053411   73975 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1204 21:11:11.053464   73975 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1204 21:11:11.055313   73975 out.go:201] 
	W1204 21:11:11.056717   73975 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1204 21:11:11.056731   73975 out.go:270] * 
	* 
	W1204 21:11:11.059304   73975 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 21:11:11.060646   73975 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-566991 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-566991 -n embed-certs-566991
E1204 21:11:12.873127   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/custom-flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:11:15.973629   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-566991 -n embed-certs-566991: exit status 3 (18.52534381s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1204 21:11:29.587829   74808 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.82:22: connect: no route to host
	E1204 21:11:29.587853   74808 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.82:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-566991" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (138.94s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-439360 --alsologtostderr -v=3
E1204 21:10:35.719571   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:10:47.025349   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/enable-default-cni-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:10:47.031668   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/enable-default-cni-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:10:47.042991   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/enable-default-cni-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:10:47.064301   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/enable-default-cni-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:10:47.106030   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/enable-default-cni-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:10:47.187494   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/enable-default-cni-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:10:47.349161   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/enable-default-cni-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:10:47.671073   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/enable-default-cni-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:10:48.312520   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/enable-default-cni-272234/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-439360 --alsologtostderr -v=3: exit status 82 (2m0.500233394s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-439360"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 21:10:31.344020   74458 out.go:345] Setting OutFile to fd 1 ...
	I1204 21:10:31.344204   74458 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 21:10:31.344218   74458 out.go:358] Setting ErrFile to fd 2...
	I1204 21:10:31.344225   74458 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 21:10:31.344507   74458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19985-10581/.minikube/bin
	I1204 21:10:31.344822   74458 out.go:352] Setting JSON to false
	I1204 21:10:31.344922   74458 mustload.go:65] Loading cluster: default-k8s-diff-port-439360
	I1204 21:10:31.345489   74458 config.go:182] Loaded profile config "default-k8s-diff-port-439360": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:10:31.345606   74458 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/config.json ...
	I1204 21:10:31.345861   74458 mustload.go:65] Loading cluster: default-k8s-diff-port-439360
	I1204 21:10:31.346028   74458 config.go:182] Loaded profile config "default-k8s-diff-port-439360": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:10:31.346063   74458 stop.go:39] StopHost: default-k8s-diff-port-439360
	I1204 21:10:31.346606   74458 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:10:31.346655   74458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:10:31.362747   74458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34941
	I1204 21:10:31.363332   74458 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:10:31.363935   74458 main.go:141] libmachine: Using API Version  1
	I1204 21:10:31.363963   74458 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:10:31.364340   74458 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:10:31.366981   74458 out.go:177] * Stopping node "default-k8s-diff-port-439360"  ...
	I1204 21:10:31.368357   74458 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1204 21:10:31.368394   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:10:31.368689   74458 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1204 21:10:31.368726   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:10:31.371873   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:10:31.372385   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:09:08 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:10:31.372420   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:10:31.372552   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:10:31.372730   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:10:31.372891   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:10:31.373029   74458 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa Username:docker}
	I1204 21:10:31.472278   74458 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1204 21:10:31.529832   74458 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1204 21:10:31.586901   74458 main.go:141] libmachine: Stopping "default-k8s-diff-port-439360"...
	I1204 21:10:31.586938   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetState
	I1204 21:10:31.588708   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Stop
	I1204 21:10:31.592808   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 0/120
	I1204 21:10:32.594115   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 1/120
	I1204 21:10:33.595325   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 2/120
	I1204 21:10:34.596768   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 3/120
	I1204 21:10:35.598153   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 4/120
	I1204 21:10:36.600208   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 5/120
	I1204 21:10:37.601651   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 6/120
	I1204 21:10:38.603489   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 7/120
	I1204 21:10:39.604986   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 8/120
	I1204 21:10:40.606487   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 9/120
	I1204 21:10:41.608094   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 10/120
	I1204 21:10:42.609468   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 11/120
	I1204 21:10:43.610759   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 12/120
	I1204 21:10:44.612217   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 13/120
	I1204 21:10:45.613538   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 14/120
	I1204 21:10:46.615252   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 15/120
	I1204 21:10:47.616765   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 16/120
	I1204 21:10:48.618338   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 17/120
	I1204 21:10:49.620058   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 18/120
	I1204 21:10:50.621327   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 19/120
	I1204 21:10:51.623566   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 20/120
	I1204 21:10:52.625009   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 21/120
	I1204 21:10:53.626332   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 22/120
	I1204 21:10:54.628090   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 23/120
	I1204 21:10:55.629799   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 24/120
	I1204 21:10:56.632081   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 25/120
	I1204 21:10:57.633382   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 26/120
	I1204 21:10:58.634726   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 27/120
	I1204 21:10:59.636390   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 28/120
	I1204 21:11:00.637855   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 29/120
	I1204 21:11:01.640205   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 30/120
	I1204 21:11:02.641644   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 31/120
	I1204 21:11:03.643103   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 32/120
	I1204 21:11:04.644620   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 33/120
	I1204 21:11:05.645956   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 34/120
	I1204 21:11:06.648144   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 35/120
	I1204 21:11:07.649465   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 36/120
	I1204 21:11:08.650897   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 37/120
	I1204 21:11:09.652396   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 38/120
	I1204 21:11:10.653725   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 39/120
	I1204 21:11:11.655171   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 40/120
	I1204 21:11:12.656557   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 41/120
	I1204 21:11:13.658194   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 42/120
	I1204 21:11:14.659500   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 43/120
	I1204 21:11:15.661014   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 44/120
	I1204 21:11:16.663022   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 45/120
	I1204 21:11:17.664561   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 46/120
	I1204 21:11:18.665774   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 47/120
	I1204 21:11:19.667191   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 48/120
	I1204 21:11:20.668572   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 49/120
	I1204 21:11:21.670824   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 50/120
	I1204 21:11:22.672465   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 51/120
	I1204 21:11:23.674123   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 52/120
	I1204 21:11:24.676358   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 53/120
	I1204 21:11:25.677990   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 54/120
	I1204 21:11:26.680047   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 55/120
	I1204 21:11:27.681465   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 56/120
	I1204 21:11:28.682946   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 57/120
	I1204 21:11:29.684077   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 58/120
	I1204 21:11:30.685514   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 59/120
	I1204 21:11:31.687987   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 60/120
	I1204 21:11:32.689374   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 61/120
	I1204 21:11:33.690836   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 62/120
	I1204 21:11:34.692119   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 63/120
	I1204 21:11:35.693628   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 64/120
	I1204 21:11:36.695916   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 65/120
	I1204 21:11:37.697249   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 66/120
	I1204 21:11:38.698963   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 67/120
	I1204 21:11:39.700373   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 68/120
	I1204 21:11:40.701962   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 69/120
	I1204 21:11:41.704247   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 70/120
	I1204 21:11:42.705935   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 71/120
	I1204 21:11:43.707434   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 72/120
	I1204 21:11:44.708860   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 73/120
	I1204 21:11:45.710348   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 74/120
	I1204 21:11:46.712589   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 75/120
	I1204 21:11:47.713891   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 76/120
	I1204 21:11:48.715677   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 77/120
	I1204 21:11:49.717089   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 78/120
	I1204 21:11:50.718675   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 79/120
	I1204 21:11:51.721028   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 80/120
	I1204 21:11:52.722413   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 81/120
	I1204 21:11:53.723931   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 82/120
	I1204 21:11:54.725872   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 83/120
	I1204 21:11:55.727297   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 84/120
	I1204 21:11:56.729333   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 85/120
	I1204 21:11:57.731762   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 86/120
	I1204 21:11:58.733301   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 87/120
	I1204 21:11:59.734770   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 88/120
	I1204 21:12:00.736395   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 89/120
	I1204 21:12:01.738561   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 90/120
	I1204 21:12:02.740224   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 91/120
	I1204 21:12:03.741727   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 92/120
	I1204 21:12:04.743180   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 93/120
	I1204 21:12:05.744507   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 94/120
	I1204 21:12:06.746262   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 95/120
	I1204 21:12:07.747634   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 96/120
	I1204 21:12:08.748926   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 97/120
	I1204 21:12:09.750206   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 98/120
	I1204 21:12:10.751713   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 99/120
	I1204 21:12:11.753856   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 100/120
	I1204 21:12:12.755439   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 101/120
	I1204 21:12:13.756946   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 102/120
	I1204 21:12:14.758366   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 103/120
	I1204 21:12:15.759588   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 104/120
	I1204 21:12:16.761613   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 105/120
	I1204 21:12:17.762963   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 106/120
	I1204 21:12:18.764268   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 107/120
	I1204 21:12:19.765533   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 108/120
	I1204 21:12:20.766608   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 109/120
	I1204 21:12:21.769007   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 110/120
	I1204 21:12:22.770496   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 111/120
	I1204 21:12:23.771836   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 112/120
	I1204 21:12:24.773276   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 113/120
	I1204 21:12:25.774745   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 114/120
	I1204 21:12:26.776762   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 115/120
	I1204 21:12:27.778126   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 116/120
	I1204 21:12:28.779531   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 117/120
	I1204 21:12:29.780745   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 118/120
	I1204 21:12:30.782165   74458 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for machine to stop 119/120
	I1204 21:12:31.783238   74458 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1204 21:12:31.783306   74458 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1204 21:12:31.785341   74458 out.go:201] 
	W1204 21:12:31.786565   74458 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1204 21:12:31.786586   74458 out.go:270] * 
	* 
	W1204 21:12:31.789238   74458 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 21:12:31.790483   74458 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-439360 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-439360 -n default-k8s-diff-port-439360
E1204 21:12:40.754000   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/auto-272234/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-439360 -n default-k8s-diff-port-439360: exit status 3 (18.435276676s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1204 21:12:50.227785   75532 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.171:22: connect: no route to host
	E1204 21:12:50.227806   75532 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.171:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-439360" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (138.94s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-082859 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-082859 create -f testdata/busybox.yaml: exit status 1 (42.19624ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-082859" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-082859 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-082859 -n old-k8s-version-082859
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-082859 -n old-k8s-version-082859: exit status 6 (214.930845ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1204 21:10:49.240561   74569 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-082859" does not appear in /home/jenkins/minikube-integration/19985-10581/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-082859" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-082859 -n old-k8s-version-082859
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-082859 -n old-k8s-version-082859: exit status 6 (216.891355ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1204 21:10:49.457693   74599 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-082859" does not appear in /home/jenkins/minikube-integration/19985-10581/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-082859" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.47s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (86.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-082859 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1204 21:10:49.594152   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/enable-default-cni-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:10:52.155986   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/enable-default-cni-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:10:56.200943   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:10:57.277780   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/enable-default-cni-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:11:00.155852   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kindnet-272234/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-082859 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m25.885409525s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-082859 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-082859 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-082859 describe deploy/metrics-server -n kube-system: exit status 1 (42.437922ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-082859" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-082859 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-082859 -n old-k8s-version-082859
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-082859 -n old-k8s-version-082859: exit status 6 (215.632449ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1204 21:12:15.601216   75332 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-082859" does not appear in /home/jenkins/minikube-integration/19985-10581/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-082859" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (86.14s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-534766 -n no-preload-534766
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-534766 -n no-preload-534766: exit status 3 (3.167670983s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1204 21:11:22.771749   74854 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.174:22: connect: no route to host
	E1204 21:11:22.771769   74854 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.174:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-534766 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1204 21:11:28.001107   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/enable-default-cni-272234/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-534766 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.1522599s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.174:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-534766 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-534766 -n no-preload-534766
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-534766 -n no-preload-534766: exit status 3 (3.063258965s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1204 21:11:31.987715   74936 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.174:22: connect: no route to host
	E1204 21:11:31.987738   74936 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.174:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-534766" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-566991 -n embed-certs-566991
E1204 21:11:31.244904   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/calico-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:11:31.251311   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/calico-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:11:31.262651   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/calico-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:11:31.284021   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/calico-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:11:31.325442   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/calico-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:11:31.406923   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/calico-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:11:31.568511   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/calico-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:11:31.890468   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/calico-272234/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-566991 -n embed-certs-566991: exit status 3 (3.167618199s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1204 21:11:32.755730   74966 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.82:22: connect: no route to host
	E1204 21:11:32.755750   74966 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.82:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-566991 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1204 21:11:33.814281   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/calico-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:11:36.375609   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/calico-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:11:37.162916   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-566991 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152994721s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.82:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-566991 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-566991 -n embed-certs-566991
E1204 21:11:41.497750   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/calico-272234/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-566991 -n embed-certs-566991: exit status 3 (3.06262047s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1204 21:11:41.971750   75090 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.82:22: connect: no route to host
	E1204 21:11:41.971772   75090 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.82:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-566991" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (764.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-082859 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E1204 21:12:22.077577   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kindnet-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:12:26.275517   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/functional-763517/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:12:28.203864   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/bridge-272234/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-082859 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m40.787647574s)

                                                
                                                
-- stdout --
	* [old-k8s-version-082859] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19985
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19985-10581/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19985-10581/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-082859" primary control-plane node in "old-k8s-version-082859" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-082859" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 21:12:21.304871   75464 out.go:345] Setting OutFile to fd 1 ...
	I1204 21:12:21.305115   75464 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 21:12:21.305126   75464 out.go:358] Setting ErrFile to fd 2...
	I1204 21:12:21.305131   75464 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 21:12:21.305291   75464 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19985-10581/.minikube/bin
	I1204 21:12:21.305813   75464 out.go:352] Setting JSON to false
	I1204 21:12:21.306691   75464 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6891,"bootTime":1733339850,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 21:12:21.306752   75464 start.go:139] virtualization: kvm guest
	I1204 21:12:21.308898   75464 out.go:177] * [old-k8s-version-082859] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1204 21:12:21.310357   75464 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 21:12:21.310376   75464 notify.go:220] Checking for updates...
	I1204 21:12:21.312840   75464 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 21:12:21.313976   75464 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:12:21.315206   75464 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 21:12:21.316432   75464 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1204 21:12:21.317590   75464 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 21:12:21.319111   75464 config.go:182] Loaded profile config "old-k8s-version-082859": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1204 21:12:21.319520   75464 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:12:21.319585   75464 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:12:21.334823   75464 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44077
	I1204 21:12:21.335305   75464 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:12:21.335820   75464 main.go:141] libmachine: Using API Version  1
	I1204 21:12:21.335841   75464 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:12:21.336132   75464 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:12:21.336310   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:12:21.338090   75464 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1204 21:12:21.339343   75464 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 21:12:21.339669   75464 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:12:21.339720   75464 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:12:21.354895   75464 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38685
	I1204 21:12:21.355324   75464 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:12:21.355806   75464 main.go:141] libmachine: Using API Version  1
	I1204 21:12:21.355828   75464 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:12:21.356136   75464 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:12:21.356311   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:12:21.391987   75464 out.go:177] * Using the kvm2 driver based on existing profile
	I1204 21:12:21.393117   75464 start.go:297] selected driver: kvm2
	I1204 21:12:21.393137   75464 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-082859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-082859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:12:21.393252   75464 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 21:12:21.393894   75464 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 21:12:21.393954   75464 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19985-10581/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1204 21:12:21.409518   75464 install.go:137] /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1204 21:12:21.409911   75464 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 21:12:21.409945   75464 cni.go:84] Creating CNI manager for ""
	I1204 21:12:21.409990   75464 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:12:21.410023   75464 start.go:340] cluster config:
	{Name:old-k8s-version-082859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-082859 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:12:21.410137   75464 iso.go:125] acquiring lock: {Name:mk5fb0f3f6da76e6cd812291a551e1592ef2c232 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 21:12:21.411846   75464 out.go:177] * Starting "old-k8s-version-082859" primary control-plane node in "old-k8s-version-082859" cluster
	I1204 21:12:21.413191   75464 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1204 21:12:21.413231   75464 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1204 21:12:21.413242   75464 cache.go:56] Caching tarball of preloaded images
	I1204 21:12:21.413307   75464 preload.go:172] Found /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 21:12:21.413317   75464 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1204 21:12:21.413400   75464 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/config.json ...
	I1204 21:12:21.413568   75464 start.go:360] acquireMachinesLock for old-k8s-version-082859: {Name:mkf124e8b45170ae95981b24944344de6899c5b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 21:16:28.395920   75464 start.go:364] duration metric: took 4m6.982305139s to acquireMachinesLock for "old-k8s-version-082859"
	I1204 21:16:28.395992   75464 start.go:96] Skipping create...Using existing machine configuration
	I1204 21:16:28.396003   75464 fix.go:54] fixHost starting: 
	I1204 21:16:28.396456   75464 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:28.396521   75464 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:28.413833   75464 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32779
	I1204 21:16:28.414263   75464 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:28.414753   75464 main.go:141] libmachine: Using API Version  1
	I1204 21:16:28.414777   75464 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:28.415165   75464 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:28.415427   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:28.415603   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetState
	I1204 21:16:28.417090   75464 fix.go:112] recreateIfNeeded on old-k8s-version-082859: state=Stopped err=<nil>
	I1204 21:16:28.417125   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	W1204 21:16:28.417326   75464 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 21:16:28.419402   75464 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-082859" ...
	I1204 21:16:28.420626   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .Start
	I1204 21:16:28.420792   75464 main.go:141] libmachine: (old-k8s-version-082859) Ensuring networks are active...
	I1204 21:16:28.421532   75464 main.go:141] libmachine: (old-k8s-version-082859) Ensuring network default is active
	I1204 21:16:28.421902   75464 main.go:141] libmachine: (old-k8s-version-082859) Ensuring network mk-old-k8s-version-082859 is active
	I1204 21:16:28.422289   75464 main.go:141] libmachine: (old-k8s-version-082859) Getting domain xml...
	I1204 21:16:28.422943   75464 main.go:141] libmachine: (old-k8s-version-082859) Creating domain...
	I1204 21:16:29.678419   75464 main.go:141] libmachine: (old-k8s-version-082859) Waiting to get IP...
	I1204 21:16:29.679445   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:29.679839   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:29.679884   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:29.679807   76539 retry.go:31] will retry after 289.179197ms: waiting for machine to come up
	I1204 21:16:29.971185   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:29.971736   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:29.971767   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:29.971681   76539 retry.go:31] will retry after 303.202104ms: waiting for machine to come up
	I1204 21:16:30.277151   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:30.277652   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:30.277681   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:30.277613   76539 retry.go:31] will retry after 410.628355ms: waiting for machine to come up
	I1204 21:16:30.690254   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:30.690792   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:30.690822   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:30.690750   76539 retry.go:31] will retry after 505.05844ms: waiting for machine to come up
	I1204 21:16:31.197454   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:31.197914   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:31.197943   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:31.197868   76539 retry.go:31] will retry after 592.512014ms: waiting for machine to come up
	I1204 21:16:31.791677   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:31.792193   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:31.792218   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:31.792126   76539 retry.go:31] will retry after 898.531247ms: waiting for machine to come up
	I1204 21:16:32.692886   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:32.693288   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:32.693309   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:32.693246   76539 retry.go:31] will retry after 832.069841ms: waiting for machine to come up
	I1204 21:16:33.526732   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:33.527291   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:33.527324   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:33.527254   76539 retry.go:31] will retry after 962.847408ms: waiting for machine to come up
	I1204 21:16:34.491553   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:34.492032   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:34.492062   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:34.491983   76539 retry.go:31] will retry after 1.207785601s: waiting for machine to come up
	I1204 21:16:35.701559   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:35.702070   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:35.702096   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:35.702031   76539 retry.go:31] will retry after 1.685825115s: waiting for machine to come up
	I1204 21:16:37.390104   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:37.390474   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:37.390499   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:37.390433   76539 retry.go:31] will retry after 1.755395869s: waiting for machine to come up
	I1204 21:16:39.148189   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:39.148723   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:39.148754   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:39.148694   76539 retry.go:31] will retry after 2.645343215s: waiting for machine to come up
	I1204 21:16:41.796452   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:41.796909   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:41.796943   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:41.796881   76539 retry.go:31] will retry after 2.938505727s: waiting for machine to come up
	I1204 21:16:44.737247   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:44.737772   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:44.737796   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:44.737726   76539 retry.go:31] will retry after 5.554286056s: waiting for machine to come up
	I1204 21:16:50.293115   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.293594   75464 main.go:141] libmachine: (old-k8s-version-082859) Found IP for machine: 192.168.72.180
	I1204 21:16:50.293638   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has current primary IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.293651   75464 main.go:141] libmachine: (old-k8s-version-082859) Reserving static IP address...
	I1204 21:16:50.294066   75464 main.go:141] libmachine: (old-k8s-version-082859) Reserved static IP address: 192.168.72.180
	I1204 21:16:50.294102   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "old-k8s-version-082859", mac: "52:54:00:30:6e:ae", ip: "192.168.72.180"} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.294118   75464 main.go:141] libmachine: (old-k8s-version-082859) Waiting for SSH to be available...
	I1204 21:16:50.294148   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | skip adding static IP to network mk-old-k8s-version-082859 - found existing host DHCP lease matching {name: "old-k8s-version-082859", mac: "52:54:00:30:6e:ae", ip: "192.168.72.180"}
	I1204 21:16:50.294164   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | Getting to WaitForSSH function...
	I1204 21:16:50.296406   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.296738   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.296767   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.296893   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | Using SSH client type: external
	I1204 21:16:50.296917   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa (-rw-------)
	I1204 21:16:50.296949   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.180 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 21:16:50.296966   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | About to run SSH command:
	I1204 21:16:50.296978   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | exit 0
	I1204 21:16:50.419468   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | SSH cmd err, output: <nil>: 
	I1204 21:16:50.419834   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetConfigRaw
	I1204 21:16:50.420486   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetIP
	I1204 21:16:50.422797   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.423098   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.423123   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.423319   75464 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/config.json ...
	I1204 21:16:50.423555   75464 machine.go:93] provisionDockerMachine start ...
	I1204 21:16:50.423579   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:50.423793   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:50.426050   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.426372   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.426402   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.426520   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:50.426706   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.426886   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.427011   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:50.427208   75464 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:50.427439   75464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1204 21:16:50.427453   75464 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 21:16:50.527818   75464 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 21:16:50.527853   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetMachineName
	I1204 21:16:50.528150   75464 buildroot.go:166] provisioning hostname "old-k8s-version-082859"
	I1204 21:16:50.528188   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetMachineName
	I1204 21:16:50.528423   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:50.531470   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.531920   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.531949   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.532195   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:50.532400   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.532575   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.532733   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:50.532911   75464 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:50.533125   75464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1204 21:16:50.533138   75464 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-082859 && echo "old-k8s-version-082859" | sudo tee /etc/hostname
	I1204 21:16:50.653111   75464 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-082859
	
	I1204 21:16:50.653146   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:50.656340   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.656681   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.656715   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.656946   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:50.657161   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.657338   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.657493   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:50.657649   75464 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:50.657859   75464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1204 21:16:50.657879   75464 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-082859' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-082859/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-082859' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 21:16:50.772193   75464 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 21:16:50.772236   75464 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 21:16:50.772265   75464 buildroot.go:174] setting up certificates
	I1204 21:16:50.772282   75464 provision.go:84] configureAuth start
	I1204 21:16:50.772299   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetMachineName
	I1204 21:16:50.772611   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetIP
	I1204 21:16:50.775486   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.775889   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.775917   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.776053   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:50.778293   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.778611   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.778640   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.778859   75464 provision.go:143] copyHostCerts
	I1204 21:16:50.778920   75464 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 21:16:50.778934   75464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 21:16:50.778991   75464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 21:16:50.779093   75464 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 21:16:50.779106   75464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 21:16:50.779134   75464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 21:16:50.779279   75464 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 21:16:50.779291   75464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 21:16:50.779317   75464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 21:16:50.779411   75464 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-082859 san=[127.0.0.1 192.168.72.180 localhost minikube old-k8s-version-082859]
	I1204 21:16:50.991857   75464 provision.go:177] copyRemoteCerts
	I1204 21:16:50.991917   75464 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 21:16:50.991939   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:50.994612   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.994999   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.995028   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.995178   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:50.995427   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.995587   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:50.995731   75464 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa Username:docker}
	I1204 21:16:51.074162   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 21:16:51.097649   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1204 21:16:51.120589   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1204 21:16:51.143303   75464 provision.go:87] duration metric: took 371.008346ms to configureAuth
	I1204 21:16:51.143324   75464 buildroot.go:189] setting minikube options for container-runtime
	I1204 21:16:51.143500   75464 config.go:182] Loaded profile config "old-k8s-version-082859": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1204 21:16:51.143561   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:51.146357   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.146676   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:51.146715   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.146867   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:51.147061   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.147275   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.147480   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:51.147672   75464 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:51.147851   75464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1204 21:16:51.147872   75464 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 21:16:51.367959   75464 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 21:16:51.367992   75464 machine.go:96] duration metric: took 944.422035ms to provisionDockerMachine
	I1204 21:16:51.368004   75464 start.go:293] postStartSetup for "old-k8s-version-082859" (driver="kvm2")
	I1204 21:16:51.368014   75464 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 21:16:51.368030   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:51.368382   75464 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 21:16:51.368431   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:51.371253   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.371631   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:51.371667   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.371831   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:51.372033   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.372201   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:51.372338   75464 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa Username:docker}
	I1204 21:16:51.449712   75464 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 21:16:51.453668   75464 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 21:16:51.453694   75464 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 21:16:51.453771   75464 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 21:16:51.453867   75464 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 21:16:51.453995   75464 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 21:16:51.463766   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:16:51.486114   75464 start.go:296] duration metric: took 118.097017ms for postStartSetup
	I1204 21:16:51.486162   75464 fix.go:56] duration metric: took 23.090160362s for fixHost
	I1204 21:16:51.486190   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:51.488901   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.489286   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:51.489317   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.489450   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:51.489662   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.489835   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.489975   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:51.490137   75464 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:51.490373   75464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1204 21:16:51.490386   75464 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 21:16:51.587355   75464 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733347011.543416414
	
	I1204 21:16:51.587402   75464 fix.go:216] guest clock: 1733347011.543416414
	I1204 21:16:51.587413   75464 fix.go:229] Guest: 2024-12-04 21:16:51.543416414 +0000 UTC Remote: 2024-12-04 21:16:51.486170924 +0000 UTC m=+270.217910239 (delta=57.24549ms)
	I1204 21:16:51.587442   75464 fix.go:200] guest clock delta is within tolerance: 57.24549ms
	I1204 21:16:51.587450   75464 start.go:83] releasing machines lock for "old-k8s-version-082859", held for 23.191479372s
	I1204 21:16:51.587484   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:51.587753   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetIP
	I1204 21:16:51.590521   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.590901   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:51.590933   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.591076   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:51.591556   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:51.591757   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:51.591857   75464 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 21:16:51.591897   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:51.592007   75464 ssh_runner.go:195] Run: cat /version.json
	I1204 21:16:51.592024   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:51.594840   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.595093   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.595267   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:51.595303   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.595349   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:51.595425   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.595529   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:51.595614   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:51.595714   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.595851   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:51.595872   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.596038   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:51.596091   75464 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa Username:docker}
	I1204 21:16:51.596192   75464 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa Username:docker}
	I1204 21:16:51.695215   75464 ssh_runner.go:195] Run: systemctl --version
	I1204 21:16:51.700624   75464 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 21:16:51.849457   75464 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 21:16:51.856420   75464 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 21:16:51.856506   75464 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 21:16:51.876202   75464 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 21:16:51.876230   75464 start.go:495] detecting cgroup driver to use...
	I1204 21:16:51.876311   75464 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 21:16:51.894549   75464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 21:16:51.911154   75464 docker.go:217] disabling cri-docker service (if available) ...
	I1204 21:16:51.911218   75464 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 21:16:51.924220   75464 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 21:16:51.936675   75464 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 21:16:52.058517   75464 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 21:16:52.224124   75464 docker.go:233] disabling docker service ...
	I1204 21:16:52.224202   75464 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 21:16:52.239294   75464 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 21:16:52.253779   75464 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 21:16:52.384577   75464 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 21:16:52.515024   75464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 21:16:52.529456   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 21:16:52.551978   75464 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1204 21:16:52.552043   75464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:52.563083   75464 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 21:16:52.563165   75464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:52.573409   75464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:52.583614   75464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:52.594313   75464 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 21:16:52.604389   75464 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 21:16:52.613326   75464 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 21:16:52.613402   75464 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 21:16:52.627764   75464 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 21:16:52.637330   75464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:16:52.755111   75464 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 21:16:52.844027   75464 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 21:16:52.844093   75464 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 21:16:52.848602   75464 start.go:563] Will wait 60s for crictl version
	I1204 21:16:52.848676   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:52.852127   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 21:16:52.892934   75464 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 21:16:52.893076   75464 ssh_runner.go:195] Run: crio --version
	I1204 21:16:52.925376   75464 ssh_runner.go:195] Run: crio --version
	I1204 21:16:52.954480   75464 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1204 21:16:52.955897   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetIP
	I1204 21:16:52.958964   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:52.959353   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:52.959404   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:52.959641   75464 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1204 21:16:52.963601   75464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:16:52.975417   75464 kubeadm.go:883] updating cluster {Name:old-k8s-version-082859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-082859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 21:16:52.975578   75464 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1204 21:16:52.975644   75464 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:16:53.022050   75464 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1204 21:16:53.022128   75464 ssh_runner.go:195] Run: which lz4
	I1204 21:16:53.025986   75464 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1204 21:16:53.029928   75464 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1204 21:16:53.029962   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1204 21:16:54.579699   75464 crio.go:462] duration metric: took 1.553735037s to copy over tarball
	I1204 21:16:54.579783   75464 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1204 21:16:57.410381   75464 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.830568445s)
	I1204 21:16:57.410444   75464 crio.go:469] duration metric: took 2.830692434s to extract the tarball
	I1204 21:16:57.410455   75464 ssh_runner.go:146] rm: /preloaded.tar.lz4
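Because no preloaded images were found in the runtime, the cached image tarball is copied over and unpacked into /var, then removed. A simplified sketch of that extract-and-clean-up step (assuming the tarball is already at /preloaded.tar.lz4 and lz4 is available on the host):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	tarball := "/preloaded.tar.lz4"

	// Equivalent of the existence check in the log (stat before scp).
	if _, err := os.Stat(tarball); err != nil {
		log.Fatalf("preload tarball missing: %v", err)
	}

	// Unpack into /var, preserving security.capability xattrs and using lz4
	// as the decompressor, the same tar invocation shown in the log.
	cmd := exec.Command("tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}

	// Remove the tarball once the image store has been populated.
	if err := os.Remove(tarball); err != nil {
		log.Printf("cleanup failed: %v", err)
	}
}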
	I1204 21:16:57.452008   75464 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:16:57.484771   75464 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1204 21:16:57.484800   75464 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1204 21:16:57.484880   75464 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:16:57.484917   75464 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:57.484929   75464 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:57.484945   75464 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:57.484995   75464 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1204 21:16:57.484922   75464 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:57.485007   75464 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:57.485039   75464 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1204 21:16:57.486618   75464 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1204 21:16:57.486824   75464 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:57.486847   75464 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:57.486892   75464 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:16:57.486905   75464 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:57.486828   75464 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:57.486944   75464 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:57.486829   75464 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1204 21:16:57.655649   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:57.656853   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:57.667236   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:57.689357   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:57.698439   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1204 21:16:57.726269   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1204 21:16:57.727235   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:57.747271   75464 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1204 21:16:57.747329   75464 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:57.747332   75464 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1204 21:16:57.747364   75464 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:57.747500   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.747402   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.757217   75464 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1204 21:16:57.757260   75464 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:57.757319   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.800711   75464 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1204 21:16:57.800752   75464 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:57.800803   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.814692   75464 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1204 21:16:57.814738   75464 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1204 21:16:57.814789   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.829660   75464 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1204 21:16:57.829698   75464 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:57.829706   75464 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1204 21:16:57.829738   75464 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1204 21:16:57.829752   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.829764   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:57.829773   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.829821   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:57.829877   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:57.829909   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:57.829955   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1204 21:16:57.929510   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:57.929559   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:57.929579   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:57.929618   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1204 21:16:57.940211   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:57.940309   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:57.940359   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1204 21:16:58.051710   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1204 21:16:58.067494   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:58.067504   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:58.067573   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:58.083777   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1204 21:16:58.083833   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:58.083891   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:58.165786   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1204 21:16:58.229739   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1204 21:16:58.229803   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1204 21:16:58.229904   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:58.229951   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1204 21:16:58.230001   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1204 21:16:58.230045   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1204 21:16:58.261333   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1204 21:16:58.271293   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1204 21:16:58.405498   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:16:58.549255   75464 cache_images.go:92] duration metric: took 1.064434163s to LoadCachedImages
	W1204 21:16:58.549354   75464 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
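The cached-image fallback above follows a per-image pattern: ask podman for the image ID, and if it does not match the expected digest, remove the stale tag with crictl and re-load the image from the local cache directory; in this run the load fails because the cache files themselves are missing. A rough illustration of that decision (the cache layout, the expected hash, and the podman load call are assumptions for the sketch, not minikube's API):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// ensureImage mirrors the check/remove/load flow in the log: if the image is
// not present at the expected ID, drop the stale tag and load it from the
// on-disk cache. cacheDir and want are illustrative inputs.
func ensureImage(image, want, cacheDir string) error {
	out, _ := exec.Command("podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if strings.TrimSpace(string(out)) == want {
		return nil // already present at the right hash
	}
	// Stale or missing: remove the tag so a fresh copy can be loaded.
	_ = exec.Command("crictl", "rmi", image).Run()

	// e.g. registry.k8s.io/pause:3.2 -> <cacheDir>/registry.k8s.io/pause_3.2
	cached := filepath.Join(cacheDir, strings.Replace(image, ":", "_", 1))
	if _, err := os.Stat(cached); err != nil {
		return fmt.Errorf("load %s: %w", image, err) // the failure seen above
	}
	return exec.Command("podman", "load", "-i", cached).Run()
}

func main() {
	err := ensureImage("registry.k8s.io/pause:3.2",
		"80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c",
		"/home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}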
	I1204 21:16:58.549372   75464 kubeadm.go:934] updating node { 192.168.72.180 8443 v1.20.0 crio true true} ...
	I1204 21:16:58.549512   75464 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-082859 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-082859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 21:16:58.549591   75464 ssh_runner.go:195] Run: crio config
	I1204 21:16:58.610182   75464 cni.go:84] Creating CNI manager for ""
	I1204 21:16:58.610209   75464 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:16:58.610221   75464 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 21:16:58.610246   75464 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.180 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-082859 NodeName:old-k8s-version-082859 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1204 21:16:58.610432   75464 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.180
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-082859"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.180
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.180"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1204 21:16:58.610512   75464 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1204 21:16:58.620337   75464 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 21:16:58.620421   75464 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 21:16:58.629244   75464 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1204 21:16:58.654214   75464 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 21:16:58.671268   75464 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1204 21:16:58.688068   75464 ssh_runner.go:195] Run: grep 192.168.72.180	control-plane.minikube.internal$ /etc/hosts
	I1204 21:16:58.691513   75464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.180	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:16:58.703609   75464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:16:58.831984   75464 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:16:58.850324   75464 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859 for IP: 192.168.72.180
	I1204 21:16:58.850354   75464 certs.go:194] generating shared ca certs ...
	I1204 21:16:58.850382   75464 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:16:58.850592   75464 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 21:16:58.850658   75464 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 21:16:58.850677   75464 certs.go:256] generating profile certs ...
	I1204 21:16:58.850811   75464 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/client.key
	I1204 21:16:58.850892   75464 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/apiserver.key.8d7b2cb2
	I1204 21:16:58.850958   75464 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/proxy-client.key
	I1204 21:16:58.851169   75464 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 21:16:58.851232   75464 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 21:16:58.851249   75464 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 21:16:58.851294   75464 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 21:16:58.851343   75464 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 21:16:58.851420   75464 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 21:16:58.851508   75464 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:16:58.852607   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 21:16:58.880792   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 21:16:58.913556   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 21:16:58.943549   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 21:16:58.981463   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1204 21:16:59.012983   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 21:16:59.042980   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 21:16:59.077664   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 21:16:59.105764   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 21:16:59.129236   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 21:16:59.153845   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 21:16:59.177201   75464 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 21:16:59.193861   75464 ssh_runner.go:195] Run: openssl version
	I1204 21:16:59.199898   75464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 21:16:59.211323   75464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:16:59.215867   75464 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:16:59.215922   75464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:16:59.221792   75464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 21:16:59.232621   75464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 21:16:59.243171   75464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 21:16:59.247786   75464 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 21:16:59.247847   75464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 21:16:59.253293   75464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 21:16:59.264011   75464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 21:16:59.274696   75464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 21:16:59.279083   75464 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 21:16:59.279142   75464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 21:16:59.284885   75464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 21:16:59.295857   75464 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 21:16:59.300285   75464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 21:16:59.306222   75464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 21:16:59.312113   75464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 21:16:59.318289   75464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 21:16:59.323933   75464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 21:16:59.329593   75464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
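Each "openssl x509 -checkend 86400" call above asks whether a certificate will still be valid 24 hours from now. The same check can be expressed without shelling out, by parsing the PEM and comparing NotAfter; the path below is just one of the certificates checked in the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within d, the same question "openssl x509 -checkend" answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}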
	I1204 21:16:59.336271   75464 kubeadm.go:392] StartCluster: {Name:old-k8s-version-082859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-082859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:16:59.336388   75464 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 21:16:59.336445   75464 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:16:59.377102   75464 cri.go:89] found id: ""
	I1204 21:16:59.377186   75464 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 21:16:59.387322   75464 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1204 21:16:59.387348   75464 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1204 21:16:59.387426   75464 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1204 21:16:59.397012   75464 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1204 21:16:59.398490   75464 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-082859" does not appear in /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:16:59.399594   75464 kubeconfig.go:62] /home/jenkins/minikube-integration/19985-10581/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-082859" cluster setting kubeconfig missing "old-k8s-version-082859" context setting]
	I1204 21:16:59.401105   75464 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/kubeconfig: {Name:mk338cb7deb77a607d0c199d94a556bdfd19bef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:16:59.519931   75464 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1204 21:16:59.529805   75464 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.180
	I1204 21:16:59.529848   75464 kubeadm.go:1160] stopping kube-system containers ...
	I1204 21:16:59.529862   75464 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1204 21:16:59.529917   75464 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:16:59.564385   75464 cri.go:89] found id: ""
	I1204 21:16:59.564455   75464 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1204 21:16:59.580273   75464 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:16:59.590510   75464 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:16:59.590536   75464 kubeadm.go:157] found existing configuration files:
	
	I1204 21:16:59.590591   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:16:59.599597   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:16:59.599665   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:16:59.609075   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:16:59.618209   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:16:59.618281   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:16:59.627558   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:16:59.636062   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:16:59.636117   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:16:59.645337   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:16:59.653985   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:16:59.654027   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:16:59.662796   75464 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:16:59.671564   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:16:59.805252   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:00.525460   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:00.762769   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:00.873276   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
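The restart path re-runs individual kubeadm init phases against the freshly written /var/tmp/minikube/kubeadm.yaml instead of doing a full "kubeadm init", and the order matters: certs, then kubeconfigs, then kubelet-start, then control-plane and etcd. A condensed sketch of that same sequence, using the versioned kubeadm binary from the log:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Versioned kubeadm binary and the config written earlier in the log.
	kubeadm := "/var/lib/minikube/binaries/v1.20.0/kubeadm"
	config := "/var/tmp/minikube/kubeadm.yaml"

	// Same phase order as above: certs, kubeconfigs, kubelet, control plane, etcd.
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", config},
		{"init", "phase", "kubeconfig", "all", "--config", config},
		{"init", "phase", "kubelet-start", "--config", config},
		{"init", "phase", "control-plane", "all", "--config", config},
		{"init", "phase", "etcd", "local", "--config", config},
	}
	for _, args := range phases {
		cmd := exec.Command(kubeadm, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("%s %v: %v", kubeadm, args, err)
		}
	}
}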
	I1204 21:17:00.988761   75464 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:17:00.988887   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:01.489204   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:01.989039   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:02.489053   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:02.988923   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:03.489839   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:03.989130   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:04.489603   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:04.989625   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:05.489951   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:05.989787   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:06.489826   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:06.989767   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:07.489954   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:07.989772   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:08.488905   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:08.989834   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:09.489780   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:09.989021   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:10.489348   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:10.989123   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:11.488961   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:11.989692   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:12.489695   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:12.989533   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:13.489139   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:13.989580   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:14.488981   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:14.989089   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:15.489662   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:15.989301   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:16.489912   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:16.989712   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:17.489508   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:17.989874   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:18.489589   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:18.989133   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:19.489001   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:19.989088   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:20.489170   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:20.989135   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:21.489414   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:21.989078   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:22.488990   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:22.989053   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:23.489867   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:23.989164   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:24.489512   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:24.989912   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:25.489849   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:25.988925   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:26.489765   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:26.989037   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:27.489507   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:27.989848   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:28.489237   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:28.989067   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:29.488963   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:29.989855   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:30.489905   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:30.989109   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:31.489534   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:31.989033   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:32.489372   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:32.989005   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:33.489869   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:33.989236   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:34.489170   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:34.989059   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:35.489909   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:35.989870   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:36.489267   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:36.988973   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:37.489585   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:37.989309   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:38.489371   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:38.989360   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:39.489789   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:39.988900   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:40.489286   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:40.989034   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:41.489491   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:41.989889   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:42.489098   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:42.988954   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:43.489592   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:43.989849   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:44.489924   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:44.989734   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:45.489097   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:45.988947   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:46.489924   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:46.989100   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:47.489931   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:47.988925   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:48.489244   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:48.989937   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:49.489048   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:49.989699   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:50.489518   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:50.989032   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:51.489287   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:51.989952   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:52.489428   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:52.988991   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:53.489424   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:53.989785   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:54.488957   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:54.989777   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:55.489738   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:55.989144   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:56.489461   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:56.988952   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:57.489626   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:57.989474   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:58.489775   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:58.989218   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:59.489030   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:59.989163   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:00.489738   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
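The long run of pgrep calls above is the wait for the kube-apiserver process to appear, retried roughly every 500ms until a deadline; in this run it never shows up, so the code falls through to log collection below. A stripped-down version of that wait loop (the timeout and interval values are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep for a kube-apiserver process started by
// minikube, at the given interval, until the timeout expires.
func waitForAPIServer(timeout, interval time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Exit status 0 means at least one matching process exists.
		if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return true
		}
		time.Sleep(interval)
	}
	return false
}

func main() {
	if waitForAPIServer(time.Minute, 500*time.Millisecond) {
		fmt.Println("apiserver process is up")
	} else {
		fmt.Println("timed out waiting for apiserver; collecting logs instead")
	}
}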
	I1204 21:18:00.989048   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:00.989130   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:01.025049   75464 cri.go:89] found id: ""
	I1204 21:18:01.025100   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.025112   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:01.025124   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:01.025188   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:01.056420   75464 cri.go:89] found id: ""
	I1204 21:18:01.056444   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.056451   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:01.056456   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:01.056512   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:01.090847   75464 cri.go:89] found id: ""
	I1204 21:18:01.090872   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.090882   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:01.090889   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:01.090948   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:01.125984   75464 cri.go:89] found id: ""
	I1204 21:18:01.126013   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.126022   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:01.126030   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:01.126088   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:01.160828   75464 cri.go:89] found id: ""
	I1204 21:18:01.160856   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.160866   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:01.160873   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:01.160930   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:01.192601   75464 cri.go:89] found id: ""
	I1204 21:18:01.192629   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.192641   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:01.192649   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:01.192712   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:01.223093   75464 cri.go:89] found id: ""
	I1204 21:18:01.223119   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.223129   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:01.223136   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:01.223199   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:01.252668   75464 cri.go:89] found id: ""
	I1204 21:18:01.252692   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.252702   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:01.252713   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:01.252733   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:01.365301   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:01.365334   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:01.365348   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:01.440474   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:01.440503   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:01.475783   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:01.475815   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:01.525762   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:01.525791   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
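With the apiserver still missing, diagnostics are gathered from several sources: the kubelet and CRI-O journals, dmesg, kubectl describe nodes, and container status. A compact sketch that collects the same outputs (the commands are taken from the log; sufficient privileges and the same binary paths are assumed):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The diagnostic sources gathered above; each is a shell pipeline whose
	// output would be attached to the failure report.
	sources := []struct{ name, cmd string }{
		{"kubelet", "journalctl -u kubelet -n 400"},
		{"dmesg", "dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"describe nodes", "/var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
		{"CRI-O", "journalctl -u crio -n 400"},
		{"container status", "`which crictl || echo crictl` ps -a || docker ps -a"},
	}
	for _, s := range sources {
		out, err := exec.Command("sh", "-c", s.cmd).CombinedOutput()
		fmt.Printf("==> %s <==\n%s\n", s.name, out)
		if err != nil {
			fmt.Printf("(%s exited with error: %v)\n", s.name, err)
		}
	}
}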
	I1204 21:18:04.038867   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:04.050789   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:04.050856   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:04.083319   75464 cri.go:89] found id: ""
	I1204 21:18:04.083345   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.083354   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:04.083360   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:04.083442   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:04.119555   75464 cri.go:89] found id: ""
	I1204 21:18:04.119584   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.119595   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:04.119602   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:04.119661   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:04.152499   75464 cri.go:89] found id: ""
	I1204 21:18:04.152529   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.152538   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:04.152544   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:04.152592   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:04.184678   75464 cri.go:89] found id: ""
	I1204 21:18:04.184705   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.184716   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:04.184724   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:04.184784   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:04.220006   75464 cri.go:89] found id: ""
	I1204 21:18:04.220038   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.220050   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:04.220058   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:04.220121   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:04.254841   75464 cri.go:89] found id: ""
	I1204 21:18:04.254871   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.254880   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:04.254887   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:04.254954   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:04.289126   75464 cri.go:89] found id: ""
	I1204 21:18:04.289163   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.289175   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:04.289189   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:04.289255   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:04.323036   75464 cri.go:89] found id: ""
	I1204 21:18:04.323067   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.323077   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:04.323089   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:04.323103   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:04.371548   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:04.371585   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:04.384651   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:04.384681   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:04.452247   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:04.452273   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:04.452288   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:04.527924   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:04.527965   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:07.100780   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:07.113549   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:07.113617   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:07.150930   75464 cri.go:89] found id: ""
	I1204 21:18:07.150964   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.150976   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:07.150984   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:07.151046   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:07.185223   75464 cri.go:89] found id: ""
	I1204 21:18:07.185254   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.185264   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:07.185271   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:07.185332   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:07.222423   75464 cri.go:89] found id: ""
	I1204 21:18:07.222449   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.222458   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:07.222463   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:07.222526   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:07.258926   75464 cri.go:89] found id: ""
	I1204 21:18:07.258952   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.258960   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:07.258966   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:07.259022   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:07.292424   75464 cri.go:89] found id: ""
	I1204 21:18:07.292467   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.292478   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:07.292505   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:07.292566   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:07.323354   75464 cri.go:89] found id: ""
	I1204 21:18:07.323397   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.323409   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:07.323416   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:07.323462   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:07.352085   75464 cri.go:89] found id: ""
	I1204 21:18:07.352106   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.352114   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:07.352121   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:07.352177   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:07.383335   75464 cri.go:89] found id: ""
	I1204 21:18:07.383364   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.383386   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:07.383397   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:07.383410   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:07.469409   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:07.469440   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:07.508442   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:07.508468   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:07.555103   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:07.555133   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:07.568938   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:07.568965   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:07.632515   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
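	Every describe-nodes attempt above fails the same way: the bundled /var/lib/minikube/binaries/v1.20.0/kubectl reads the in-cluster kubeconfig, which points at localhost:8443, and nothing is listening there because no kube-apiserver container has come up. A quick way to confirm from inside the node that the port is simply closed (illustrative commands, not part of the test harness):

	# Illustrative: check whether anything is serving the apiserver port on the node.
	sudo ss -ltnp | grep ':8443' || echo "nothing listening on 8443"
	# If the apiserver were up, its health endpoint would answer (self-signed cert, hence -k):
	curl -ks https://localhost:8443/healthz || echo "connection refused or unhealthy"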
	I1204 21:18:10.133153   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:10.146482   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:10.146542   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:10.178660   75464 cri.go:89] found id: ""
	I1204 21:18:10.178694   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.178706   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:10.178714   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:10.178768   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:10.207815   75464 cri.go:89] found id: ""
	I1204 21:18:10.207836   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.207843   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:10.207849   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:10.207893   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:10.246253   75464 cri.go:89] found id: ""
	I1204 21:18:10.246283   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.246300   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:10.246307   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:10.246371   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:10.296820   75464 cri.go:89] found id: ""
	I1204 21:18:10.296862   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.296873   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:10.296881   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:10.296941   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:10.341855   75464 cri.go:89] found id: ""
	I1204 21:18:10.341885   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.341896   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:10.341904   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:10.341977   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:10.370283   75464 cri.go:89] found id: ""
	I1204 21:18:10.370311   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.370319   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:10.370324   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:10.370382   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:10.401149   75464 cri.go:89] found id: ""
	I1204 21:18:10.401177   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.401187   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:10.401195   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:10.401249   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:10.436026   75464 cri.go:89] found id: ""
	I1204 21:18:10.436058   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.436068   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:10.436082   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:10.436096   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:10.488499   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:10.488534   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:10.502316   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:10.502345   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:10.577694   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:10.577727   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:10.577754   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:10.657801   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:10.657835   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:13.195044   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:13.208486   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:13.208540   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:13.250608   75464 cri.go:89] found id: ""
	I1204 21:18:13.250632   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.250643   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:13.250650   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:13.250710   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:13.280897   75464 cri.go:89] found id: ""
	I1204 21:18:13.280922   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.280933   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:13.280940   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:13.281047   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:13.311664   75464 cri.go:89] found id: ""
	I1204 21:18:13.311686   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.311696   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:13.311702   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:13.311759   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:13.341158   75464 cri.go:89] found id: ""
	I1204 21:18:13.341187   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.341199   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:13.341206   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:13.341261   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:13.371887   75464 cri.go:89] found id: ""
	I1204 21:18:13.371908   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.371915   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:13.371922   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:13.371968   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:13.403036   75464 cri.go:89] found id: ""
	I1204 21:18:13.403064   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.403072   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:13.403077   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:13.403123   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:13.440657   75464 cri.go:89] found id: ""
	I1204 21:18:13.440682   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.440689   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:13.440694   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:13.440738   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:13.478384   75464 cri.go:89] found id: ""
	I1204 21:18:13.478413   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.478421   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:13.478430   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:13.478442   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:13.533364   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:13.533405   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:13.546299   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:13.546338   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:13.617067   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:13.617092   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:13.617108   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:13.697323   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:13.697355   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:16.235494   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:16.248551   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:16.248615   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:16.286875   75464 cri.go:89] found id: ""
	I1204 21:18:16.286904   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.286915   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:16.286922   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:16.286986   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:16.325441   75464 cri.go:89] found id: ""
	I1204 21:18:16.325469   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.325481   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:16.325486   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:16.325544   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:16.361896   75464 cri.go:89] found id: ""
	I1204 21:18:16.361919   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.361926   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:16.361932   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:16.361994   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:16.394290   75464 cri.go:89] found id: ""
	I1204 21:18:16.394315   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.394322   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:16.394328   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:16.394377   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:16.429685   75464 cri.go:89] found id: ""
	I1204 21:18:16.429713   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.429724   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:16.429731   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:16.429807   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:16.459942   75464 cri.go:89] found id: ""
	I1204 21:18:16.459982   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.459993   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:16.460000   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:16.460065   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:16.488957   75464 cri.go:89] found id: ""
	I1204 21:18:16.488982   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.488992   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:16.489005   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:16.489060   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:16.518311   75464 cri.go:89] found id: ""
	I1204 21:18:16.518346   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.518357   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:16.518369   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:16.518382   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:16.569753   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:16.569784   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:16.583689   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:16.583721   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:16.650086   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:16.650107   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:16.650120   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:16.732000   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:16.732046   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:19.270288   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:19.283231   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:19.283322   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:19.320680   75464 cri.go:89] found id: ""
	I1204 21:18:19.320712   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.320724   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:19.320732   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:19.320799   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:19.358318   75464 cri.go:89] found id: ""
	I1204 21:18:19.358352   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.358363   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:19.358370   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:19.358431   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:19.391181   75464 cri.go:89] found id: ""
	I1204 21:18:19.391208   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.391218   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:19.391224   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:19.391285   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:19.422319   75464 cri.go:89] found id: ""
	I1204 21:18:19.422345   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.422355   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:19.422362   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:19.422422   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:19.452909   75464 cri.go:89] found id: ""
	I1204 21:18:19.452941   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.452952   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:19.452960   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:19.453017   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:19.483548   75464 cri.go:89] found id: ""
	I1204 21:18:19.483582   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.483592   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:19.483600   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:19.483666   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:19.518776   75464 cri.go:89] found id: ""
	I1204 21:18:19.518810   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.518821   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:19.518828   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:19.518889   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:19.552455   75464 cri.go:89] found id: ""
	I1204 21:18:19.552487   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.552500   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:19.552513   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:19.552527   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:19.567348   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:19.567397   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:19.640782   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:19.640803   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:19.640815   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:19.721369   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:19.721400   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:19.765558   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:19.765590   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:22.315311   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:22.327974   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:22.328053   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:22.361960   75464 cri.go:89] found id: ""
	I1204 21:18:22.361984   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.361995   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:22.362002   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:22.362056   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:22.393481   75464 cri.go:89] found id: ""
	I1204 21:18:22.393506   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.393514   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:22.393520   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:22.393570   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:22.424233   75464 cri.go:89] found id: ""
	I1204 21:18:22.424261   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.424273   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:22.424280   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:22.424335   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:22.454307   75464 cri.go:89] found id: ""
	I1204 21:18:22.454335   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.454346   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:22.454354   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:22.454405   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:22.485880   75464 cri.go:89] found id: ""
	I1204 21:18:22.485905   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.485913   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:22.485918   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:22.485971   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:22.522382   75464 cri.go:89] found id: ""
	I1204 21:18:22.522408   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.522416   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:22.522421   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:22.522475   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:22.555179   75464 cri.go:89] found id: ""
	I1204 21:18:22.555202   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.555210   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:22.555215   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:22.555266   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:22.588587   75464 cri.go:89] found id: ""
	I1204 21:18:22.588608   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.588615   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:22.588622   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:22.588632   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:22.640369   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:22.640393   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:22.652322   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:22.652342   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:22.716150   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:22.716175   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:22.716195   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:22.792723   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:22.792749   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:25.329963   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:25.342514   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:25.342563   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:25.374518   75464 cri.go:89] found id: ""
	I1204 21:18:25.374543   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.374555   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:25.374562   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:25.374620   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:25.405479   75464 cri.go:89] found id: ""
	I1204 21:18:25.405520   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.405531   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:25.405538   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:25.405601   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:25.436844   75464 cri.go:89] found id: ""
	I1204 21:18:25.436867   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.436877   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:25.436884   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:25.436943   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:25.468887   75464 cri.go:89] found id: ""
	I1204 21:18:25.468910   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.468917   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:25.468923   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:25.468977   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:25.504326   75464 cri.go:89] found id: ""
	I1204 21:18:25.504348   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.504355   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:25.504361   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:25.504410   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:25.542531   75464 cri.go:89] found id: ""
	I1204 21:18:25.542552   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.542560   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:25.542566   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:25.542626   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:25.576293   75464 cri.go:89] found id: ""
	I1204 21:18:25.576316   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.576330   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:25.576338   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:25.576389   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:25.609662   75464 cri.go:89] found id: ""
	I1204 21:18:25.609692   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.609700   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:25.609708   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:25.609724   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:25.665411   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:25.665446   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:25.680149   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:25.680183   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:25.751100   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:25.751123   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:25.751140   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:25.838913   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:25.838952   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:28.379209   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:28.392708   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:28.392771   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:28.426519   75464 cri.go:89] found id: ""
	I1204 21:18:28.426547   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.426555   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:28.426561   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:28.426608   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:28.459648   75464 cri.go:89] found id: ""
	I1204 21:18:28.459678   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.459689   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:28.459696   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:28.459757   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:28.489982   75464 cri.go:89] found id: ""
	I1204 21:18:28.490010   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.490021   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:28.490029   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:28.490101   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:28.525203   75464 cri.go:89] found id: ""
	I1204 21:18:28.525228   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.525235   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:28.525240   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:28.525285   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:28.554808   75464 cri.go:89] found id: ""
	I1204 21:18:28.554836   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.554845   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:28.554850   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:28.554911   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:28.586406   75464 cri.go:89] found id: ""
	I1204 21:18:28.586427   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.586434   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:28.586441   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:28.586484   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:28.622419   75464 cri.go:89] found id: ""
	I1204 21:18:28.622444   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.622455   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:28.622462   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:28.622520   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:28.651604   75464 cri.go:89] found id: ""
	I1204 21:18:28.651625   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.651632   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:28.651639   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:28.651654   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:28.714430   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:28.714458   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:28.714473   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:28.791444   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:28.791472   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:28.827808   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:28.827831   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:28.875308   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:28.875336   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:31.388578   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:31.401539   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:31.401598   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:31.443462   75464 cri.go:89] found id: ""
	I1204 21:18:31.443496   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.443504   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:31.443509   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:31.443557   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:31.482522   75464 cri.go:89] found id: ""
	I1204 21:18:31.482548   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.482559   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:31.482568   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:31.482623   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:31.520579   75464 cri.go:89] found id: ""
	I1204 21:18:31.520609   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.520618   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:31.520624   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:31.520684   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:31.559637   75464 cri.go:89] found id: ""
	I1204 21:18:31.559683   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.559692   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:31.559699   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:31.559761   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:31.592633   75464 cri.go:89] found id: ""
	I1204 21:18:31.592665   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.592677   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:31.592685   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:31.592748   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:31.627002   75464 cri.go:89] found id: ""
	I1204 21:18:31.627022   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.627029   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:31.627035   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:31.627083   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:31.663333   75464 cri.go:89] found id: ""
	I1204 21:18:31.663380   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.663392   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:31.663400   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:31.663465   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:31.697813   75464 cri.go:89] found id: ""
	I1204 21:18:31.697848   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.697860   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:31.697869   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:31.697882   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:31.747666   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:31.747701   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:31.761371   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:31.761402   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:31.831098   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:31.831123   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:31.831143   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:31.912161   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:31.912199   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:34.450322   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:34.463442   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:34.463503   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:34.497333   75464 cri.go:89] found id: ""
	I1204 21:18:34.497363   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.497371   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:34.497377   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:34.497449   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:34.531057   75464 cri.go:89] found id: ""
	I1204 21:18:34.531093   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.531105   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:34.531113   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:34.531180   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:34.566899   75464 cri.go:89] found id: ""
	I1204 21:18:34.566926   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.566934   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:34.566940   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:34.566989   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:34.600393   75464 cri.go:89] found id: ""
	I1204 21:18:34.600422   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.600430   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:34.600436   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:34.600503   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:34.636027   75464 cri.go:89] found id: ""
	I1204 21:18:34.636060   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.636072   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:34.636082   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:34.636159   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:34.670624   75464 cri.go:89] found id: ""
	I1204 21:18:34.670650   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.670658   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:34.670666   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:34.670727   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:34.702209   75464 cri.go:89] found id: ""
	I1204 21:18:34.702241   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.702253   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:34.702261   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:34.702330   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:34.733135   75464 cri.go:89] found id: ""
	I1204 21:18:34.733156   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.733174   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:34.733191   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:34.733207   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:34.768969   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:34.768993   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:34.816493   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:34.816531   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:34.829450   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:34.829476   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:34.897968   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:34.898000   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:34.898018   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
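	With no control-plane containers to inspect, each iteration above falls back to ordinary host tools for diagnostics: journalctl for the kubelet and crio units, a filtered dmesg, and crictl ps -a with a docker ps -a fallback. The same data can be pulled manually over minikube ssh; a rough sketch, where <profile> is a placeholder for the cluster profile name and the -n line counts simply mirror the log:

	# Illustrative: collect the same diagnostics the harness gathers (replace <profile>).
	minikube -p <profile> ssh -- "sudo journalctl -u kubelet -n 400"
	minikube -p <profile> ssh -- "sudo journalctl -u crio -n 400"
	minikube -p <profile> ssh -- "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	minikube -p <profile> ssh -- "sudo crictl ps -a || sudo docker ps -a"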
	I1204 21:18:37.477937   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:37.491778   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:37.491856   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:37.529962   75464 cri.go:89] found id: ""
	I1204 21:18:37.529995   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.530005   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:37.530013   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:37.530081   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:37.564769   75464 cri.go:89] found id: ""
	I1204 21:18:37.564794   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.564805   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:37.564813   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:37.564879   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:37.601680   75464 cri.go:89] found id: ""
	I1204 21:18:37.601708   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.601720   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:37.601726   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:37.601796   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:37.637221   75464 cri.go:89] found id: ""
	I1204 21:18:37.637247   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.637255   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:37.637261   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:37.637326   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:37.673103   75464 cri.go:89] found id: ""
	I1204 21:18:37.673127   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.673135   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:37.673140   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:37.673200   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:37.710108   75464 cri.go:89] found id: ""
	I1204 21:18:37.710134   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.710147   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:37.710154   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:37.710216   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:37.741506   75464 cri.go:89] found id: ""
	I1204 21:18:37.741530   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.741538   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:37.741544   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:37.741596   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:37.775320   75464 cri.go:89] found id: ""
	I1204 21:18:37.775343   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.775350   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:37.775358   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:37.775389   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:37.839591   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:37.839610   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:37.839633   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:37.915174   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:37.915216   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:37.958900   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:37.958930   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:38.010383   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:38.010418   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:40.525306   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:40.537648   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:40.537706   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:40.573932   75464 cri.go:89] found id: ""
	I1204 21:18:40.573962   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.573973   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:40.573980   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:40.574041   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:40.603917   75464 cri.go:89] found id: ""
	I1204 21:18:40.603943   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.603952   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:40.603961   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:40.604018   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:40.636601   75464 cri.go:89] found id: ""
	I1204 21:18:40.636630   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.636641   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:40.636649   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:40.636710   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:40.673040   75464 cri.go:89] found id: ""
	I1204 21:18:40.673073   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.673085   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:40.673093   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:40.673158   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:40.705330   75464 cri.go:89] found id: ""
	I1204 21:18:40.705357   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.705364   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:40.705371   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:40.705434   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:40.738099   75464 cri.go:89] found id: ""
	I1204 21:18:40.738123   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.738130   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:40.738137   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:40.738184   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:40.770558   75464 cri.go:89] found id: ""
	I1204 21:18:40.770583   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.770590   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:40.770596   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:40.770656   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:40.803461   75464 cri.go:89] found id: ""
	I1204 21:18:40.803489   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.803501   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:40.803512   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:40.803529   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:40.852684   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:40.852726   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:40.865768   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:40.865795   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:40.932542   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:40.932569   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:40.932587   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:41.013378   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:41.013419   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:43.552845   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:43.567081   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:43.567149   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:43.600562   75464 cri.go:89] found id: ""
	I1204 21:18:43.600595   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.600605   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:43.600618   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:43.600683   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:43.638922   75464 cri.go:89] found id: ""
	I1204 21:18:43.638955   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.638965   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:43.638972   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:43.639037   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:43.674473   75464 cri.go:89] found id: ""
	I1204 21:18:43.674501   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.674509   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:43.674516   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:43.674569   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:43.721312   75464 cri.go:89] found id: ""
	I1204 21:18:43.721339   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.721350   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:43.721357   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:43.721420   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:43.760113   75464 cri.go:89] found id: ""
	I1204 21:18:43.760150   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.760161   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:43.760169   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:43.760233   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:43.794383   75464 cri.go:89] found id: ""
	I1204 21:18:43.794410   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.794418   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:43.794423   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:43.794475   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:43.826611   75464 cri.go:89] found id: ""
	I1204 21:18:43.826646   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.826657   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:43.826666   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:43.826728   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:43.859459   75464 cri.go:89] found id: ""
	I1204 21:18:43.859489   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.859496   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:43.859505   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:43.859518   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:43.871740   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:43.871762   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:43.940838   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:43.940862   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:43.940874   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:44.018931   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:44.018967   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:44.054754   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:44.054786   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:46.614407   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:46.627953   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:46.628009   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:46.662223   75464 cri.go:89] found id: ""
	I1204 21:18:46.662254   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.662263   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:46.662268   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:46.662333   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:46.695931   75464 cri.go:89] found id: ""
	I1204 21:18:46.695955   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.695963   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:46.695969   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:46.696014   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:46.728731   75464 cri.go:89] found id: ""
	I1204 21:18:46.728761   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.728773   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:46.728780   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:46.728841   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:46.762466   75464 cri.go:89] found id: ""
	I1204 21:18:46.762491   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.762499   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:46.762544   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:46.762613   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:46.797253   75464 cri.go:89] found id: ""
	I1204 21:18:46.797279   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.797288   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:46.797295   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:46.797357   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:46.833757   75464 cri.go:89] found id: ""
	I1204 21:18:46.833783   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.833790   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:46.833797   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:46.833845   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:46.865105   75464 cri.go:89] found id: ""
	I1204 21:18:46.865135   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.865147   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:46.865154   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:46.865212   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:46.896358   75464 cri.go:89] found id: ""
	I1204 21:18:46.896385   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.896397   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:46.896408   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:46.896426   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:46.932507   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:46.932536   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:46.985490   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:46.985517   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:46.999509   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:46.999538   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:47.075096   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:47.075119   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:47.075133   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
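
The pattern above repeats on a short interval: minikube probes for a kube-apiserver process, asks CRI-O for each control-plane container by name, and retries `kubectl describe nodes`, which keeps failing because nothing answers on localhost:8443. A minimal sketch of the same checks run by hand inside the node (illustrative only; the commands are the ones shown in the log, and reaching the node via `minikube ssh` is an assumption, since the profile name is not part of this excerpt):

	# Run inside the minikube VM (e.g. via `minikube ssh`); illustrative triage only.
	# Is a kube-apiserver process running at all?
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"

	# Does CRI-O know about any control-plane containers?
	for name in kube-apiserver etcd kube-scheduler kube-controller-manager; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  echo "$name: ${ids:-<none>}"
	done

	# The describe-nodes step fails for the same reason: nothing listens on 8443.
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig \
	  || echo "apiserver on localhost:8443 is not reachable"
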
	I1204 21:18:49.654450   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:49.667708   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:49.667761   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:49.699864   75464 cri.go:89] found id: ""
	I1204 21:18:49.699885   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.699894   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:49.699902   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:49.699954   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:49.732972   75464 cri.go:89] found id: ""
	I1204 21:18:49.732996   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.733004   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:49.733009   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:49.733055   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:49.765103   75464 cri.go:89] found id: ""
	I1204 21:18:49.765124   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.765135   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:49.765142   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:49.765208   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:49.796309   75464 cri.go:89] found id: ""
	I1204 21:18:49.796330   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.796337   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:49.796343   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:49.796401   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:49.826818   75464 cri.go:89] found id: ""
	I1204 21:18:49.826844   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.826855   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:49.826863   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:49.826921   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:49.879437   75464 cri.go:89] found id: ""
	I1204 21:18:49.879463   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.879471   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:49.879477   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:49.879525   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:49.910837   75464 cri.go:89] found id: ""
	I1204 21:18:49.910862   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.910872   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:49.910878   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:49.910937   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:49.941894   75464 cri.go:89] found id: ""
	I1204 21:18:49.941918   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.941927   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:49.941937   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:49.941950   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:49.994300   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:49.994339   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:50.008171   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:50.008207   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:50.083770   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:50.083799   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:50.083815   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:50.161338   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:50.161371   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:52.699023   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:52.711524   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:52.711599   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:52.744668   75464 cri.go:89] found id: ""
	I1204 21:18:52.744703   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.744715   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:52.744724   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:52.744794   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:52.780504   75464 cri.go:89] found id: ""
	I1204 21:18:52.780529   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.780537   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:52.780546   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:52.780596   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:52.811678   75464 cri.go:89] found id: ""
	I1204 21:18:52.811704   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.811721   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:52.811749   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:52.811815   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:52.849178   75464 cri.go:89] found id: ""
	I1204 21:18:52.849205   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.849216   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:52.849223   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:52.849285   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:52.881715   75464 cri.go:89] found id: ""
	I1204 21:18:52.881740   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.881748   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:52.881753   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:52.881801   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:52.912463   75464 cri.go:89] found id: ""
	I1204 21:18:52.912484   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.912493   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:52.912498   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:52.912541   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:52.941846   75464 cri.go:89] found id: ""
	I1204 21:18:52.941867   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.941874   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:52.941879   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:52.941933   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:52.972043   75464 cri.go:89] found id: ""
	I1204 21:18:52.972067   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.972075   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:52.972083   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:52.972092   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:53.022049   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:53.022078   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:53.034971   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:53.034998   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:53.105058   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:53.105080   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:53.105092   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:53.185050   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:53.185086   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:55.724189   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:55.737378   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:55.737439   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:55.772286   75464 cri.go:89] found id: ""
	I1204 21:18:55.772311   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.772319   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:55.772324   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:55.772375   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:55.805040   75464 cri.go:89] found id: ""
	I1204 21:18:55.805061   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.805070   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:55.805075   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:55.805124   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:55.836500   75464 cri.go:89] found id: ""
	I1204 21:18:55.836528   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.836539   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:55.836553   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:55.836624   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:55.869715   75464 cri.go:89] found id: ""
	I1204 21:18:55.869740   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.869749   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:55.869754   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:55.869810   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:55.901596   75464 cri.go:89] found id: ""
	I1204 21:18:55.901623   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.901634   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:55.901641   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:55.901705   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:55.931865   75464 cri.go:89] found id: ""
	I1204 21:18:55.931890   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.931900   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:55.931907   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:55.931971   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:55.962990   75464 cri.go:89] found id: ""
	I1204 21:18:55.963016   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.963025   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:55.963030   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:55.963081   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:55.992110   75464 cri.go:89] found id: ""
	I1204 21:18:55.992132   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.992141   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:55.992149   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:55.992159   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:56.027234   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:56.027271   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:56.080250   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:56.080300   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:56.095943   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:56.095972   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:56.166704   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:56.166732   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:56.166744   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:58.745119   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:58.758304   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:58.758365   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:58.797221   75464 cri.go:89] found id: ""
	I1204 21:18:58.797245   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.797256   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:58.797264   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:58.797325   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:58.833333   75464 cri.go:89] found id: ""
	I1204 21:18:58.833358   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.833368   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:58.833374   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:58.833431   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:58.867765   75464 cri.go:89] found id: ""
	I1204 21:18:58.867790   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.867802   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:58.867810   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:58.867874   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:58.900290   75464 cri.go:89] found id: ""
	I1204 21:18:58.900326   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.900335   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:58.900386   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:58.900441   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:58.934627   75464 cri.go:89] found id: ""
	I1204 21:18:58.934660   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.934672   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:58.934679   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:58.934743   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:58.967410   75464 cri.go:89] found id: ""
	I1204 21:18:58.967442   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.967455   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:58.967463   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:58.967534   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:58.997635   75464 cri.go:89] found id: ""
	I1204 21:18:58.997665   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.997678   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:58.997685   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:58.997742   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:59.032135   75464 cri.go:89] found id: ""
	I1204 21:18:59.032162   75464 logs.go:282] 0 containers: []
	W1204 21:18:59.032181   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:59.032190   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:59.032214   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:59.101453   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:59.101477   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:59.101490   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:59.182218   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:59.182266   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:59.218062   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:59.218088   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:59.269536   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:59.269567   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:01.784237   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:01.797810   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:01.797888   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:01.833235   75464 cri.go:89] found id: ""
	I1204 21:19:01.833267   75464 logs.go:282] 0 containers: []
	W1204 21:19:01.833279   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:01.833287   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:01.833345   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:01.866869   75464 cri.go:89] found id: ""
	I1204 21:19:01.866898   75464 logs.go:282] 0 containers: []
	W1204 21:19:01.866906   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:01.866912   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:01.866962   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:01.905512   75464 cri.go:89] found id: ""
	I1204 21:19:01.905539   75464 logs.go:282] 0 containers: []
	W1204 21:19:01.905547   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:01.905552   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:01.905608   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:01.940519   75464 cri.go:89] found id: ""
	I1204 21:19:01.940540   75464 logs.go:282] 0 containers: []
	W1204 21:19:01.940548   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:01.940554   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:01.940599   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:01.968900   75464 cri.go:89] found id: ""
	I1204 21:19:01.968922   75464 logs.go:282] 0 containers: []
	W1204 21:19:01.968931   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:01.968938   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:01.968986   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:02.011007   75464 cri.go:89] found id: ""
	I1204 21:19:02.011032   75464 logs.go:282] 0 containers: []
	W1204 21:19:02.011039   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:02.011045   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:02.011097   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:02.069395   75464 cri.go:89] found id: ""
	I1204 21:19:02.069422   75464 logs.go:282] 0 containers: []
	W1204 21:19:02.069432   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:02.069438   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:02.069483   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:02.116103   75464 cri.go:89] found id: ""
	I1204 21:19:02.116129   75464 logs.go:282] 0 containers: []
	W1204 21:19:02.116141   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:02.116151   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:02.116162   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:02.152582   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:02.152617   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:02.207765   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:02.207796   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:02.221923   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:02.221946   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:02.286568   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:02.286593   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:02.286608   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:04.861905   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:04.875045   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:04.875106   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:04.907565   75464 cri.go:89] found id: ""
	I1204 21:19:04.907591   75464 logs.go:282] 0 containers: []
	W1204 21:19:04.907601   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:04.907609   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:04.907667   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:04.937783   75464 cri.go:89] found id: ""
	I1204 21:19:04.937801   75464 logs.go:282] 0 containers: []
	W1204 21:19:04.937808   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:04.937813   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:04.937855   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:04.974668   75464 cri.go:89] found id: ""
	I1204 21:19:04.974695   75464 logs.go:282] 0 containers: []
	W1204 21:19:04.974703   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:04.974708   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:04.974764   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:05.008970   75464 cri.go:89] found id: ""
	I1204 21:19:05.008996   75464 logs.go:282] 0 containers: []
	W1204 21:19:05.009008   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:05.009016   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:05.009078   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:05.044719   75464 cri.go:89] found id: ""
	I1204 21:19:05.044748   75464 logs.go:282] 0 containers: []
	W1204 21:19:05.044757   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:05.044765   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:05.044834   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:05.082492   75464 cri.go:89] found id: ""
	I1204 21:19:05.082518   75464 logs.go:282] 0 containers: []
	W1204 21:19:05.082527   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:05.082533   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:05.082594   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:05.115540   75464 cri.go:89] found id: ""
	I1204 21:19:05.115569   75464 logs.go:282] 0 containers: []
	W1204 21:19:05.115578   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:05.115584   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:05.115643   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:05.150064   75464 cri.go:89] found id: ""
	I1204 21:19:05.150088   75464 logs.go:282] 0 containers: []
	W1204 21:19:05.150096   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:05.150104   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:05.150116   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:05.220591   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:05.220619   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:05.220635   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:05.298237   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:05.298269   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:05.337286   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:05.337312   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:05.394282   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:05.394313   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:07.907153   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:07.923906   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:07.923967   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:07.969672   75464 cri.go:89] found id: ""
	I1204 21:19:07.969698   75464 logs.go:282] 0 containers: []
	W1204 21:19:07.969706   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:07.969712   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:07.969761   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:08.019452   75464 cri.go:89] found id: ""
	I1204 21:19:08.019488   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.019496   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:08.019502   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:08.019551   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:08.064730   75464 cri.go:89] found id: ""
	I1204 21:19:08.064757   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.064766   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:08.064771   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:08.064822   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:08.097390   75464 cri.go:89] found id: ""
	I1204 21:19:08.097415   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.097424   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:08.097430   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:08.097481   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:08.134612   75464 cri.go:89] found id: ""
	I1204 21:19:08.134640   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.134649   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:08.134655   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:08.134706   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:08.167328   75464 cri.go:89] found id: ""
	I1204 21:19:08.167355   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.167363   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:08.167380   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:08.167447   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:08.196379   75464 cri.go:89] found id: ""
	I1204 21:19:08.196401   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.196411   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:08.196419   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:08.196475   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:08.227953   75464 cri.go:89] found id: ""
	I1204 21:19:08.227983   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.227994   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:08.228007   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:08.228021   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:08.304644   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:08.304672   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:08.340803   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:08.340835   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:08.392000   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:08.392034   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:08.405498   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:08.405533   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:08.472505   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
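
Every control-plane query in these cycles returns an empty container list, so the actionable detail is in the kubelet and CRI-O journals that the loop keeps collecting. A short sketch of pulling just the error-level entries by hand (assumes the systemd unit names `kubelet` and `crio` used in the commands above; illustrative, not part of the test output):

	# On the node: filter the same journals the loop gathers down to errors.
	sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail' | tail -n 40
	sudo journalctl -u crio -n 400 --no-pager | grep -iE 'error|fail' | tail -n 40

	# Kernel messages at warning level and above (same filter the log applies).
	sudo dmesg --level warn,err,crit,alert,emerg | tail -n 40
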
	I1204 21:19:10.972755   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:10.986250   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:10.986316   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:11.020562   75464 cri.go:89] found id: ""
	I1204 21:19:11.020590   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.020601   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:11.020609   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:11.020671   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:11.052966   75464 cri.go:89] found id: ""
	I1204 21:19:11.052989   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.052999   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:11.053006   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:11.053062   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:11.085999   75464 cri.go:89] found id: ""
	I1204 21:19:11.086025   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.086032   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:11.086038   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:11.086085   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:11.125104   75464 cri.go:89] found id: ""
	I1204 21:19:11.125134   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.125145   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:11.125152   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:11.125207   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:11.161373   75464 cri.go:89] found id: ""
	I1204 21:19:11.161406   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.161418   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:11.161426   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:11.161487   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:11.192514   75464 cri.go:89] found id: ""
	I1204 21:19:11.192541   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.192552   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:11.192559   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:11.192617   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:11.225497   75464 cri.go:89] found id: ""
	I1204 21:19:11.225514   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.225522   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:11.225528   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:11.225573   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:11.258695   75464 cri.go:89] found id: ""
	I1204 21:19:11.258718   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.258730   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:11.258740   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:11.258753   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:11.292427   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:11.292456   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:11.346115   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:11.346143   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:11.360086   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:11.360110   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:11.430194   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:11.430216   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:11.430228   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:14.011320   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:14.024214   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:14.024281   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:14.060155   75464 cri.go:89] found id: ""
	I1204 21:19:14.060184   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.060196   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:14.060204   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:14.060269   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:14.095483   75464 cri.go:89] found id: ""
	I1204 21:19:14.095524   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.095536   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:14.095544   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:14.095621   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:14.130533   75464 cri.go:89] found id: ""
	I1204 21:19:14.130565   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.130573   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:14.130579   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:14.130650   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:14.167349   75464 cri.go:89] found id: ""
	I1204 21:19:14.167386   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.167397   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:14.167405   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:14.167477   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:14.200197   75464 cri.go:89] found id: ""
	I1204 21:19:14.200229   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.200240   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:14.200247   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:14.200315   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:14.233664   75464 cri.go:89] found id: ""
	I1204 21:19:14.233696   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.233707   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:14.233715   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:14.233779   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:14.268193   75464 cri.go:89] found id: ""
	I1204 21:19:14.268232   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.268243   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:14.268250   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:14.268311   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:14.305771   75464 cri.go:89] found id: ""
	I1204 21:19:14.305804   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.305813   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:14.305822   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:14.305834   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:14.361227   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:14.361274   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:14.375013   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:14.375046   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:14.444904   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:14.444945   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:14.444958   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:14.523934   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:14.523969   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:17.063306   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:17.076624   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:17.076675   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:17.110681   75464 cri.go:89] found id: ""
	I1204 21:19:17.110721   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.110744   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:17.110756   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:17.110816   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:17.150695   75464 cri.go:89] found id: ""
	I1204 21:19:17.150716   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.150724   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:17.150730   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:17.150777   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:17.187712   75464 cri.go:89] found id: ""
	I1204 21:19:17.187745   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.187757   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:17.187765   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:17.187826   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:17.220349   75464 cri.go:89] found id: ""
	I1204 21:19:17.220377   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.220388   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:17.220396   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:17.220463   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:17.254691   75464 cri.go:89] found id: ""
	I1204 21:19:17.254724   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.254736   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:17.254746   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:17.254869   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:17.287163   75464 cri.go:89] found id: ""
	I1204 21:19:17.287191   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.287200   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:17.287206   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:17.287264   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:17.318924   75464 cri.go:89] found id: ""
	I1204 21:19:17.318949   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.318957   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:17.318963   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:17.319011   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:17.351074   75464 cri.go:89] found id: ""
	I1204 21:19:17.351106   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.351119   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:17.351128   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:17.351143   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:17.404999   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:17.405037   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:17.419781   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:17.419814   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:17.485638   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:17.485659   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:17.485670   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:17.568851   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:17.568885   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:20.107005   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:20.120184   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:20.120257   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:20.153375   75464 cri.go:89] found id: ""
	I1204 21:19:20.153404   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.153413   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:20.153419   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:20.153475   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:20.192102   75464 cri.go:89] found id: ""
	I1204 21:19:20.192129   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.192141   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:20.192148   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:20.192213   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:20.235702   75464 cri.go:89] found id: ""
	I1204 21:19:20.235730   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.235740   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:20.235747   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:20.235823   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:20.272357   75464 cri.go:89] found id: ""
	I1204 21:19:20.272385   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.272397   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:20.272406   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:20.272477   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:20.307784   75464 cri.go:89] found id: ""
	I1204 21:19:20.307809   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.307820   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:20.307827   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:20.307889   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:20.339469   75464 cri.go:89] found id: ""
	I1204 21:19:20.339504   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.339514   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:20.339522   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:20.339586   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:20.369973   75464 cri.go:89] found id: ""
	I1204 21:19:20.369996   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.370003   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:20.370010   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:20.370081   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:20.400569   75464 cri.go:89] found id: ""
	I1204 21:19:20.400589   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.400596   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:20.400604   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:20.400618   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:20.449274   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:20.449316   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:20.463556   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:20.463589   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:20.534760   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:20.534779   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:20.534791   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:20.613205   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:20.613234   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:23.149411   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:23.163040   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:23.163104   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:23.198689   75464 cri.go:89] found id: ""
	I1204 21:19:23.198721   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.198730   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:23.198736   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:23.198789   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:23.229754   75464 cri.go:89] found id: ""
	I1204 21:19:23.229783   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.229792   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:23.229797   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:23.229867   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:23.263366   75464 cri.go:89] found id: ""
	I1204 21:19:23.263406   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.263418   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:23.263425   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:23.263523   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:23.308773   75464 cri.go:89] found id: ""
	I1204 21:19:23.308797   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.308805   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:23.308811   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:23.308858   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:23.344573   75464 cri.go:89] found id: ""
	I1204 21:19:23.344600   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.344613   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:23.344620   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:23.344689   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:23.375218   75464 cri.go:89] found id: ""
	I1204 21:19:23.375244   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.375253   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:23.375259   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:23.375321   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:23.405878   75464 cri.go:89] found id: ""
	I1204 21:19:23.405913   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.405923   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:23.405929   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:23.405979   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:23.442547   75464 cri.go:89] found id: ""
	I1204 21:19:23.442572   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.442580   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:23.442588   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:23.442599   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:23.457476   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:23.457503   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:23.526060   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:23.526088   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:23.526153   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:23.606683   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:23.606729   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:23.648224   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:23.648266   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:26.203216   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:26.215838   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:26.215886   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:26.248425   75464 cri.go:89] found id: ""
	I1204 21:19:26.248461   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.248474   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:26.248490   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:26.248558   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:26.282982   75464 cri.go:89] found id: ""
	I1204 21:19:26.283011   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.283022   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:26.283030   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:26.283094   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:26.316656   75464 cri.go:89] found id: ""
	I1204 21:19:26.316690   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.316702   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:26.316710   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:26.316778   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:26.352730   75464 cri.go:89] found id: ""
	I1204 21:19:26.352758   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.352766   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:26.352772   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:26.352819   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:26.385955   75464 cri.go:89] found id: ""
	I1204 21:19:26.385981   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.385991   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:26.386000   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:26.386065   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:26.418814   75464 cri.go:89] found id: ""
	I1204 21:19:26.418838   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.418846   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:26.418852   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:26.418900   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:26.455442   75464 cri.go:89] found id: ""
	I1204 21:19:26.455471   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.455483   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:26.455491   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:26.455561   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:26.498287   75464 cri.go:89] found id: ""
	I1204 21:19:26.498314   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.498322   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:26.498331   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:26.498345   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:26.512282   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:26.512312   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:26.576340   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:26.576366   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:26.576383   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:26.656234   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:26.656272   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:26.692676   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:26.692705   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:29.246548   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:29.261241   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:29.261310   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:29.297940   75464 cri.go:89] found id: ""
	I1204 21:19:29.297975   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.297987   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:29.297995   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:29.298060   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:29.330887   75464 cri.go:89] found id: ""
	I1204 21:19:29.330918   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.330930   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:29.330937   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:29.331001   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:29.364114   75464 cri.go:89] found id: ""
	I1204 21:19:29.364145   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.364152   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:29.364158   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:29.364214   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:29.397320   75464 cri.go:89] found id: ""
	I1204 21:19:29.397349   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.397357   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:29.397363   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:29.397410   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:29.430850   75464 cri.go:89] found id: ""
	I1204 21:19:29.430880   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.430892   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:29.430900   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:29.430965   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:29.464447   75464 cri.go:89] found id: ""
	I1204 21:19:29.464475   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.464484   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:29.464498   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:29.464564   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:29.497112   75464 cri.go:89] found id: ""
	I1204 21:19:29.497146   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.497158   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:29.497166   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:29.497229   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:29.533048   75464 cri.go:89] found id: ""
	I1204 21:19:29.533071   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.533080   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:29.533088   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:29.533099   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:29.584390   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:29.584424   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:29.598341   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:29.598369   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:29.663240   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:29.663264   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:29.663278   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:29.744146   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:29.744184   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:32.282931   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:32.296622   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:32.296683   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:32.330253   75464 cri.go:89] found id: ""
	I1204 21:19:32.330285   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.330297   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:32.330305   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:32.330370   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:32.363547   75464 cri.go:89] found id: ""
	I1204 21:19:32.363575   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.363588   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:32.363596   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:32.363661   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:32.396745   75464 cri.go:89] found id: ""
	I1204 21:19:32.396770   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.396781   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:32.396790   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:32.396851   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:32.432533   75464 cri.go:89] found id: ""
	I1204 21:19:32.432559   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.432569   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:32.432577   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:32.432640   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:32.470292   75464 cri.go:89] found id: ""
	I1204 21:19:32.470317   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.470327   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:32.470335   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:32.470401   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:32.502791   75464 cri.go:89] found id: ""
	I1204 21:19:32.502817   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.502824   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:32.502835   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:32.502900   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:32.536220   75464 cri.go:89] found id: ""
	I1204 21:19:32.536246   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.536254   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:32.536286   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:32.536344   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:32.570072   75464 cri.go:89] found id: ""
	I1204 21:19:32.570094   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.570102   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:32.570110   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:32.570127   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:32.624916   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:32.624964   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:32.638299   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:32.638328   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:32.704827   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:32.704855   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:32.704873   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:32.782324   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:32.782356   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:35.324136   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:35.337071   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:35.337132   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:35.368651   75464 cri.go:89] found id: ""
	I1204 21:19:35.368672   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.368679   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:35.368685   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:35.368731   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:35.402069   75464 cri.go:89] found id: ""
	I1204 21:19:35.402088   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.402099   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:35.402105   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:35.402156   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:35.432328   75464 cri.go:89] found id: ""
	I1204 21:19:35.432356   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.432367   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:35.432380   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:35.432440   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:35.465334   75464 cri.go:89] found id: ""
	I1204 21:19:35.465356   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.465363   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:35.465369   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:35.465440   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:35.497416   75464 cri.go:89] found id: ""
	I1204 21:19:35.497449   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.497462   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:35.497474   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:35.497535   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:35.533106   75464 cri.go:89] found id: ""
	I1204 21:19:35.533134   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.533145   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:35.533154   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:35.533216   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:35.570519   75464 cri.go:89] found id: ""
	I1204 21:19:35.570546   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.570555   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:35.570562   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:35.570628   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:35.601380   75464 cri.go:89] found id: ""
	I1204 21:19:35.601413   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.601424   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:35.601434   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:35.601455   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:35.656383   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:35.656420   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:35.671667   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:35.671696   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:35.737690   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:35.737716   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:35.737733   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:35.818129   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:35.818165   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:38.356596   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:38.369177   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:38.369235   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:38.401263   75464 cri.go:89] found id: ""
	I1204 21:19:38.401289   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.401301   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:38.401308   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:38.401379   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:38.432751   75464 cri.go:89] found id: ""
	I1204 21:19:38.432777   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.432786   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:38.432792   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:38.432853   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:38.465866   75464 cri.go:89] found id: ""
	I1204 21:19:38.465889   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.465898   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:38.465904   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:38.465954   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:38.508720   75464 cri.go:89] found id: ""
	I1204 21:19:38.508752   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.508763   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:38.508771   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:38.508827   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:38.543609   75464 cri.go:89] found id: ""
	I1204 21:19:38.543640   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.543649   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:38.543654   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:38.543728   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:38.579205   75464 cri.go:89] found id: ""
	I1204 21:19:38.579225   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.579233   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:38.579239   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:38.579286   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:38.616446   75464 cri.go:89] found id: ""
	I1204 21:19:38.616480   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.616492   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:38.616500   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:38.616563   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:38.651847   75464 cri.go:89] found id: ""
	I1204 21:19:38.651879   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.651893   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:38.651905   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:38.651920   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:38.730904   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:38.730940   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:38.768958   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:38.768987   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:38.818879   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:38.818917   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:38.832139   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:38.832168   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:38.904761   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:41.405046   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:41.417497   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:41.417578   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:41.450609   75464 cri.go:89] found id: ""
	I1204 21:19:41.450638   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.450649   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:41.450657   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:41.450725   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:41.486098   75464 cri.go:89] found id: ""
	I1204 21:19:41.486127   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.486135   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:41.486146   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:41.486218   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:41.520182   75464 cri.go:89] found id: ""
	I1204 21:19:41.520212   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.520225   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:41.520233   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:41.520305   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:41.551840   75464 cri.go:89] found id: ""
	I1204 21:19:41.551862   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.551870   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:41.551876   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:41.551928   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:41.584411   75464 cri.go:89] found id: ""
	I1204 21:19:41.584441   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.584448   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:41.584453   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:41.584500   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:41.614161   75464 cri.go:89] found id: ""
	I1204 21:19:41.614184   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.614199   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:41.614208   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:41.614263   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:41.645608   75464 cri.go:89] found id: ""
	I1204 21:19:41.645630   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.645637   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:41.645642   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:41.645688   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:41.676521   75464 cri.go:89] found id: ""
	I1204 21:19:41.676544   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.676552   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:41.676559   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:41.676570   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:41.726608   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:41.726633   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:41.739110   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:41.739134   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:41.810706   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:41.810727   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:41.810742   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:41.895725   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:41.895757   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:44.435032   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:44.449155   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:44.449223   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:44.479366   75464 cri.go:89] found id: ""
	I1204 21:19:44.479415   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.479424   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:44.479430   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:44.479480   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:44.520338   75464 cri.go:89] found id: ""
	I1204 21:19:44.520365   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.520374   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:44.520379   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:44.520443   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:44.554736   75464 cri.go:89] found id: ""
	I1204 21:19:44.554765   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.554773   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:44.554779   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:44.554829   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:44.592957   75464 cri.go:89] found id: ""
	I1204 21:19:44.592980   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.592987   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:44.592993   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:44.593041   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:44.626514   75464 cri.go:89] found id: ""
	I1204 21:19:44.626542   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.626551   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:44.626558   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:44.626624   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:44.667868   75464 cri.go:89] found id: ""
	I1204 21:19:44.667901   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.667913   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:44.667919   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:44.667968   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:44.703653   75464 cri.go:89] found id: ""
	I1204 21:19:44.703688   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.703699   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:44.703706   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:44.703766   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:44.737474   75464 cri.go:89] found id: ""
	I1204 21:19:44.737511   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.737523   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:44.737534   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:44.737549   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:44.787115   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:44.787146   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:44.799735   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:44.799765   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:44.861160   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:44.861179   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:44.861200   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:44.937758   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:44.937792   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:47.474604   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:47.486621   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:47.486680   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:47.522827   75464 cri.go:89] found id: ""
	I1204 21:19:47.522856   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.522870   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:47.522877   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:47.522938   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:47.553741   75464 cri.go:89] found id: ""
	I1204 21:19:47.553763   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.553771   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:47.553777   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:47.553837   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:47.610696   75464 cri.go:89] found id: ""
	I1204 21:19:47.610719   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.610730   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:47.610737   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:47.610803   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:47.645330   75464 cri.go:89] found id: ""
	I1204 21:19:47.645357   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.645367   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:47.645374   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:47.645431   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:47.680410   75464 cri.go:89] found id: ""
	I1204 21:19:47.680436   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.680444   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:47.680450   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:47.680499   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:47.712333   75464 cri.go:89] found id: ""
	I1204 21:19:47.712365   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.712376   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:47.712384   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:47.712442   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:47.749995   75464 cri.go:89] found id: ""
	I1204 21:19:47.750027   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.750039   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:47.750047   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:47.750110   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:47.786953   75464 cri.go:89] found id: ""
	I1204 21:19:47.786978   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.786988   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:47.786996   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:47.787008   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:47.853534   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:47.853561   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:47.853576   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:47.934237   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:47.934273   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:47.976010   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:47.976046   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:48.027502   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:48.027537   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:50.541987   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:50.555163   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:50.555246   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:50.588513   75464 cri.go:89] found id: ""
	I1204 21:19:50.588545   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.588555   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:50.588563   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:50.588618   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:50.623124   75464 cri.go:89] found id: ""
	I1204 21:19:50.623155   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.623165   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:50.623175   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:50.623240   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:50.656302   75464 cri.go:89] found id: ""
	I1204 21:19:50.656334   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.656347   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:50.656353   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:50.656421   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:50.688580   75464 cri.go:89] found id: ""
	I1204 21:19:50.688609   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.688621   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:50.688629   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:50.688700   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:50.721955   75464 cri.go:89] found id: ""
	I1204 21:19:50.721979   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.721987   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:50.721993   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:50.722047   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:50.755531   75464 cri.go:89] found id: ""
	I1204 21:19:50.755560   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.755571   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:50.755579   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:50.755637   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:50.789773   75464 cri.go:89] found id: ""
	I1204 21:19:50.789805   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.789816   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:50.789823   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:50.789890   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:50.821168   75464 cri.go:89] found id: ""
	I1204 21:19:50.821196   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.821207   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:50.821216   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:50.821230   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:50.871378   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:50.871406   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:50.883349   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:50.883387   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:50.953103   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:50.953129   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:50.953143   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:51.032209   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:51.032240   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:53.569126   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:53.582100   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:53.582167   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:53.613919   75464 cri.go:89] found id: ""
	I1204 21:19:53.613947   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.613958   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:53.613965   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:53.614031   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:53.649057   75464 cri.go:89] found id: ""
	I1204 21:19:53.649083   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.649090   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:53.649096   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:53.649153   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:53.685867   75464 cri.go:89] found id: ""
	I1204 21:19:53.685903   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.685915   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:53.685924   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:53.685983   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:53.723661   75464 cri.go:89] found id: ""
	I1204 21:19:53.723690   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.723702   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:53.723710   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:53.723774   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:53.768252   75464 cri.go:89] found id: ""
	I1204 21:19:53.768274   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.768281   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:53.768286   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:53.768334   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:53.806460   75464 cri.go:89] found id: ""
	I1204 21:19:53.806503   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.806512   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:53.806522   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:53.806577   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:53.839334   75464 cri.go:89] found id: ""
	I1204 21:19:53.839362   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.839382   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:53.839391   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:53.839452   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:53.873985   75464 cri.go:89] found id: ""
	I1204 21:19:53.874013   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.874021   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:53.874029   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:53.874046   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:53.929061   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:53.929101   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:53.943156   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:53.943183   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:54.023885   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:54.023914   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:54.023927   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:54.126662   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:54.126691   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:56.664579   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:56.676785   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:56.676835   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:56.715929   75464 cri.go:89] found id: ""
	I1204 21:19:56.715953   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.715964   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:56.715971   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:56.716026   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:56.747118   75464 cri.go:89] found id: ""
	I1204 21:19:56.747139   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.747146   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:56.747175   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:56.747225   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:56.777600   75464 cri.go:89] found id: ""
	I1204 21:19:56.777622   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.777628   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:56.777634   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:56.777684   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:56.808759   75464 cri.go:89] found id: ""
	I1204 21:19:56.808780   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.808787   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:56.808792   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:56.808849   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:56.838236   75464 cri.go:89] found id: ""
	I1204 21:19:56.838263   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.838274   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:56.838280   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:56.838336   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:56.866838   75464 cri.go:89] found id: ""
	I1204 21:19:56.866865   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.866875   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:56.866883   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:56.866938   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:56.897474   75464 cri.go:89] found id: ""
	I1204 21:19:56.897496   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.897504   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:56.897509   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:56.897566   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:56.929263   75464 cri.go:89] found id: ""
	I1204 21:19:56.929286   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.929294   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:56.929302   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:56.929311   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:56.980231   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:56.980256   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:56.991901   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:56.991928   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:57.068154   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:57.068172   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:57.068183   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:57.147865   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:57.147903   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:59.686011   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:59.699101   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:59.699156   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:59.742522   75464 cri.go:89] found id: ""
	I1204 21:19:59.742554   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.742565   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:59.742573   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:59.742637   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:59.785313   75464 cri.go:89] found id: ""
	I1204 21:19:59.785345   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.785357   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:59.785364   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:59.785423   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:59.821473   75464 cri.go:89] found id: ""
	I1204 21:19:59.821508   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.821520   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:59.821527   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:59.821585   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:59.857990   75464 cri.go:89] found id: ""
	I1204 21:19:59.858012   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.858020   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:59.858025   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:59.858077   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:59.895434   75464 cri.go:89] found id: ""
	I1204 21:19:59.895465   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.895478   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:59.895486   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:59.895546   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:59.929076   75464 cri.go:89] found id: ""
	I1204 21:19:59.929099   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.929110   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:59.929118   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:59.929180   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:59.962121   75464 cri.go:89] found id: ""
	I1204 21:19:59.962161   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.962173   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:59.962181   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:59.962244   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:59.999074   75464 cri.go:89] found id: ""
	I1204 21:19:59.999103   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.999115   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:59.999126   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:59.999138   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:00.081841   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:00.081888   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:00.120537   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:00.120576   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:00.171472   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:00.171506   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:00.184739   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:00.184770   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:00.256589   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:02.757225   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:02.771088   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:02.771156   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:02.808742   75464 cri.go:89] found id: ""
	I1204 21:20:02.808770   75464 logs.go:282] 0 containers: []
	W1204 21:20:02.808781   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:02.808788   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:02.808851   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:02.846517   75464 cri.go:89] found id: ""
	I1204 21:20:02.846539   75464 logs.go:282] 0 containers: []
	W1204 21:20:02.846548   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:02.846553   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:02.846600   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:02.879903   75464 cri.go:89] found id: ""
	I1204 21:20:02.879934   75464 logs.go:282] 0 containers: []
	W1204 21:20:02.879943   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:02.879948   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:02.879995   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:02.910040   75464 cri.go:89] found id: ""
	I1204 21:20:02.910072   75464 logs.go:282] 0 containers: []
	W1204 21:20:02.910083   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:02.910091   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:02.910153   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:02.941525   75464 cri.go:89] found id: ""
	I1204 21:20:02.941552   75464 logs.go:282] 0 containers: []
	W1204 21:20:02.941562   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:02.941570   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:02.941637   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:02.977450   75464 cri.go:89] found id: ""
	I1204 21:20:02.977476   75464 logs.go:282] 0 containers: []
	W1204 21:20:02.977484   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:02.977490   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:02.977547   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:03.007386   75464 cri.go:89] found id: ""
	I1204 21:20:03.007422   75464 logs.go:282] 0 containers: []
	W1204 21:20:03.007433   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:03.007448   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:03.007508   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:03.040015   75464 cri.go:89] found id: ""
	I1204 21:20:03.040038   75464 logs.go:282] 0 containers: []
	W1204 21:20:03.040049   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:03.040058   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:03.040068   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:03.092371   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:03.092397   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:03.104747   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:03.104765   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:03.167760   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:03.167784   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:03.167799   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:03.242972   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:03.243010   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:05.783874   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:05.796340   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:05.796401   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:05.829068   75464 cri.go:89] found id: ""
	I1204 21:20:05.829094   75464 logs.go:282] 0 containers: []
	W1204 21:20:05.829105   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:05.829112   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:05.829169   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:05.863998   75464 cri.go:89] found id: ""
	I1204 21:20:05.864027   75464 logs.go:282] 0 containers: []
	W1204 21:20:05.864036   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:05.864042   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:05.864096   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:05.899645   75464 cri.go:89] found id: ""
	I1204 21:20:05.899669   75464 logs.go:282] 0 containers: []
	W1204 21:20:05.899677   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:05.899682   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:05.899727   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:05.935815   75464 cri.go:89] found id: ""
	I1204 21:20:05.935840   75464 logs.go:282] 0 containers: []
	W1204 21:20:05.935848   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:05.935854   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:05.935901   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:05.972284   75464 cri.go:89] found id: ""
	I1204 21:20:05.972308   75464 logs.go:282] 0 containers: []
	W1204 21:20:05.972321   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:05.972326   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:05.972372   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:06.007217   75464 cri.go:89] found id: ""
	I1204 21:20:06.007261   75464 logs.go:282] 0 containers: []
	W1204 21:20:06.007273   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:06.007280   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:06.007338   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:06.042158   75464 cri.go:89] found id: ""
	I1204 21:20:06.042190   75464 logs.go:282] 0 containers: []
	W1204 21:20:06.042201   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:06.042208   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:06.042280   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:06.075199   75464 cri.go:89] found id: ""
	I1204 21:20:06.075223   75464 logs.go:282] 0 containers: []
	W1204 21:20:06.075230   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:06.075237   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:06.075248   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:06.148255   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:06.148286   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:06.191454   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:06.191478   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:06.243952   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:06.243979   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:06.256355   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:06.256381   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:06.323958   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:08.824582   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:08.836724   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:08.836793   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:08.868526   75464 cri.go:89] found id: ""
	I1204 21:20:08.868596   75464 logs.go:282] 0 containers: []
	W1204 21:20:08.868611   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:08.868619   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:08.868679   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:08.899088   75464 cri.go:89] found id: ""
	I1204 21:20:08.899114   75464 logs.go:282] 0 containers: []
	W1204 21:20:08.899123   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:08.899128   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:08.899181   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:08.929116   75464 cri.go:89] found id: ""
	I1204 21:20:08.929145   75464 logs.go:282] 0 containers: []
	W1204 21:20:08.929156   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:08.929164   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:08.929229   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:08.970502   75464 cri.go:89] found id: ""
	I1204 21:20:08.970528   75464 logs.go:282] 0 containers: []
	W1204 21:20:08.970539   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:08.970547   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:08.970610   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:09.000619   75464 cri.go:89] found id: ""
	I1204 21:20:09.000644   75464 logs.go:282] 0 containers: []
	W1204 21:20:09.000652   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:09.000658   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:09.000715   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:09.031597   75464 cri.go:89] found id: ""
	I1204 21:20:09.031624   75464 logs.go:282] 0 containers: []
	W1204 21:20:09.031634   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:09.031641   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:09.031700   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:09.063615   75464 cri.go:89] found id: ""
	I1204 21:20:09.063639   75464 logs.go:282] 0 containers: []
	W1204 21:20:09.063646   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:09.063651   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:09.063708   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:09.096291   75464 cri.go:89] found id: ""
	I1204 21:20:09.096322   75464 logs.go:282] 0 containers: []
	W1204 21:20:09.096333   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:09.096343   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:09.096357   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:09.169976   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:09.170009   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:09.206514   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:09.206537   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:09.257587   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:09.257614   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:09.269939   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:09.269962   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:09.334350   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:11.835270   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:11.848192   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:11.848249   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:11.880377   75464 cri.go:89] found id: ""
	I1204 21:20:11.880409   75464 logs.go:282] 0 containers: []
	W1204 21:20:11.880422   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:11.880429   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:11.880495   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:11.914800   75464 cri.go:89] found id: ""
	I1204 21:20:11.914832   75464 logs.go:282] 0 containers: []
	W1204 21:20:11.914844   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:11.914852   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:11.914918   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:11.950520   75464 cri.go:89] found id: ""
	I1204 21:20:11.950545   75464 logs.go:282] 0 containers: []
	W1204 21:20:11.950553   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:11.950559   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:11.950611   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:11.983909   75464 cri.go:89] found id: ""
	I1204 21:20:11.983934   75464 logs.go:282] 0 containers: []
	W1204 21:20:11.983944   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:11.983953   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:11.984017   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:12.020457   75464 cri.go:89] found id: ""
	I1204 21:20:12.020488   75464 logs.go:282] 0 containers: []
	W1204 21:20:12.020505   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:12.020513   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:12.020581   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:12.054630   75464 cri.go:89] found id: ""
	I1204 21:20:12.054663   75464 logs.go:282] 0 containers: []
	W1204 21:20:12.054674   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:12.054682   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:12.054747   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:12.089172   75464 cri.go:89] found id: ""
	I1204 21:20:12.089195   75464 logs.go:282] 0 containers: []
	W1204 21:20:12.089202   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:12.089208   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:12.089267   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:12.123979   75464 cri.go:89] found id: ""
	I1204 21:20:12.124009   75464 logs.go:282] 0 containers: []
	W1204 21:20:12.124020   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:12.124039   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:12.124054   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:12.191368   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:12.191414   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:12.191432   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:12.272985   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:12.273029   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:12.310427   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:12.310459   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:12.363183   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:12.363225   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:14.876599   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:14.889708   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:14.889784   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:14.922789   75464 cri.go:89] found id: ""
	I1204 21:20:14.922819   75464 logs.go:282] 0 containers: []
	W1204 21:20:14.922829   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:14.922835   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:14.922882   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:14.953998   75464 cri.go:89] found id: ""
	I1204 21:20:14.954026   75464 logs.go:282] 0 containers: []
	W1204 21:20:14.954038   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:14.954044   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:14.954108   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:14.983608   75464 cri.go:89] found id: ""
	I1204 21:20:14.983635   75464 logs.go:282] 0 containers: []
	W1204 21:20:14.983646   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:14.983653   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:14.983707   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:15.016982   75464 cri.go:89] found id: ""
	I1204 21:20:15.017007   75464 logs.go:282] 0 containers: []
	W1204 21:20:15.017015   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:15.017020   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:15.017070   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:15.051642   75464 cri.go:89] found id: ""
	I1204 21:20:15.051672   75464 logs.go:282] 0 containers: []
	W1204 21:20:15.051683   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:15.051690   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:15.051792   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:15.084250   75464 cri.go:89] found id: ""
	I1204 21:20:15.084279   75464 logs.go:282] 0 containers: []
	W1204 21:20:15.084289   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:15.084297   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:15.084364   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:15.119910   75464 cri.go:89] found id: ""
	I1204 21:20:15.119943   75464 logs.go:282] 0 containers: []
	W1204 21:20:15.119953   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:15.119965   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:15.120025   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:15.154270   75464 cri.go:89] found id: ""
	I1204 21:20:15.154301   75464 logs.go:282] 0 containers: []
	W1204 21:20:15.154312   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:15.154322   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:15.154336   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:15.205075   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:15.205109   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:15.218104   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:15.218130   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:15.285162   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:15.285187   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:15.285209   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:15.367003   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:15.367040   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:17.909835   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:17.921899   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:17.921954   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:17.954678   75464 cri.go:89] found id: ""
	I1204 21:20:17.954708   75464 logs.go:282] 0 containers: []
	W1204 21:20:17.954717   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:17.954723   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:17.954776   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:17.984522   75464 cri.go:89] found id: ""
	I1204 21:20:17.984545   75464 logs.go:282] 0 containers: []
	W1204 21:20:17.984555   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:17.984560   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:17.984607   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:18.016731   75464 cri.go:89] found id: ""
	I1204 21:20:18.016754   75464 logs.go:282] 0 containers: []
	W1204 21:20:18.016763   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:18.016768   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:18.016820   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:18.050104   75464 cri.go:89] found id: ""
	I1204 21:20:18.050136   75464 logs.go:282] 0 containers: []
	W1204 21:20:18.050147   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:18.050155   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:18.050221   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:18.083944   75464 cri.go:89] found id: ""
	I1204 21:20:18.083984   75464 logs.go:282] 0 containers: []
	W1204 21:20:18.084006   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:18.084015   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:18.084084   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:18.116170   75464 cri.go:89] found id: ""
	I1204 21:20:18.116203   75464 logs.go:282] 0 containers: []
	W1204 21:20:18.116215   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:18.116223   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:18.116292   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:18.147348   75464 cri.go:89] found id: ""
	I1204 21:20:18.147395   75464 logs.go:282] 0 containers: []
	W1204 21:20:18.147407   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:18.147415   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:18.147473   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:18.177782   75464 cri.go:89] found id: ""
	I1204 21:20:18.177805   75464 logs.go:282] 0 containers: []
	W1204 21:20:18.177816   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:18.177827   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:18.177840   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:18.227464   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:18.227494   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:18.239741   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:18.239772   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:18.310732   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:18.310752   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:18.310763   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:18.389626   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:18.389659   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:20.926749   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:20.939710   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:20.939797   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:20.972464   75464 cri.go:89] found id: ""
	I1204 21:20:20.972488   75464 logs.go:282] 0 containers: []
	W1204 21:20:20.972497   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:20.972506   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:20.972568   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:21.010568   75464 cri.go:89] found id: ""
	I1204 21:20:21.010597   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.010610   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:21.010618   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:21.010678   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:21.046145   75464 cri.go:89] found id: ""
	I1204 21:20:21.046172   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.046183   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:21.046191   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:21.046263   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:21.078460   75464 cri.go:89] found id: ""
	I1204 21:20:21.078488   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.078496   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:21.078502   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:21.078569   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:21.117274   75464 cri.go:89] found id: ""
	I1204 21:20:21.117303   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.117314   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:21.117320   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:21.117366   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:21.152375   75464 cri.go:89] found id: ""
	I1204 21:20:21.152408   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.152419   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:21.152427   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:21.152496   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:21.185933   75464 cri.go:89] found id: ""
	I1204 21:20:21.185966   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.185975   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:21.185981   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:21.186042   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:21.219289   75464 cri.go:89] found id: ""
	I1204 21:20:21.219325   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.219338   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:21.219350   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:21.219363   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:21.232385   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:21.232415   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:21.298766   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:21.298793   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:21.298808   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:21.376741   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:21.376777   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:21.414649   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:21.414682   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:23.963472   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:23.976644   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:23.976709   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:24.010598   75464 cri.go:89] found id: ""
	I1204 21:20:24.010626   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.010637   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:24.010645   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:24.010703   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:24.045479   75464 cri.go:89] found id: ""
	I1204 21:20:24.045509   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.045529   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:24.045537   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:24.045599   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:24.081181   75464 cri.go:89] found id: ""
	I1204 21:20:24.081215   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.081235   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:24.081243   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:24.081309   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:24.113823   75464 cri.go:89] found id: ""
	I1204 21:20:24.113847   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.113857   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:24.113864   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:24.113927   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:24.149178   75464 cri.go:89] found id: ""
	I1204 21:20:24.149205   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.149216   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:24.149224   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:24.149289   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:24.183304   75464 cri.go:89] found id: ""
	I1204 21:20:24.183339   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.183350   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:24.183359   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:24.183448   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:24.214999   75464 cri.go:89] found id: ""
	I1204 21:20:24.215023   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.215034   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:24.215042   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:24.215107   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:24.247278   75464 cri.go:89] found id: ""
	I1204 21:20:24.247312   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.247323   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:24.247354   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:24.247387   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:24.302879   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:24.302913   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:24.315674   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:24.315697   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:24.382394   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:24.382422   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:24.382436   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:24.462763   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:24.462796   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:27.002577   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:27.015256   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:27.015324   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:27.049626   75464 cri.go:89] found id: ""
	I1204 21:20:27.049657   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.049669   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:27.049677   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:27.049733   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:27.085312   75464 cri.go:89] found id: ""
	I1204 21:20:27.085341   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.085354   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:27.085362   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:27.085417   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:27.119898   75464 cri.go:89] found id: ""
	I1204 21:20:27.119928   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.119939   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:27.119947   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:27.120010   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:27.153605   75464 cri.go:89] found id: ""
	I1204 21:20:27.153642   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.153651   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:27.153657   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:27.153724   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:27.191002   75464 cri.go:89] found id: ""
	I1204 21:20:27.191027   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.191038   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:27.191045   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:27.191107   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:27.226469   75464 cri.go:89] found id: ""
	I1204 21:20:27.226495   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.226506   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:27.226515   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:27.226579   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:27.258586   75464 cri.go:89] found id: ""
	I1204 21:20:27.258613   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.258623   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:27.258630   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:27.258694   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:27.293119   75464 cri.go:89] found id: ""
	I1204 21:20:27.293156   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.293165   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:27.293174   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:27.293187   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:27.346870   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:27.346903   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:27.360448   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:27.360487   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:27.431571   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:27.431597   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:27.431613   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:27.509664   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:27.509698   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:30.049120   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:30.063294   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:30.063360   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:30.097334   75464 cri.go:89] found id: ""
	I1204 21:20:30.097364   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.097376   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:30.097383   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:30.097457   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:30.132734   75464 cri.go:89] found id: ""
	I1204 21:20:30.132757   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.132765   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:30.132771   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:30.132820   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:30.166539   75464 cri.go:89] found id: ""
	I1204 21:20:30.166565   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.166573   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:30.166579   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:30.166637   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:30.201953   75464 cri.go:89] found id: ""
	I1204 21:20:30.201993   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.202007   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:30.202016   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:30.202089   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:30.239062   75464 cri.go:89] found id: ""
	I1204 21:20:30.239102   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.239116   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:30.239132   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:30.239200   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:30.282344   75464 cri.go:89] found id: ""
	I1204 21:20:30.282374   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.282383   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:30.282389   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:30.282439   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:30.316615   75464 cri.go:89] found id: ""
	I1204 21:20:30.316642   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.316653   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:30.316661   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:30.316764   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:30.352333   75464 cri.go:89] found id: ""
	I1204 21:20:30.352358   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.352368   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:30.352380   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:30.352393   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:30.406022   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:30.406058   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:30.419790   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:30.419819   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:30.485693   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:30.485717   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:30.485738   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:30.569313   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:30.569357   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:33.107542   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:33.121934   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:33.122007   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:33.154672   75464 cri.go:89] found id: ""
	I1204 21:20:33.154698   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.154709   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:33.154717   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:33.154784   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:33.189186   75464 cri.go:89] found id: ""
	I1204 21:20:33.189218   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.189229   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:33.189236   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:33.189291   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:33.217618   75464 cri.go:89] found id: ""
	I1204 21:20:33.217637   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.217651   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:33.217657   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:33.217704   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:33.246895   75464 cri.go:89] found id: ""
	I1204 21:20:33.246916   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.246923   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:33.246928   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:33.246970   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:33.278698   75464 cri.go:89] found id: ""
	I1204 21:20:33.278718   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.278725   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:33.278731   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:33.278771   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:33.307671   75464 cri.go:89] found id: ""
	I1204 21:20:33.307703   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.307721   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:33.307729   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:33.307791   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:33.342929   75464 cri.go:89] found id: ""
	I1204 21:20:33.342950   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.342958   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:33.342963   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:33.343009   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:33.374686   75464 cri.go:89] found id: ""
	I1204 21:20:33.374718   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.374730   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:33.374741   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:33.374758   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:33.424117   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:33.424153   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:33.437691   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:33.437724   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:33.517172   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:33.517196   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:33.517209   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:33.597299   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:33.597341   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:36.137849   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:36.152485   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:36.152544   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:36.186867   75464 cri.go:89] found id: ""
	I1204 21:20:36.186895   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.186906   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:36.186920   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:36.186983   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:36.220628   75464 cri.go:89] found id: ""
	I1204 21:20:36.220658   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.220671   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:36.220679   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:36.220735   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:36.254264   75464 cri.go:89] found id: ""
	I1204 21:20:36.254298   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.254310   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:36.254318   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:36.254384   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:36.290929   75464 cri.go:89] found id: ""
	I1204 21:20:36.290956   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.290964   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:36.290970   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:36.291016   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:36.326967   75464 cri.go:89] found id: ""
	I1204 21:20:36.326991   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.326999   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:36.327004   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:36.327072   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:36.366892   75464 cri.go:89] found id: ""
	I1204 21:20:36.366916   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.366924   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:36.366930   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:36.366990   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:36.405671   75464 cri.go:89] found id: ""
	I1204 21:20:36.405696   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.405703   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:36.405709   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:36.405762   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:36.439591   75464 cri.go:89] found id: ""
	I1204 21:20:36.439621   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.439628   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:36.439637   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:36.439650   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:36.505710   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:36.505737   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:36.505751   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:36.586111   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:36.586155   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:36.628086   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:36.628121   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:36.680152   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:36.680183   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:39.194223   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:39.207153   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:39.207230   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:39.240867   75464 cri.go:89] found id: ""
	I1204 21:20:39.240895   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.240903   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:39.240908   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:39.240959   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:39.274704   75464 cri.go:89] found id: ""
	I1204 21:20:39.274735   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.274742   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:39.274748   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:39.274800   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:39.307559   75464 cri.go:89] found id: ""
	I1204 21:20:39.307591   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.307601   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:39.307609   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:39.307671   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:39.355489   75464 cri.go:89] found id: ""
	I1204 21:20:39.355524   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.355536   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:39.355543   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:39.355610   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:39.395885   75464 cri.go:89] found id: ""
	I1204 21:20:39.395909   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.395917   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:39.395923   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:39.395976   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:39.428817   75464 cri.go:89] found id: ""
	I1204 21:20:39.428848   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.428858   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:39.428864   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:39.428929   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:39.463827   75464 cri.go:89] found id: ""
	I1204 21:20:39.463857   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.463870   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:39.463877   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:39.463926   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:39.496677   75464 cri.go:89] found id: ""
	I1204 21:20:39.496710   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.496721   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:39.496732   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:39.496755   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:39.533759   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:39.533787   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:39.586373   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:39.586409   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:39.599533   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:39.599568   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:39.670139   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:39.670164   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:39.670176   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:42.245896   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:42.260604   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:42.260676   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:42.294051   75464 cri.go:89] found id: ""
	I1204 21:20:42.294078   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.294085   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:42.294094   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:42.294160   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:42.327361   75464 cri.go:89] found id: ""
	I1204 21:20:42.327408   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.327421   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:42.327428   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:42.327482   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:42.358701   75464 cri.go:89] found id: ""
	I1204 21:20:42.358731   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.358740   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:42.358746   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:42.358795   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:42.389837   75464 cri.go:89] found id: ""
	I1204 21:20:42.389863   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.389871   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:42.389877   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:42.389926   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:42.430495   75464 cri.go:89] found id: ""
	I1204 21:20:42.430522   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.430534   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:42.430541   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:42.430590   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:42.462918   75464 cri.go:89] found id: ""
	I1204 21:20:42.462949   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.462958   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:42.462963   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:42.463031   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:42.500726   75464 cri.go:89] found id: ""
	I1204 21:20:42.500754   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.500769   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:42.500776   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:42.500842   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:42.538601   75464 cri.go:89] found id: ""
	I1204 21:20:42.538628   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.538635   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:42.538644   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:42.538655   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:42.591308   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:42.591344   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:42.604221   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:42.604244   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:42.679954   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:42.679982   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:42.679999   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:42.768383   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:42.768422   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:45.312054   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:45.325206   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:45.325304   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:45.358781   75464 cri.go:89] found id: ""
	I1204 21:20:45.358809   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.358817   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:45.358824   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:45.358874   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:45.391920   75464 cri.go:89] found id: ""
	I1204 21:20:45.391945   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.391957   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:45.391964   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:45.392030   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:45.426546   75464 cri.go:89] found id: ""
	I1204 21:20:45.426570   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.426578   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:45.426583   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:45.426633   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:45.459432   75464 cri.go:89] found id: ""
	I1204 21:20:45.459462   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.459472   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:45.459479   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:45.459547   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:45.494217   75464 cri.go:89] found id: ""
	I1204 21:20:45.494256   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.494268   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:45.494276   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:45.494352   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:45.531417   75464 cri.go:89] found id: ""
	I1204 21:20:45.531446   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.531458   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:45.531473   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:45.531547   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:45.564973   75464 cri.go:89] found id: ""
	I1204 21:20:45.565005   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.565016   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:45.565024   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:45.565088   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:45.601285   75464 cri.go:89] found id: ""
	I1204 21:20:45.601315   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.601324   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:45.601333   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:45.601344   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:45.656229   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:45.656267   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:45.669851   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:45.669876   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:45.740674   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:45.740704   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:45.740720   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:45.845612   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:45.845657   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:48.389508   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:48.401989   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:48.402052   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:48.438477   75464 cri.go:89] found id: ""
	I1204 21:20:48.438502   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.438514   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:48.438521   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:48.438579   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:48.476096   75464 cri.go:89] found id: ""
	I1204 21:20:48.476129   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.476142   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:48.476151   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:48.476219   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:48.514085   75464 cri.go:89] found id: ""
	I1204 21:20:48.514112   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.514124   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:48.514132   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:48.514208   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:48.551360   75464 cri.go:89] found id: ""
	I1204 21:20:48.551409   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.551420   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:48.551428   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:48.551500   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:48.588424   75464 cri.go:89] found id: ""
	I1204 21:20:48.588463   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.588475   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:48.588483   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:48.588552   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:48.622842   75464 cri.go:89] found id: ""
	I1204 21:20:48.622868   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.622876   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:48.622881   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:48.622942   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:48.665525   75464 cri.go:89] found id: ""
	I1204 21:20:48.665575   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.665585   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:48.665592   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:48.665659   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:48.706554   75464 cri.go:89] found id: ""
	I1204 21:20:48.706581   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.706591   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:48.706602   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:48.706617   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:48.757835   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:48.757870   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:48.771967   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:48.772003   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:48.843093   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:48.843123   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:48.843140   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:48.919637   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:48.919681   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:51.457865   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:51.472751   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:51.472827   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:51.514777   75464 cri.go:89] found id: ""
	I1204 21:20:51.514814   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.514827   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:51.514835   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:51.514904   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:51.563932   75464 cri.go:89] found id: ""
	I1204 21:20:51.563957   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.563968   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:51.563976   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:51.564042   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:51.606714   75464 cri.go:89] found id: ""
	I1204 21:20:51.606752   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.606765   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:51.606773   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:51.606837   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:51.641391   75464 cri.go:89] found id: ""
	I1204 21:20:51.641427   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.641438   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:51.641446   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:51.641502   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:51.674971   75464 cri.go:89] found id: ""
	I1204 21:20:51.675000   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.675011   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:51.675019   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:51.675082   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:51.709211   75464 cri.go:89] found id: ""
	I1204 21:20:51.709242   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.709250   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:51.709257   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:51.709306   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:51.742425   75464 cri.go:89] found id: ""
	I1204 21:20:51.742460   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.742472   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:51.742480   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:51.742534   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:51.782292   75464 cri.go:89] found id: ""
	I1204 21:20:51.782339   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.782351   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:51.782361   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:51.782380   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:51.833009   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:51.833040   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:51.846862   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:51.846905   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:51.911100   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:51.911129   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:51.911147   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:51.987841   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:51.987879   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:54.527097   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:54.541248   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:54.541344   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:54.582747   75464 cri.go:89] found id: ""
	I1204 21:20:54.582772   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.582780   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:54.582785   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:54.582844   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:54.615891   75464 cri.go:89] found id: ""
	I1204 21:20:54.615914   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.615922   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:54.615927   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:54.615983   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:54.648994   75464 cri.go:89] found id: ""
	I1204 21:20:54.649021   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.649031   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:54.649037   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:54.649095   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:54.683000   75464 cri.go:89] found id: ""
	I1204 21:20:54.683026   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.683034   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:54.683040   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:54.683100   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:54.715182   75464 cri.go:89] found id: ""
	I1204 21:20:54.715211   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.715221   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:54.715228   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:54.715290   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:54.752620   75464 cri.go:89] found id: ""
	I1204 21:20:54.752655   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.752667   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:54.752674   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:54.752740   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:54.790879   75464 cri.go:89] found id: ""
	I1204 21:20:54.790907   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.790919   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:54.790926   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:54.790994   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:54.824340   75464 cri.go:89] found id: ""
	I1204 21:20:54.824380   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.824393   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:54.824405   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:54.824428   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:54.874330   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:54.874365   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:54.887537   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:54.887565   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:54.958675   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:54.958697   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:54.958709   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:55.036909   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:55.036946   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:57.576603   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:57.590013   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:57.590080   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:57.624654   75464 cri.go:89] found id: ""
	I1204 21:20:57.624690   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.624701   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:57.624710   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:57.624774   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:57.660404   75464 cri.go:89] found id: ""
	I1204 21:20:57.660445   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.660457   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:57.660464   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:57.660528   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:57.693444   75464 cri.go:89] found id: ""
	I1204 21:20:57.693472   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.693483   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:57.693491   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:57.693558   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:57.729361   75464 cri.go:89] found id: ""
	I1204 21:20:57.729387   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.729397   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:57.729403   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:57.729454   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:57.760508   75464 cri.go:89] found id: ""
	I1204 21:20:57.760535   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.760546   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:57.760554   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:57.760608   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:57.794110   75464 cri.go:89] found id: ""
	I1204 21:20:57.794133   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.794142   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:57.794151   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:57.794214   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:57.827907   75464 cri.go:89] found id: ""
	I1204 21:20:57.827936   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.827947   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:57.827954   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:57.828014   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:57.860714   75464 cri.go:89] found id: ""
	I1204 21:20:57.860742   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.860753   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:57.860763   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:57.860778   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:57.926898   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:57.926926   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:57.926943   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:58.000298   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:58.000328   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:58.035675   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:58.035708   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:58.086663   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:58.086698   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:21:00.600646   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:21:00.613485   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:21:00.613550   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:21:00.646324   75464 cri.go:89] found id: ""
	I1204 21:21:00.646349   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.646357   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:21:00.646362   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:21:00.646417   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:21:00.675779   75464 cri.go:89] found id: ""
	I1204 21:21:00.675802   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.675814   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:21:00.675821   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:21:00.675874   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:21:00.706244   75464 cri.go:89] found id: ""
	I1204 21:21:00.706264   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.706272   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:21:00.706278   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:21:00.706334   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:21:00.738086   75464 cri.go:89] found id: ""
	I1204 21:21:00.738114   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.738126   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:21:00.738134   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:21:00.738195   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:21:00.768646   75464 cri.go:89] found id: ""
	I1204 21:21:00.768671   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.768682   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:21:00.768690   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:21:00.768750   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:21:00.797939   75464 cri.go:89] found id: ""
	I1204 21:21:00.797960   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.797968   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:21:00.797973   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:21:00.798016   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:21:00.831928   75464 cri.go:89] found id: ""
	I1204 21:21:00.831959   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.831969   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:21:00.831977   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:21:00.832042   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:21:00.868462   75464 cri.go:89] found id: ""
	I1204 21:21:00.868489   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.868498   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:21:00.868506   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:21:00.868518   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:21:00.881721   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:21:00.881745   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:21:00.949263   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:21:00.949290   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:21:00.949307   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:21:01.031940   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:21:01.031990   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:21:01.070545   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:21:01.070577   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:21:03.620358   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:21:03.634415   75464 kubeadm.go:597] duration metric: took 4m4.247057397s to restartPrimaryControlPlane
	W1204 21:21:03.634499   75464 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1204 21:21:03.634530   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1204 21:21:08.140159   75464 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.505600399s)
	I1204 21:21:08.140254   75464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:21:08.159450   75464 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:21:08.169756   75464 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:21:08.179705   75464 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:21:08.179729   75464 kubeadm.go:157] found existing configuration files:
	
	I1204 21:21:08.179783   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:21:08.188796   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:21:08.188871   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:21:08.197758   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:21:08.206347   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:21:08.206409   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:21:08.215431   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:21:08.224674   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:21:08.224737   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:21:08.234337   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:21:08.243774   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:21:08.243833   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:21:08.253498   75464 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 21:21:08.321237   75464 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1204 21:21:08.321370   75464 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 21:21:08.458714   75464 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 21:21:08.458866   75464 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 21:21:08.459026   75464 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1204 21:21:08.639536   75464 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 21:21:08.641635   75464 out.go:235]   - Generating certificates and keys ...
	I1204 21:21:08.641739   75464 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 21:21:08.641826   75464 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 21:21:08.641935   75464 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1204 21:21:08.642068   75464 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1204 21:21:08.642175   75464 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1204 21:21:08.642223   75464 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1204 21:21:08.642498   75464 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1204 21:21:08.642914   75464 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1204 21:21:08.643567   75464 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1204 21:21:08.644276   75464 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1204 21:21:08.644502   75464 kubeadm.go:310] [certs] Using the existing "sa" key
	I1204 21:21:08.644553   75464 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 21:21:08.800107   75464 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 21:21:08.920050   75464 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 21:21:09.376869   75464 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 21:21:09.463826   75464 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 21:21:09.479167   75464 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 21:21:09.479321   75464 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 21:21:09.479434   75464 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 21:21:09.606736   75464 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 21:21:09.608599   75464 out.go:235]   - Booting up control plane ...
	I1204 21:21:09.608729   75464 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 21:21:09.613477   75464 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 21:21:09.614444   75464 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 21:21:09.623091   75464 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 21:21:09.626249   75464 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1204 21:21:49.627118   75464 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1204 21:21:49.627744   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:21:49.627940   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:21:54.628283   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:21:54.628526   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:22:04.628774   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:22:04.629010   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:22:24.629623   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:22:24.629860   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:23:04.631416   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:23:04.631710   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:23:04.631725   75464 kubeadm.go:310] 
	I1204 21:23:04.631799   75464 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1204 21:23:04.631878   75464 kubeadm.go:310] 		timed out waiting for the condition
	I1204 21:23:04.631890   75464 kubeadm.go:310] 
	I1204 21:23:04.631961   75464 kubeadm.go:310] 	This error is likely caused by:
	I1204 21:23:04.632036   75464 kubeadm.go:310] 		- The kubelet is not running
	I1204 21:23:04.632198   75464 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1204 21:23:04.632215   75464 kubeadm.go:310] 
	I1204 21:23:04.632383   75464 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1204 21:23:04.632461   75464 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1204 21:23:04.632516   75464 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1204 21:23:04.632528   75464 kubeadm.go:310] 
	I1204 21:23:04.632675   75464 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1204 21:23:04.632796   75464 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1204 21:23:04.632815   75464 kubeadm.go:310] 
	I1204 21:23:04.632974   75464 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1204 21:23:04.633074   75464 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1204 21:23:04.633176   75464 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1204 21:23:04.633304   75464 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1204 21:23:04.633322   75464 kubeadm.go:310] 
	I1204 21:23:04.634981   75464 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 21:23:04.635061   75464 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1204 21:23:04.635118   75464 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1204 21:23:04.635222   75464 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1204 21:23:04.635272   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1204 21:23:05.103010   75464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:23:05.116784   75464 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:23:05.126269   75464 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:23:05.126290   75464 kubeadm.go:157] found existing configuration files:
	
	I1204 21:23:05.126331   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:23:05.134867   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:23:05.134919   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:23:05.143682   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:23:05.151701   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:23:05.151766   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:23:05.160033   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:23:05.168125   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:23:05.168175   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:23:05.176976   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:23:05.185549   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:23:05.185592   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:23:05.194156   75464 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 21:23:05.394966   75464 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 21:25:01.433781   75464 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1204 21:25:01.433941   75464 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1204 21:25:01.434011   75464 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1204 21:25:01.434069   75464 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 21:25:01.434170   75464 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 21:25:01.434315   75464 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 21:25:01.434431   75464 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1204 21:25:01.434514   75464 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 21:25:01.436334   75464 out.go:235]   - Generating certificates and keys ...
	I1204 21:25:01.436408   75464 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 21:25:01.436482   75464 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 21:25:01.436550   75464 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1204 21:25:01.436644   75464 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1204 21:25:01.436745   75464 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1204 21:25:01.436819   75464 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1204 21:25:01.436885   75464 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1204 21:25:01.436942   75464 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1204 21:25:01.437004   75464 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1204 21:25:01.437068   75464 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1204 21:25:01.437101   75464 kubeadm.go:310] [certs] Using the existing "sa" key
	I1204 21:25:01.437150   75464 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 21:25:01.437193   75464 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 21:25:01.437239   75464 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 21:25:01.437309   75464 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 21:25:01.437370   75464 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 21:25:01.437458   75464 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 21:25:01.437568   75464 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 21:25:01.437636   75464 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 21:25:01.437701   75464 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 21:25:01.439149   75464 out.go:235]   - Booting up control plane ...
	I1204 21:25:01.439251   75464 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 21:25:01.439347   75464 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 21:25:01.439457   75464 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 21:25:01.439531   75464 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 21:25:01.439672   75464 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1204 21:25:01.439736   75464 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1204 21:25:01.439798   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:25:01.439966   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:25:01.440044   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:25:01.440205   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:25:01.440259   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:25:01.440487   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:25:01.440578   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:25:01.440768   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:25:01.440835   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:25:01.440991   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:25:01.441006   75464 kubeadm.go:310] 
	I1204 21:25:01.441043   75464 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1204 21:25:01.441078   75464 kubeadm.go:310] 		timed out waiting for the condition
	I1204 21:25:01.441084   75464 kubeadm.go:310] 
	I1204 21:25:01.441114   75464 kubeadm.go:310] 	This error is likely caused by:
	I1204 21:25:01.441143   75464 kubeadm.go:310] 		- The kubelet is not running
	I1204 21:25:01.441233   75464 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1204 21:25:01.441242   75464 kubeadm.go:310] 
	I1204 21:25:01.441335   75464 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1204 21:25:01.441369   75464 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1204 21:25:01.441403   75464 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1204 21:25:01.441410   75464 kubeadm.go:310] 
	I1204 21:25:01.441503   75464 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1204 21:25:01.441602   75464 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1204 21:25:01.441610   75464 kubeadm.go:310] 
	I1204 21:25:01.441705   75464 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1204 21:25:01.441779   75464 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1204 21:25:01.441857   75464 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1204 21:25:01.441934   75464 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1204 21:25:01.441961   75464 kubeadm.go:310] 
	I1204 21:25:01.442011   75464 kubeadm.go:394] duration metric: took 8m2.105750462s to StartCluster
	I1204 21:25:01.442050   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:25:01.442119   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:25:01.484552   75464 cri.go:89] found id: ""
	I1204 21:25:01.484582   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.484606   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:25:01.484614   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:25:01.484681   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:25:01.517972   75464 cri.go:89] found id: ""
	I1204 21:25:01.517999   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.518007   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:25:01.518013   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:25:01.518078   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:25:01.555068   75464 cri.go:89] found id: ""
	I1204 21:25:01.555096   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.555104   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:25:01.555110   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:25:01.555163   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:25:01.595425   75464 cri.go:89] found id: ""
	I1204 21:25:01.595456   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.595478   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:25:01.595486   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:25:01.595553   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:25:01.634608   75464 cri.go:89] found id: ""
	I1204 21:25:01.634638   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.634648   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:25:01.634656   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:25:01.634721   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:25:01.668685   75464 cri.go:89] found id: ""
	I1204 21:25:01.668724   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.668737   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:25:01.668746   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:25:01.668810   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:25:01.701497   75464 cri.go:89] found id: ""
	I1204 21:25:01.701531   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.701543   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:25:01.701550   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:25:01.701612   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:25:01.735347   75464 cri.go:89] found id: ""
	I1204 21:25:01.735401   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.735413   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:25:01.735429   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:25:01.735448   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:25:01.785951   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:25:01.785994   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:25:01.800795   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:25:01.800822   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:25:01.878636   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:25:01.878663   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:25:01.878675   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:25:01.982526   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:25:01.982563   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1204 21:25:02.037006   75464 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1204 21:25:02.037075   75464 out.go:270] * 
	W1204 21:25:02.037160   75464 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1204 21:25:02.037181   75464 out.go:270] * 
	W1204 21:25:02.038380   75464 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 21:25:02.041871   75464 out.go:201] 
	W1204 21:25:02.042973   75464 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1204 21:25:02.043035   75464 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1204 21:25:02.043065   75464 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1204 21:25:02.044498   75464 out.go:201] 

                                                
                                                
** /stderr **
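For reference, the checks recommended in the kubeadm output above can be run by hand against this node. This is only a sketch, not part of the test run: it assumes shell access to the VM via `minikube ssh` using the profile name from this run, and it reuses the CRI-O socket path and commands exactly as quoted in the log.

    # open a shell on the failing node (profile name taken from this test)
    out/minikube-linux-amd64 -p old-k8s-version-082859 ssh

    # inside the VM: confirm whether the kubelet is running and why it exited
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet

    # list control-plane containers known to CRI-O, then inspect a failing one
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID   # substitute a real container ID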
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-082859 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
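The suggestion emitted at the end of the failed start above is to pass --extra-config=kubelet.cgroup-driver=systemd to minikube start. A retry along those lines, reusing the exact arguments of the failed command, would look roughly like this (a sketch only; whether it helps depends on the actual cause visible in 'journalctl -xeu kubelet'):

    out/minikube-linux-amd64 start -p old-k8s-version-082859 --memory=2200 --alsologtostderr \
      --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts \
      --keep-context=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
      --extra-config=kubelet.cgroup-driver=systemd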
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-082859 -n old-k8s-version-082859
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-082859 -n old-k8s-version-082859: exit status 2 (230.226438ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-082859 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-082859 logs -n 25: (1.490884957s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-272234 sudo                                  | bridge-272234                | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo                                  | bridge-272234                | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo find                             | bridge-272234                | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo crio                             | bridge-272234                | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-272234                                       | bridge-272234                | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	| start   | -p embed-certs-566991                                  | embed-certs-566991           | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p pause-998149                                        | pause-998149                 | jenkins | v1.34.0 | 04 Dec 24 21:08 UTC | 04 Dec 24 21:08 UTC |
	| delete  | -p                                                     | disable-driver-mounts-455559 | jenkins | v1.34.0 | 04 Dec 24 21:08 UTC | 04 Dec 24 21:08 UTC |
	|         | disable-driver-mounts-455559                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-439360 | jenkins | v1.34.0 | 04 Dec 24 21:08 UTC | 04 Dec 24 21:10 UTC |
	|         | default-k8s-diff-port-439360                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-534766             | no-preload-534766            | jenkins | v1.34.0 | 04 Dec 24 21:08 UTC | 04 Dec 24 21:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-534766                                   | no-preload-534766            | jenkins | v1.34.0 | 04 Dec 24 21:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-566991            | embed-certs-566991           | jenkins | v1.34.0 | 04 Dec 24 21:09 UTC | 04 Dec 24 21:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-566991                                  | embed-certs-566991           | jenkins | v1.34.0 | 04 Dec 24 21:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-439360  | default-k8s-diff-port-439360 | jenkins | v1.34.0 | 04 Dec 24 21:10 UTC | 04 Dec 24 21:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-439360 | jenkins | v1.34.0 | 04 Dec 24 21:10 UTC |                     |
	|         | default-k8s-diff-port-439360                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-082859        | old-k8s-version-082859       | jenkins | v1.34.0 | 04 Dec 24 21:10 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-534766                  | no-preload-534766            | jenkins | v1.34.0 | 04 Dec 24 21:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-534766                                   | no-preload-534766            | jenkins | v1.34.0 | 04 Dec 24 21:11 UTC | 04 Dec 24 21:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-566991                 | embed-certs-566991           | jenkins | v1.34.0 | 04 Dec 24 21:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-566991                                  | embed-certs-566991           | jenkins | v1.34.0 | 04 Dec 24 21:11 UTC | 04 Dec 24 21:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-082859                              | old-k8s-version-082859       | jenkins | v1.34.0 | 04 Dec 24 21:12 UTC | 04 Dec 24 21:12 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-082859             | old-k8s-version-082859       | jenkins | v1.34.0 | 04 Dec 24 21:12 UTC | 04 Dec 24 21:12 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-082859                              | old-k8s-version-082859       | jenkins | v1.34.0 | 04 Dec 24 21:12 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-439360       | default-k8s-diff-port-439360 | jenkins | v1.34.0 | 04 Dec 24 21:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-439360 | jenkins | v1.34.0 | 04 Dec 24 21:13 UTC | 04 Dec 24 21:22 UTC |
	|         | default-k8s-diff-port-439360                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 21:13:02
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 21:13:02.655619   75746 out.go:345] Setting OutFile to fd 1 ...
	I1204 21:13:02.655710   75746 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 21:13:02.655718   75746 out.go:358] Setting ErrFile to fd 2...
	I1204 21:13:02.655723   75746 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 21:13:02.655904   75746 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19985-10581/.minikube/bin
	I1204 21:13:02.656414   75746 out.go:352] Setting JSON to false
	I1204 21:13:02.657264   75746 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6933,"bootTime":1733339850,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 21:13:02.657344   75746 start.go:139] virtualization: kvm guest
	I1204 21:13:02.659898   75746 out.go:177] * [default-k8s-diff-port-439360] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1204 21:13:02.661012   75746 notify.go:220] Checking for updates...
	I1204 21:13:02.661028   75746 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 21:13:02.662162   75746 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 21:13:02.663271   75746 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:13:02.664514   75746 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 21:13:02.665529   75746 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1204 21:13:02.666701   75746 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 21:13:02.668263   75746 config.go:182] Loaded profile config "default-k8s-diff-port-439360": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:13:02.668646   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:13:02.668709   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:13:02.683257   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37479
	I1204 21:13:02.683722   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:13:02.684324   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:13:02.684360   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:13:02.684680   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:13:02.684851   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:13:02.685048   75746 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 21:13:02.685299   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:13:02.685328   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:13:02.699267   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40025
	I1204 21:13:02.699662   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:13:02.700044   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:13:02.700063   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:13:02.700339   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:13:02.700502   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:13:02.730706   75746 out.go:177] * Using the kvm2 driver based on existing profile
	I1204 21:13:02.731942   75746 start.go:297] selected driver: kvm2
	I1204 21:13:02.731957   75746 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-439360 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-439360 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.171 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:13:02.732071   75746 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 21:13:02.732753   75746 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 21:13:02.732853   75746 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19985-10581/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1204 21:13:02.748280   75746 install.go:137] /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1204 21:13:02.748697   75746 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 21:13:02.748732   75746 cni.go:84] Creating CNI manager for ""
	I1204 21:13:02.748788   75746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:13:02.748838   75746 start.go:340] cluster config:
	{Name:default-k8s-diff-port-439360 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-439360 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.171 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:13:02.748971   75746 iso.go:125] acquiring lock: {Name:mk5fb0f3f6da76e6cd812291a551e1592ef2c232 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 21:13:02.751358   75746 out.go:177] * Starting "default-k8s-diff-port-439360" primary control-plane node in "default-k8s-diff-port-439360" cluster
	I1204 21:13:03.539616   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:02.752513   75746 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 21:13:02.752549   75746 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1204 21:13:02.752560   75746 cache.go:56] Caching tarball of preloaded images
	I1204 21:13:02.752626   75746 preload.go:172] Found /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 21:13:02.752637   75746 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 21:13:02.752726   75746 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/config.json ...
	I1204 21:13:02.752901   75746 start.go:360] acquireMachinesLock for default-k8s-diff-port-439360: {Name:mkf124e8b45170ae95981b24944344de6899c5b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 21:13:09.623601   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:12.691589   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:18.771784   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:21.843699   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:27.923631   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:30.995665   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:37.075628   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:40.147824   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:46.227603   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:49.299635   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:55.379675   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:58.451727   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:04.531657   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:07.603570   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:13.683599   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:16.755604   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:22.835628   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:25.907600   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:31.987633   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:35.059714   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:41.139700   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:44.211695   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:50.291687   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:53.363678   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:59.443630   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:02.515651   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:08.595690   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:11.667672   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:17.747590   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:20.819699   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:26.899677   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:29.971649   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:36.051731   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:39.123728   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:45.203625   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:48.275712   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:54.355623   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:57.427671   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:16:03.507649   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:16:06.579624   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
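The repeated "no route to host" lines above come from libmachine probing the guest's SSH port in a loop while the no-preload-534766 VM is still unreachable. Below is a minimal Go sketch of that kind of reachability probe; only the address 192.168.61.174:22 is taken from the log, while the function name, interval, and timeout are illustrative assumptions and not minikube's actual implementation.

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForTCP dials addr until a connection succeeds or the overall timeout
// expires. Each failed dial is printed, mirroring the "Error dialing TCP"
// lines in the log above.
func waitForTCP(addr string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
		if err == nil {
			conn.Close()
			return nil // SSH port reachable; provisioning can continue
		}
		fmt.Printf("Error dialing TCP: %v\n", err)
		time.Sleep(interval)
	}
	return fmt.Errorf("timed out waiting for %s", addr)
}

func main() {
	// 192.168.61.174:22 is the guest address seen in the log lines above.
	if err := waitForTCP("192.168.61.174:22", 3*time.Second, 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}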
	I1204 21:16:09.584575   75137 start.go:364] duration metric: took 4m27.4731498s to acquireMachinesLock for "embed-certs-566991"
	I1204 21:16:09.584639   75137 start.go:96] Skipping create...Using existing machine configuration
	I1204 21:16:09.584651   75137 fix.go:54] fixHost starting: 
	I1204 21:16:09.584970   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:09.585018   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:09.600429   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33355
	I1204 21:16:09.600893   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:09.601299   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:09.601322   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:09.601748   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:09.601944   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:09.602098   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetState
	I1204 21:16:09.603776   75137 fix.go:112] recreateIfNeeded on embed-certs-566991: state=Stopped err=<nil>
	I1204 21:16:09.603821   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	W1204 21:16:09.603991   75137 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 21:16:09.605822   75137 out.go:177] * Restarting existing kvm2 VM for "embed-certs-566991" ...
	I1204 21:16:09.606942   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Start
	I1204 21:16:09.607117   75137 main.go:141] libmachine: (embed-certs-566991) Ensuring networks are active...
	I1204 21:16:09.607926   75137 main.go:141] libmachine: (embed-certs-566991) Ensuring network default is active
	I1204 21:16:09.608276   75137 main.go:141] libmachine: (embed-certs-566991) Ensuring network mk-embed-certs-566991 is active
	I1204 21:16:09.608593   75137 main.go:141] libmachine: (embed-certs-566991) Getting domain xml...
	I1204 21:16:09.609171   75137 main.go:141] libmachine: (embed-certs-566991) Creating domain...
	I1204 21:16:10.794377   75137 main.go:141] libmachine: (embed-certs-566991) Waiting to get IP...
	I1204 21:16:10.795237   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:10.795646   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:10.795708   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:10.795615   76397 retry.go:31] will retry after 263.432891ms: waiting for machine to come up
	I1204 21:16:11.061505   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:11.062003   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:11.062025   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:11.061954   76397 retry.go:31] will retry after 341.684416ms: waiting for machine to come up
	I1204 21:16:11.405560   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:11.405994   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:11.406017   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:11.405951   76397 retry.go:31] will retry after 341.63707ms: waiting for machine to come up
	I1204 21:16:11.749439   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:11.749826   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:11.749850   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:11.749778   76397 retry.go:31] will retry after 490.222458ms: waiting for machine to come up
	I1204 21:16:09.581932   75012 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 21:16:09.581966   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetMachineName
	I1204 21:16:09.582325   75012 buildroot.go:166] provisioning hostname "no-preload-534766"
	I1204 21:16:09.582349   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetMachineName
	I1204 21:16:09.582554   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:16:09.584435   75012 machine.go:96] duration metric: took 4m37.423343939s to provisionDockerMachine
	I1204 21:16:09.584470   75012 fix.go:56] duration metric: took 4m37.445106567s for fixHost
	I1204 21:16:09.584480   75012 start.go:83] releasing machines lock for "no-preload-534766", held for 4m37.445131562s
	W1204 21:16:09.584500   75012 start.go:714] error starting host: provision: host is not running
	W1204 21:16:09.584581   75012 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1204 21:16:09.584594   75012 start.go:729] Will try again in 5 seconds ...
	I1204 21:16:12.241487   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:12.241955   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:12.241989   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:12.241914   76397 retry.go:31] will retry after 627.236105ms: waiting for machine to come up
	I1204 21:16:12.870753   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:12.871242   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:12.871274   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:12.871189   76397 retry.go:31] will retry after 948.655869ms: waiting for machine to come up
	I1204 21:16:13.821128   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:13.821501   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:13.821531   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:13.821464   76397 retry.go:31] will retry after 864.328477ms: waiting for machine to come up
	I1204 21:16:14.686831   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:14.687290   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:14.687327   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:14.687226   76397 retry.go:31] will retry after 1.040036387s: waiting for machine to come up
	I1204 21:16:15.729503   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:15.729908   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:15.729938   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:15.729856   76397 retry.go:31] will retry after 1.509456429s: waiting for machine to come up
	I1204 21:16:14.587018   75012 start.go:360] acquireMachinesLock for no-preload-534766: {Name:mkf124e8b45170ae95981b24944344de6899c5b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 21:16:17.240459   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:17.240912   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:17.240936   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:17.240859   76397 retry.go:31] will retry after 2.13583357s: waiting for machine to come up
	I1204 21:16:19.379267   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:19.379766   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:19.379792   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:19.379718   76397 retry.go:31] will retry after 2.09795045s: waiting for machine to come up
	I1204 21:16:21.478897   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:21.479356   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:21.479410   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:21.479302   76397 retry.go:31] will retry after 2.903986335s: waiting for machine to come up
	I1204 21:16:24.386386   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:24.386732   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:24.386760   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:24.386707   76397 retry.go:31] will retry after 2.772485684s: waiting for machine to come up
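The retry.go lines in the block above ("will retry after 263.432891ms: waiting for machine to come up") reflect a jittered, growing backoff while the driver polls libvirt for the VM's DHCP lease. The sketch below only illustrates that pattern; the helper name, growth factor, and attempt count are assumptions and do not match minikube's real retry package.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or attempts are exhausted,
// sleeping a randomized, slowly growing delay between tries.
func retryWithBackoff(fn func() error, attempts int, base time.Duration) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2 // intervals grow roughly the way the log's do
	}
	return errors.New("machine never reported an IP address")
}

func main() {
	tries := 0
	err := retryWithBackoff(func() error {
		tries++
		if tries < 4 { // pretend the DHCP lease shows up on the fourth poll
			return errors.New("unable to find current IP address")
		}
		return nil
	}, 10, 250*time.Millisecond)
	fmt.Println("done, err =", err)
}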
	I1204 21:16:28.395920   75464 start.go:364] duration metric: took 4m6.982305139s to acquireMachinesLock for "old-k8s-version-082859"
	I1204 21:16:28.395992   75464 start.go:96] Skipping create...Using existing machine configuration
	I1204 21:16:28.396003   75464 fix.go:54] fixHost starting: 
	I1204 21:16:28.396456   75464 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:28.396521   75464 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:28.413833   75464 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32779
	I1204 21:16:28.414263   75464 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:28.414753   75464 main.go:141] libmachine: Using API Version  1
	I1204 21:16:28.414777   75464 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:28.415165   75464 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:28.415427   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:28.415603   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetState
	I1204 21:16:28.417090   75464 fix.go:112] recreateIfNeeded on old-k8s-version-082859: state=Stopped err=<nil>
	I1204 21:16:28.417125   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	W1204 21:16:28.417326   75464 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 21:16:28.419402   75464 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-082859" ...
	I1204 21:16:27.162685   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.163095   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has current primary IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.163114   75137 main.go:141] libmachine: (embed-certs-566991) Found IP for machine: 192.168.39.82
	I1204 21:16:27.163126   75137 main.go:141] libmachine: (embed-certs-566991) Reserving static IP address...
	I1204 21:16:27.163613   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "embed-certs-566991", mac: "52:54:00:98:21:6f", ip: "192.168.39.82"} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.163640   75137 main.go:141] libmachine: (embed-certs-566991) Reserved static IP address: 192.168.39.82
	I1204 21:16:27.163652   75137 main.go:141] libmachine: (embed-certs-566991) DBG | skip adding static IP to network mk-embed-certs-566991 - found existing host DHCP lease matching {name: "embed-certs-566991", mac: "52:54:00:98:21:6f", ip: "192.168.39.82"}
	I1204 21:16:27.163663   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Getting to WaitForSSH function...
	I1204 21:16:27.163670   75137 main.go:141] libmachine: (embed-certs-566991) Waiting for SSH to be available...
	I1204 21:16:27.165700   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.166004   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.166040   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.166149   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Using SSH client type: external
	I1204 21:16:27.166173   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa (-rw-------)
	I1204 21:16:27.166209   75137 main.go:141] libmachine: (embed-certs-566991) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.82 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 21:16:27.166223   75137 main.go:141] libmachine: (embed-certs-566991) DBG | About to run SSH command:
	I1204 21:16:27.166232   75137 main.go:141] libmachine: (embed-certs-566991) DBG | exit 0
	I1204 21:16:27.287234   75137 main.go:141] libmachine: (embed-certs-566991) DBG | SSH cmd err, output: <nil>: 
	I1204 21:16:27.287599   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetConfigRaw
	I1204 21:16:27.288265   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetIP
	I1204 21:16:27.290959   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.291282   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.291308   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.291606   75137 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/config.json ...
	I1204 21:16:27.291794   75137 machine.go:93] provisionDockerMachine start ...
	I1204 21:16:27.291812   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:27.292046   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:27.294179   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.294494   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.294520   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.294637   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:27.294811   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.294971   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.295101   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:27.295267   75137 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:27.295461   75137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1204 21:16:27.295472   75137 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 21:16:27.395404   75137 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 21:16:27.395434   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetMachineName
	I1204 21:16:27.395738   75137 buildroot.go:166] provisioning hostname "embed-certs-566991"
	I1204 21:16:27.395764   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetMachineName
	I1204 21:16:27.395940   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:27.398637   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.398982   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.399008   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.399159   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:27.399332   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.399565   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.399702   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:27.399913   75137 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:27.400087   75137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1204 21:16:27.400099   75137 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-566991 && echo "embed-certs-566991" | sudo tee /etc/hostname
	I1204 21:16:27.513921   75137 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-566991
	
	I1204 21:16:27.513960   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:27.516595   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.516932   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.516955   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.517112   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:27.517313   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.517440   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.517554   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:27.517671   75137 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:27.517883   75137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1204 21:16:27.517900   75137 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-566991' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-566991/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-566991' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 21:16:27.627795   75137 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 21:16:27.627832   75137 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 21:16:27.627852   75137 buildroot.go:174] setting up certificates
	I1204 21:16:27.627861   75137 provision.go:84] configureAuth start
	I1204 21:16:27.627870   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetMachineName
	I1204 21:16:27.628196   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetIP
	I1204 21:16:27.630873   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.631211   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.631236   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.631447   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:27.633608   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.633935   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.633954   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.634104   75137 provision.go:143] copyHostCerts
	I1204 21:16:27.634160   75137 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 21:16:27.634171   75137 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 21:16:27.634238   75137 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 21:16:27.634328   75137 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 21:16:27.634337   75137 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 21:16:27.634359   75137 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 21:16:27.634416   75137 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 21:16:27.634427   75137 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 21:16:27.634457   75137 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 21:16:27.634525   75137 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.embed-certs-566991 san=[127.0.0.1 192.168.39.82 embed-certs-566991 localhost minikube]
	I1204 21:16:27.824445   75137 provision.go:177] copyRemoteCerts
	I1204 21:16:27.824535   75137 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 21:16:27.824576   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:27.827387   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.827703   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.827738   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.827937   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:27.828104   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.828282   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:27.828386   75137 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:16:27.908710   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 21:16:27.930611   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1204 21:16:27.951287   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 21:16:27.971650   75137 provision.go:87] duration metric: took 343.766934ms to configureAuth
	I1204 21:16:27.971684   75137 buildroot.go:189] setting minikube options for container-runtime
	I1204 21:16:27.971861   75137 config.go:182] Loaded profile config "embed-certs-566991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:16:27.971984   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:27.974579   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.974924   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.974964   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.975127   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:27.975316   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.975486   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.975617   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:27.975771   75137 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:27.975962   75137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1204 21:16:27.975985   75137 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 21:16:28.177596   75137 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 21:16:28.177627   75137 machine.go:96] duration metric: took 885.820166ms to provisionDockerMachine
	I1204 21:16:28.177643   75137 start.go:293] postStartSetup for "embed-certs-566991" (driver="kvm2")
	I1204 21:16:28.177657   75137 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 21:16:28.177681   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:28.177998   75137 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 21:16:28.178026   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:28.180461   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.180777   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:28.180809   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.180936   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:28.181122   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:28.181292   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:28.181430   75137 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:16:28.260618   75137 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 21:16:28.264349   75137 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 21:16:28.264371   75137 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 21:16:28.264448   75137 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 21:16:28.264543   75137 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 21:16:28.264657   75137 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 21:16:28.272916   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:16:28.294517   75137 start.go:296] duration metric: took 116.858398ms for postStartSetup
	I1204 21:16:28.294564   75137 fix.go:56] duration metric: took 18.709913535s for fixHost
	I1204 21:16:28.294589   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:28.297320   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.297628   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:28.297661   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.297869   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:28.298067   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:28.298219   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:28.298346   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:28.298544   75137 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:28.298705   75137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1204 21:16:28.298714   75137 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 21:16:28.395722   75137 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733346988.368807705
	
	I1204 21:16:28.395745   75137 fix.go:216] guest clock: 1733346988.368807705
	I1204 21:16:28.395755   75137 fix.go:229] Guest: 2024-12-04 21:16:28.368807705 +0000 UTC Remote: 2024-12-04 21:16:28.294570064 +0000 UTC m=+286.315482748 (delta=74.237641ms)
	I1204 21:16:28.395781   75137 fix.go:200] guest clock delta is within tolerance: 74.237641ms
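The fix.go lines above compare the guest clock, read over SSH with date +%s.%N, against the host's reference time and accept the result because the ~74ms delta is inside the allowed skew. A small worked example of that arithmetic follows; the timestamps are copied from the log, while the 2s tolerance is an assumed value for illustration, not minikube's actual constant.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the log: the guest reported 1733346988.368807705 via
	// `date +%s.%N`; the host-side reference time was 21:16:28.294570064 UTC.
	guest := time.Unix(1733346988, 368807705)
	host := time.Date(2024, 12, 4, 21, 16, 28, 294570064, time.UTC)

	delta := guest.Sub(host)
	const tolerance = 2 * time.Second // assumed skew tolerance for this sketch

	within := delta < tolerance && delta > -tolerance
	fmt.Printf("guest clock delta is %v; within tolerance: %v\n", delta, within)
	// prints: guest clock delta is 74.237641ms; within tolerance: true
}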
	I1204 21:16:28.395788   75137 start.go:83] releasing machines lock for "embed-certs-566991", held for 18.811169167s
	I1204 21:16:28.395828   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:28.396146   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetIP
	I1204 21:16:28.398895   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.399273   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:28.399315   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.399472   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:28.399971   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:28.400138   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:28.400232   75137 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 21:16:28.400282   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:28.400303   75137 ssh_runner.go:195] Run: cat /version.json
	I1204 21:16:28.400325   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:28.402965   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.402990   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.403405   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:28.403434   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.403460   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:28.403475   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.403571   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:28.403643   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:28.403782   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:28.403872   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:28.403938   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:28.404022   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:28.404173   75137 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:16:28.404187   75137 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:16:28.498689   75137 ssh_runner.go:195] Run: systemctl --version
	I1204 21:16:28.503855   75137 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 21:16:28.639322   75137 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 21:16:28.645881   75137 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 21:16:28.645979   75137 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 21:16:28.662196   75137 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 21:16:28.662224   75137 start.go:495] detecting cgroup driver to use...
	I1204 21:16:28.662299   75137 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 21:16:28.679458   75137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 21:16:28.693004   75137 docker.go:217] disabling cri-docker service (if available) ...
	I1204 21:16:28.693078   75137 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 21:16:28.706303   75137 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 21:16:28.719763   75137 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 21:16:28.831131   75137 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 21:16:28.980878   75137 docker.go:233] disabling docker service ...
	I1204 21:16:28.980952   75137 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 21:16:28.995057   75137 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 21:16:29.007885   75137 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 21:16:29.140636   75137 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 21:16:29.281876   75137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 21:16:29.297602   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 21:16:29.314375   75137 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 21:16:29.314444   75137 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:29.324326   75137 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 21:16:29.324381   75137 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:29.333895   75137 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:29.343269   75137 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:29.352608   75137 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 21:16:29.363227   75137 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:29.372736   75137 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:29.389585   75137 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:29.399137   75137 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 21:16:29.407800   75137 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 21:16:29.407859   75137 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 21:16:29.421492   75137 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 21:16:29.431191   75137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:16:29.531043   75137 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 21:16:29.634995   75137 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 21:16:29.635092   75137 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 21:16:29.640185   75137 start.go:563] Will wait 60s for crictl version
	I1204 21:16:29.640249   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:16:29.644117   75137 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 21:16:29.683424   75137 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 21:16:29.683505   75137 ssh_runner.go:195] Run: crio --version
	I1204 21:16:29.709015   75137 ssh_runner.go:195] Run: crio --version
	I1204 21:16:29.737931   75137 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 21:16:28.420626   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .Start
	I1204 21:16:28.420792   75464 main.go:141] libmachine: (old-k8s-version-082859) Ensuring networks are active...
	I1204 21:16:28.421532   75464 main.go:141] libmachine: (old-k8s-version-082859) Ensuring network default is active
	I1204 21:16:28.421902   75464 main.go:141] libmachine: (old-k8s-version-082859) Ensuring network mk-old-k8s-version-082859 is active
	I1204 21:16:28.422289   75464 main.go:141] libmachine: (old-k8s-version-082859) Getting domain xml...
	I1204 21:16:28.422943   75464 main.go:141] libmachine: (old-k8s-version-082859) Creating domain...
	I1204 21:16:29.678419   75464 main.go:141] libmachine: (old-k8s-version-082859) Waiting to get IP...
	I1204 21:16:29.679445   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:29.679839   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:29.679884   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:29.679807   76539 retry.go:31] will retry after 289.179197ms: waiting for machine to come up
	I1204 21:16:29.971185   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:29.971736   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:29.971767   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:29.971681   76539 retry.go:31] will retry after 303.202104ms: waiting for machine to come up
	I1204 21:16:30.277151   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:30.277652   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:30.277681   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:30.277613   76539 retry.go:31] will retry after 410.628355ms: waiting for machine to come up
	I1204 21:16:30.690254   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:30.690792   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:30.690822   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:30.690750   76539 retry.go:31] will retry after 505.05844ms: waiting for machine to come up
	I1204 21:16:31.197454   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:31.197914   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:31.197943   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:31.197868   76539 retry.go:31] will retry after 592.512014ms: waiting for machine to come up
	I1204 21:16:29.739276   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetIP
	I1204 21:16:29.742209   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:29.742581   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:29.742611   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:29.742817   75137 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1204 21:16:29.746557   75137 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:16:29.757975   75137 kubeadm.go:883] updating cluster {Name:embed-certs-566991 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-566991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.82 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 21:16:29.758110   75137 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 21:16:29.758153   75137 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:16:29.790957   75137 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1204 21:16:29.791029   75137 ssh_runner.go:195] Run: which lz4
	I1204 21:16:29.794873   75137 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1204 21:16:29.798613   75137 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1204 21:16:29.798642   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1204 21:16:31.060492   75137 crio.go:462] duration metric: took 1.265651412s to copy over tarball
	I1204 21:16:31.060599   75137 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1204 21:16:31.791677   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:31.792193   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:31.792218   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:31.792126   76539 retry.go:31] will retry after 898.531247ms: waiting for machine to come up
	I1204 21:16:32.692886   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:32.693288   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:32.693309   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:32.693246   76539 retry.go:31] will retry after 832.069841ms: waiting for machine to come up
	I1204 21:16:33.526732   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:33.527291   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:33.527324   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:33.527254   76539 retry.go:31] will retry after 962.847408ms: waiting for machine to come up
	I1204 21:16:34.491553   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:34.492032   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:34.492062   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:34.491983   76539 retry.go:31] will retry after 1.207785601s: waiting for machine to come up
	I1204 21:16:35.701559   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:35.702070   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:35.702096   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:35.702031   76539 retry.go:31] will retry after 1.685825115s: waiting for machine to come up
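The repeated "waiting for machine to come up" lines above are the KVM driver polling the libvirt network for the domain's DHCP lease, sleeping a little longer between attempts each time. A minimal sketch of that retry pattern, assuming a hypothetical lookupIP helper that queries the lease table (not the driver's actual code):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // lookupIP is a hypothetical stand-in for querying the libvirt DHCP
    // lease table for the domain's MAC address.
    func lookupIP(mac string) (string, error) {
        return "", errors.New("no lease yet")
    }

    // waitForIP retries lookupIP with a growing delay, roughly like the
    // "will retry after ..." lines in the log above.
    func waitForIP(mac string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 300 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(mac); err == nil {
                return ip, nil
            }
            fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
            time.Sleep(delay)
            delay += delay / 2 // back off a bit more each round
        }
        return "", errors.New("timed out waiting for an IP")
    }

    func main() {
        ip, err := waitForIP("52:54:00:30:6e:ae", 5*time.Second)
        fmt.Println(ip, err)
    }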
	I1204 21:16:33.200389   75137 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.139761453s)
	I1204 21:16:33.200414   75137 crio.go:469] duration metric: took 2.139886465s to extract the tarball
	I1204 21:16:33.200421   75137 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1204 21:16:33.235706   75137 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:16:33.275780   75137 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 21:16:33.275803   75137 cache_images.go:84] Images are preloaded, skipping loading
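The crio.go:510/514 lines above decide whether to unpack the preload tarball by listing the runtime's images and looking for the expected kube-apiserver tag. A rough sketch of that check, assuming crictl is on PATH and that its JSON output has the usual {"images":[{"repoTags":[...]}]} shape:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    // hasImage reports whether the CRI runtime already knows the given tag.
    func hasImage(tag string) (bool, error) {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            return false, err
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            return false, err
        }
        for _, img := range list.Images {
            for _, t := range img.RepoTags {
                if t == tag {
                    return true, nil
                }
            }
        }
        return false, nil
    }

    func main() {
        ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.2")
        fmt.Println(ok, err) // false => extract the preload tarball first
    }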
	I1204 21:16:33.275811   75137 kubeadm.go:934] updating node { 192.168.39.82 8443 v1.31.2 crio true true} ...
	I1204 21:16:33.275916   75137 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-566991 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.82
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-566991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 21:16:33.276001   75137 ssh_runner.go:195] Run: crio config
	I1204 21:16:33.330445   75137 cni.go:84] Creating CNI manager for ""
	I1204 21:16:33.330470   75137 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:16:33.330479   75137 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 21:16:33.330502   75137 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.82 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-566991 NodeName:embed-certs-566991 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.82"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.82 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 21:16:33.330663   75137 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.82
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-566991"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.82"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.82"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1204 21:16:33.330730   75137 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 21:16:33.340505   75137 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 21:16:33.340586   75137 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 21:16:33.349589   75137 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1204 21:16:33.365156   75137 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 21:16:33.380757   75137 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
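The kubeadm.yaml generated above is a single file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. A small sketch of how such a multi-document file can be split and sanity-checked with gopkg.in/yaml.v3 (an assumption for illustration; minikube's own loader may differ):

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            // Each document carries its own kind, e.g. InitConfiguration.
            fmt.Println(doc["kind"], doc["apiVersion"])
        }
    }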
	I1204 21:16:33.396851   75137 ssh_runner.go:195] Run: grep 192.168.39.82	control-plane.minikube.internal$ /etc/hosts
	I1204 21:16:33.400473   75137 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.82	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
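The /etc/hosts edit above strips any existing control-plane.minikube.internal line and appends a fresh one, so the step stays idempotent across restarts. A Go sketch of the same strip-and-append idea (a hypothetical helper; the real code runs the shell one-liner shown in the log over SSH):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry drops any line ending in "\t<host>" and appends
    // "ip\thost", mirroring the grep -v / echo pipeline above.
    func ensureHostsEntry(contents, ip, host string) string {
        var kept []string
        for _, line := range strings.Split(contents, "\n") {
            if strings.HasSuffix(line, "\t"+host) || line == "" {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        old, _ := os.ReadFile("/etc/hosts")
        fmt.Print(ensureHostsEntry(string(old), "192.168.39.82", "control-plane.minikube.internal"))
    }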
	I1204 21:16:33.411670   75137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:16:33.543788   75137 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:16:33.564105   75137 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991 for IP: 192.168.39.82
	I1204 21:16:33.564138   75137 certs.go:194] generating shared ca certs ...
	I1204 21:16:33.564158   75137 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:16:33.564343   75137 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 21:16:33.564425   75137 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 21:16:33.564443   75137 certs.go:256] generating profile certs ...
	I1204 21:16:33.564570   75137 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/client.key
	I1204 21:16:33.564668   75137 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/apiserver.key.ba71006c
	I1204 21:16:33.564724   75137 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/proxy-client.key
	I1204 21:16:33.564892   75137 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 21:16:33.564945   75137 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 21:16:33.564972   75137 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 21:16:33.565019   75137 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 21:16:33.565052   75137 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 21:16:33.565087   75137 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 21:16:33.565145   75137 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:16:33.566045   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 21:16:33.608433   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 21:16:33.635211   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 21:16:33.672472   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 21:16:33.701021   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1204 21:16:33.731665   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 21:16:33.756414   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 21:16:33.778799   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 21:16:33.801308   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 21:16:33.822986   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 21:16:33.844820   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 21:16:33.866558   75137 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 21:16:33.881830   75137 ssh_runner.go:195] Run: openssl version
	I1204 21:16:33.887334   75137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 21:16:33.897261   75137 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:16:33.901411   75137 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:16:33.901479   75137 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:16:33.906997   75137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 21:16:33.916799   75137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 21:16:33.926687   75137 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 21:16:33.930807   75137 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 21:16:33.930859   75137 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 21:16:33.943622   75137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 21:16:33.958682   75137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 21:16:33.972391   75137 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 21:16:33.977777   75137 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 21:16:33.977822   75137 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 21:16:33.984628   75137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 21:16:33.994531   75137 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 21:16:33.998695   75137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 21:16:34.004299   75137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 21:16:34.009688   75137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 21:16:34.015197   75137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 21:16:34.020625   75137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 21:16:34.025987   75137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
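Each "openssl x509 -checkend 86400" run above simply asks whether the certificate expires within the next 24 hours; a non-zero exit is what would trigger regeneration. The same check in Go with crypto/x509, as a sketch rather than minikube's implementation:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM-encoded certificate at path
    // expires before now+d, mirroring `openssl x509 -checkend`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return cert.NotAfter.Before(time.Now().Add(d)), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(soon, err)
    }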
	I1204 21:16:34.031435   75137 kubeadm.go:392] StartCluster: {Name:embed-certs-566991 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-566991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.82 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:16:34.031517   75137 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 21:16:34.031567   75137 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:16:34.067450   75137 cri.go:89] found id: ""
	I1204 21:16:34.067550   75137 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 21:16:34.077454   75137 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1204 21:16:34.077486   75137 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1204 21:16:34.077536   75137 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1204 21:16:34.086795   75137 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1204 21:16:34.087776   75137 kubeconfig.go:125] found "embed-certs-566991" server: "https://192.168.39.82:8443"
	I1204 21:16:34.089769   75137 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1204 21:16:34.098751   75137 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.82
	I1204 21:16:34.098784   75137 kubeadm.go:1160] stopping kube-system containers ...
	I1204 21:16:34.098798   75137 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1204 21:16:34.098853   75137 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:16:34.138445   75137 cri.go:89] found id: ""
	I1204 21:16:34.138523   75137 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1204 21:16:34.155890   75137 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:16:34.165568   75137 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:16:34.165596   75137 kubeadm.go:157] found existing configuration files:
	
	I1204 21:16:34.165647   75137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:16:34.174688   75137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:16:34.174758   75137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:16:34.183835   75137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:16:34.192637   75137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:16:34.192690   75137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:16:34.201663   75137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:16:34.210254   75137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:16:34.210297   75137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:16:34.219235   75137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:16:34.227890   75137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:16:34.227972   75137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:16:34.236954   75137 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:16:34.246061   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:16:34.352189   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:16:35.133652   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:16:35.320296   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:16:35.384361   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:16:35.458221   75137 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:16:35.458352   75137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:16:35.959480   75137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:16:36.459120   75137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:16:36.959170   75137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:16:37.458423   75137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:16:37.488815   75137 api_server.go:72] duration metric: took 2.030596307s to wait for apiserver process to appear ...
	I1204 21:16:37.488850   75137 api_server.go:88] waiting for apiserver healthz status ...
	I1204 21:16:37.488875   75137 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I1204 21:16:37.489349   75137 api_server.go:269] stopped: https://192.168.39.82:8443/healthz: Get "https://192.168.39.82:8443/healthz": dial tcp 192.168.39.82:8443: connect: connection refused
	I1204 21:16:37.990012   75137 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I1204 21:16:39.696011   75137 api_server.go:279] https://192.168.39.82:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1204 21:16:39.696060   75137 api_server.go:103] status: https://192.168.39.82:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1204 21:16:39.696077   75137 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I1204 21:16:39.705288   75137 api_server.go:279] https://192.168.39.82:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1204 21:16:39.705322   75137 api_server.go:103] status: https://192.168.39.82:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1204 21:16:39.989707   75137 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I1204 21:16:39.993934   75137 api_server.go:279] https://192.168.39.82:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:16:39.993959   75137 api_server.go:103] status: https://192.168.39.82:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:16:40.489545   75137 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I1204 21:16:40.494002   75137 api_server.go:279] https://192.168.39.82:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:16:40.494033   75137 api_server.go:103] status: https://192.168.39.82:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:16:40.989641   75137 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I1204 21:16:40.998171   75137 api_server.go:279] https://192.168.39.82:8443/healthz returned 200:
	ok
	I1204 21:16:41.006208   75137 api_server.go:141] control plane version: v1.31.2
	I1204 21:16:41.006238   75137 api_server.go:131] duration metric: took 3.517379108s to wait for apiserver health ...
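The healthz probing above keeps hitting https://192.168.39.82:8443/healthz roughly every 500ms, treating connection refused, 403 and 500 responses as "not ready yet" and stopping only once a 200 with body "ok" comes back. A stripped-down version of that loop (a sketch only: TLS verification is disabled here, whereas the real client authenticates against the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    return nil
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        fmt.Println(waitForHealthz("https://192.168.39.82:8443/healthz", 4*time.Minute))
    }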
	I1204 21:16:41.006250   75137 cni.go:84] Creating CNI manager for ""
	I1204 21:16:41.006259   75137 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:16:41.008031   75137 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 21:16:37.390104   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:37.390474   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:37.390499   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:37.390433   76539 retry.go:31] will retry after 1.755395869s: waiting for machine to come up
	I1204 21:16:39.148189   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:39.148723   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:39.148754   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:39.148694   76539 retry.go:31] will retry after 2.645343215s: waiting for machine to come up
	I1204 21:16:41.009338   75137 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 21:16:41.026475   75137 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1204 21:16:41.051888   75137 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 21:16:41.064813   75137 system_pods.go:59] 8 kube-system pods found
	I1204 21:16:41.064859   75137 system_pods.go:61] "coredns-7c65d6cfc9-ct5xn" [be113b96-b21f-4fd5-8cd9-11b149a0a838] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1204 21:16:41.064870   75137 system_pods.go:61] "etcd-embed-certs-566991" [23603883-2c42-48ff-95f5-d58f04bab630] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1204 21:16:41.064880   75137 system_pods.go:61] "kube-apiserver-embed-certs-566991" [880279d0-9c57-44b1-b223-cea07fc8552e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1204 21:16:41.064887   75137 system_pods.go:61] "kube-controller-manager-embed-certs-566991" [1512be05-cbf1-48ca-a0a5-db1e320040e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1204 21:16:41.064893   75137 system_pods.go:61] "kube-proxy-4fv72" [22b84591-6767-4414-9869-9d89206a03f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1204 21:16:41.064898   75137 system_pods.go:61] "kube-scheduler-embed-certs-566991" [1eca2a77-0f2a-4d94-992e-22acf8f54649] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1204 21:16:41.064910   75137 system_pods.go:61] "metrics-server-6867b74b74-9vlcd" [1acb08f3-e403-458d-b3e2-e32c07da6afb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:16:41.064922   75137 system_pods.go:61] "storage-provisioner" [f8acdb07-16e7-457f-81b8-85416b849890] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1204 21:16:41.064930   75137 system_pods.go:74] duration metric: took 13.019489ms to wait for pod list to return data ...
	I1204 21:16:41.064944   75137 node_conditions.go:102] verifying NodePressure condition ...
	I1204 21:16:41.068574   75137 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 21:16:41.068607   75137 node_conditions.go:123] node cpu capacity is 2
	I1204 21:16:41.068623   75137 node_conditions.go:105] duration metric: took 3.673752ms to run NodePressure ...
	I1204 21:16:41.068644   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:16:41.356054   75137 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1204 21:16:41.359997   75137 kubeadm.go:739] kubelet initialised
	I1204 21:16:41.360018   75137 kubeadm.go:740] duration metric: took 3.942716ms waiting for restarted kubelet to initialise ...
	I1204 21:16:41.360026   75137 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:16:41.365945   75137 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:41.370858   75137 pod_ready.go:98] node "embed-certs-566991" hosting pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.370886   75137 pod_ready.go:82] duration metric: took 4.912525ms for pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace to be "Ready" ...
	E1204 21:16:41.370904   75137 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-566991" hosting pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.370913   75137 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:41.376666   75137 pod_ready.go:98] node "embed-certs-566991" hosting pod "etcd-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.376689   75137 pod_ready.go:82] duration metric: took 5.763328ms for pod "etcd-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	E1204 21:16:41.376698   75137 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-566991" hosting pod "etcd-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.376705   75137 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:41.381261   75137 pod_ready.go:98] node "embed-certs-566991" hosting pod "kube-apiserver-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.381285   75137 pod_ready.go:82] duration metric: took 4.57138ms for pod "kube-apiserver-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	E1204 21:16:41.381296   75137 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-566991" hosting pod "kube-apiserver-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.381305   75137 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:41.455155   75137 pod_ready.go:98] node "embed-certs-566991" hosting pod "kube-controller-manager-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.455195   75137 pod_ready.go:82] duration metric: took 73.873767ms for pod "kube-controller-manager-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	E1204 21:16:41.455208   75137 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-566991" hosting pod "kube-controller-manager-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.455217   75137 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4fv72" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:41.854723   75137 pod_ready.go:98] node "embed-certs-566991" hosting pod "kube-proxy-4fv72" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.854759   75137 pod_ready.go:82] duration metric: took 399.531662ms for pod "kube-proxy-4fv72" in "kube-system" namespace to be "Ready" ...
	E1204 21:16:41.854773   75137 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-566991" hosting pod "kube-proxy-4fv72" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.854782   75137 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:42.255217   75137 pod_ready.go:98] node "embed-certs-566991" hosting pod "kube-scheduler-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:42.255242   75137 pod_ready.go:82] duration metric: took 400.451937ms for pod "kube-scheduler-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	E1204 21:16:42.255254   75137 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-566991" hosting pod "kube-scheduler-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:42.255263   75137 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:42.655193   75137 pod_ready.go:98] node "embed-certs-566991" hosting pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:42.655222   75137 pod_ready.go:82] duration metric: took 399.948182ms for pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace to be "Ready" ...
	E1204 21:16:42.655234   75137 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-566991" hosting pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:42.655244   75137 pod_ready.go:39] duration metric: took 1.295209634s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
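The pod_ready.go waits above check each system-critical pod for the Ready condition but skip ahead whenever the node itself is not Ready, which is why every pod is "skipped" in this run. A compact client-go sketch of the per-pod check (assumes the kubeconfig path from this report; this is an illustration, not the helper minikube uses):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19985-10581/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("%s ready=%v\n", p.Name, isPodReady(&p))
        }
    }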
	I1204 21:16:42.655263   75137 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 21:16:42.666489   75137 ops.go:34] apiserver oom_adj: -16
	I1204 21:16:42.666504   75137 kubeadm.go:597] duration metric: took 8.589012522s to restartPrimaryControlPlane
	I1204 21:16:42.666512   75137 kubeadm.go:394] duration metric: took 8.635083145s to StartCluster
	I1204 21:16:42.666526   75137 settings.go:142] acquiring lock: {Name:mk51df5708ef0b8fe125ead566b8d3e857234e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:16:42.666587   75137 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:16:42.668175   75137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/kubeconfig: {Name:mk338cb7deb77a607d0c199d94a556bdfd19bef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:16:42.668388   75137 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.82 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 21:16:42.668451   75137 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 21:16:42.668548   75137 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-566991"
	I1204 21:16:42.668569   75137 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-566991"
	W1204 21:16:42.668576   75137 addons.go:243] addon storage-provisioner should already be in state true
	I1204 21:16:42.668605   75137 host.go:66] Checking if "embed-certs-566991" exists ...
	I1204 21:16:42.668611   75137 addons.go:69] Setting default-storageclass=true in profile "embed-certs-566991"
	I1204 21:16:42.668628   75137 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-566991"
	I1204 21:16:42.668661   75137 config.go:182] Loaded profile config "embed-certs-566991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:16:42.668675   75137 addons.go:69] Setting metrics-server=true in profile "embed-certs-566991"
	I1204 21:16:42.668719   75137 addons.go:234] Setting addon metrics-server=true in "embed-certs-566991"
	W1204 21:16:42.668738   75137 addons.go:243] addon metrics-server should already be in state true
	I1204 21:16:42.668796   75137 host.go:66] Checking if "embed-certs-566991" exists ...
	I1204 21:16:42.669037   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:42.669094   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:42.669037   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:42.669158   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:42.669169   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:42.669210   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:42.671592   75137 out.go:177] * Verifying Kubernetes components...
	I1204 21:16:42.673134   75137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:16:42.684920   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43467
	I1204 21:16:42.684939   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35079
	I1204 21:16:42.685084   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46109
	I1204 21:16:42.685298   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:42.685386   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:42.685791   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:42.685810   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:42.685905   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:42.685926   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:42.686119   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:42.686297   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:42.686401   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetState
	I1204 21:16:42.686833   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:42.686880   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:42.687004   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:42.687527   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:42.687545   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:42.687890   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:42.688475   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:42.688522   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:42.689348   75137 addons.go:234] Setting addon default-storageclass=true in "embed-certs-566991"
	W1204 21:16:42.689365   75137 addons.go:243] addon default-storageclass should already be in state true
	I1204 21:16:42.689385   75137 host.go:66] Checking if "embed-certs-566991" exists ...
	I1204 21:16:42.689647   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:42.689682   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:42.702175   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33089
	I1204 21:16:42.702672   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:42.703170   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:42.703188   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:42.703226   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38195
	I1204 21:16:42.703537   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:42.703674   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:42.703716   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetState
	I1204 21:16:42.704271   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:42.704295   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:42.704612   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:42.705178   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:42.705218   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:42.705552   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:42.707473   75137 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1204 21:16:42.707479   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33249
	I1204 21:16:42.707808   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:42.708177   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:42.708192   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:42.708551   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:42.708692   75137 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1204 21:16:42.708703   75137 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1204 21:16:42.708713   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetState
	I1204 21:16:42.708714   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:42.710474   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:42.711964   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:42.712040   75137 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:16:42.712386   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:42.712409   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:42.712558   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:42.712726   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:42.712867   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:42.713010   75137 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:16:42.713257   75137 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:16:42.713268   75137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 21:16:42.713279   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:42.715855   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:42.716296   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:42.716325   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:42.716472   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:42.716632   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:42.716744   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:42.716860   75137 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:16:42.727365   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40443
	I1204 21:16:42.727830   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:42.728302   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:42.728330   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:42.728651   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:42.728838   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetState
	I1204 21:16:42.730408   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:42.730603   75137 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 21:16:42.730617   75137 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 21:16:42.730630   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:42.733179   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:42.733523   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:42.733550   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:42.733695   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:42.733846   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:42.733991   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:42.734105   75137 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:16:42.871601   75137 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:16:42.889651   75137 node_ready.go:35] waiting up to 6m0s for node "embed-certs-566991" to be "Ready" ...
	I1204 21:16:43.016150   75137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:16:43.017983   75137 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1204 21:16:43.018006   75137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1204 21:16:43.048666   75137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 21:16:43.061060   75137 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1204 21:16:43.061089   75137 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1204 21:16:43.105294   75137 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 21:16:43.105320   75137 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1204 21:16:43.175330   75137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 21:16:44.324823   75137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.276121269s)
	I1204 21:16:44.324881   75137 main.go:141] libmachine: Making call to close driver server
	I1204 21:16:44.324889   75137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.308706273s)
	I1204 21:16:44.324893   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:16:44.324908   75137 main.go:141] libmachine: Making call to close driver server
	I1204 21:16:44.324922   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:16:44.325213   75137 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:16:44.325264   75137 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:16:44.325289   75137 main.go:141] libmachine: Making call to close driver server
	I1204 21:16:44.325272   75137 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:16:44.325297   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:16:44.325304   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:16:44.325302   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:16:44.325381   75137 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:16:44.325409   75137 main.go:141] libmachine: Making call to close driver server
	I1204 21:16:44.325417   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:16:44.325539   75137 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:16:44.325552   75137 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:16:44.325574   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:16:44.325751   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:16:44.325792   75137 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:16:44.325813   75137 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:16:44.331866   75137 main.go:141] libmachine: Making call to close driver server
	I1204 21:16:44.331881   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:16:44.332102   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:16:44.332139   75137 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:16:44.332149   75137 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:16:44.398251   75137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.222883924s)
	I1204 21:16:44.398300   75137 main.go:141] libmachine: Making call to close driver server
	I1204 21:16:44.398312   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:16:44.398563   75137 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:16:44.398583   75137 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:16:44.398590   75137 main.go:141] libmachine: Making call to close driver server
	I1204 21:16:44.398597   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:16:44.398606   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:16:44.398855   75137 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:16:44.398878   75137 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:16:44.398888   75137 addons.go:475] Verifying addon metrics-server=true in "embed-certs-566991"
	I1204 21:16:44.398889   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:16:44.400887   75137 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
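	(Not part of the harness output: a minimal sketch of how the same addon state could be checked by hand from the host, assuming kubectl is installed and minikube created a kubeconfig context named after the embed-certs-566991 profile; the metrics-server label selector is an assumption about the addon manifests.)
		kubectl --context embed-certs-566991 -n kube-system rollout status deployment/metrics-server
		kubectl --context embed-certs-566991 -n kube-system get pods -l k8s-app=metrics-server
		kubectl --context embed-certs-566991 get storageclass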
	I1204 21:16:41.796452   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:41.796909   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:41.796943   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:41.796881   76539 retry.go:31] will retry after 2.938505727s: waiting for machine to come up
	I1204 21:16:44.737247   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:44.737772   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:44.737796   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:44.737726   76539 retry.go:31] will retry after 5.554286056s: waiting for machine to come up
	I1204 21:16:44.402265   75137 addons.go:510] duration metric: took 1.733822331s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1204 21:16:44.894002   75137 node_ready.go:53] node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:50.293115   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.293594   75464 main.go:141] libmachine: (old-k8s-version-082859) Found IP for machine: 192.168.72.180
	I1204 21:16:50.293638   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has current primary IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.293651   75464 main.go:141] libmachine: (old-k8s-version-082859) Reserving static IP address...
	I1204 21:16:50.294066   75464 main.go:141] libmachine: (old-k8s-version-082859) Reserved static IP address: 192.168.72.180
	I1204 21:16:50.294102   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "old-k8s-version-082859", mac: "52:54:00:30:6e:ae", ip: "192.168.72.180"} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.294118   75464 main.go:141] libmachine: (old-k8s-version-082859) Waiting for SSH to be available...
	I1204 21:16:50.294148   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | skip adding static IP to network mk-old-k8s-version-082859 - found existing host DHCP lease matching {name: "old-k8s-version-082859", mac: "52:54:00:30:6e:ae", ip: "192.168.72.180"}
	I1204 21:16:50.294164   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | Getting to WaitForSSH function...
	I1204 21:16:50.296406   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.296738   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.296767   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.296893   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | Using SSH client type: external
	I1204 21:16:50.296917   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa (-rw-------)
	I1204 21:16:50.296949   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.180 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 21:16:50.296966   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | About to run SSH command:
	I1204 21:16:50.296978   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | exit 0
	I1204 21:16:50.419468   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | SSH cmd err, output: <nil>: 
	I1204 21:16:50.419834   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetConfigRaw
	I1204 21:16:50.420486   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetIP
	I1204 21:16:50.422797   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.423098   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.423123   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.423319   75464 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/config.json ...
	I1204 21:16:50.423555   75464 machine.go:93] provisionDockerMachine start ...
	I1204 21:16:50.423579   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:50.423793   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:50.426050   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.426372   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.426402   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.426520   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:50.426706   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.426886   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.427011   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:50.427208   75464 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:50.427439   75464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1204 21:16:50.427453   75464 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 21:16:50.527818   75464 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 21:16:50.527853   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetMachineName
	I1204 21:16:50.528150   75464 buildroot.go:166] provisioning hostname "old-k8s-version-082859"
	I1204 21:16:50.528188   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetMachineName
	I1204 21:16:50.528423   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:50.531470   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.531920   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.531949   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.532195   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:50.532400   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.532575   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.532733   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:50.532911   75464 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:50.533125   75464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1204 21:16:50.533138   75464 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-082859 && echo "old-k8s-version-082859" | sudo tee /etc/hostname
	I1204 21:16:50.653111   75464 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-082859
	
	I1204 21:16:50.653146   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:50.656340   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.656681   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.656715   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.656946   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:50.657161   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.657338   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.657493   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:50.657649   75464 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:50.657859   75464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1204 21:16:50.657879   75464 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-082859' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-082859/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-082859' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 21:16:50.772193   75464 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 21:16:50.772236   75464 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 21:16:50.772265   75464 buildroot.go:174] setting up certificates
	I1204 21:16:50.772282   75464 provision.go:84] configureAuth start
	I1204 21:16:50.772299   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetMachineName
	I1204 21:16:50.772611   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetIP
	I1204 21:16:50.775486   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.775889   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.775917   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.776053   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:50.778293   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.778611   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.778640   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.778859   75464 provision.go:143] copyHostCerts
	I1204 21:16:50.778920   75464 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 21:16:50.778934   75464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 21:16:50.778991   75464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 21:16:50.779093   75464 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 21:16:50.779106   75464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 21:16:50.779134   75464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 21:16:50.779279   75464 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 21:16:50.779291   75464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 21:16:50.779317   75464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 21:16:50.779411   75464 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-082859 san=[127.0.0.1 192.168.72.180 localhost minikube old-k8s-version-082859]
	I1204 21:16:50.991857   75464 provision.go:177] copyRemoteCerts
	I1204 21:16:50.991917   75464 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 21:16:50.991939   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:50.994612   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.994999   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.995028   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.995178   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:50.995427   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.995587   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:50.995731   75464 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa Username:docker}
	I1204 21:16:51.074162   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 21:16:51.097649   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1204 21:16:51.120589   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1204 21:16:51.143303   75464 provision.go:87] duration metric: took 371.008346ms to configureAuth
	I1204 21:16:51.143324   75464 buildroot.go:189] setting minikube options for container-runtime
	I1204 21:16:51.143500   75464 config.go:182] Loaded profile config "old-k8s-version-082859": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1204 21:16:51.143561   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:51.146357   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.146676   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:51.146715   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.146867   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:51.147061   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.147275   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.147480   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:51.147672   75464 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:51.147851   75464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1204 21:16:51.147872   75464 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 21:16:51.587574   75746 start.go:364] duration metric: took 3m48.834641003s to acquireMachinesLock for "default-k8s-diff-port-439360"
	I1204 21:16:51.587653   75746 start.go:96] Skipping create...Using existing machine configuration
	I1204 21:16:51.587665   75746 fix.go:54] fixHost starting: 
	I1204 21:16:51.588066   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:51.588117   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:51.604628   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41655
	I1204 21:16:51.605057   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:51.605553   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:16:51.605580   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:51.605940   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:51.606149   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:16:51.606327   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetState
	I1204 21:16:51.608008   75746 fix.go:112] recreateIfNeeded on default-k8s-diff-port-439360: state=Stopped err=<nil>
	I1204 21:16:51.608043   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	W1204 21:16:51.608211   75746 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 21:16:51.609867   75746 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-439360" ...
	I1204 21:16:47.393499   75137 node_ready.go:53] node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:49.893470   75137 node_ready.go:53] node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:50.393615   75137 node_ready.go:49] node "embed-certs-566991" has status "Ready":"True"
	I1204 21:16:50.393638   75137 node_ready.go:38] duration metric: took 7.503954553s for node "embed-certs-566991" to be "Ready" ...
	I1204 21:16:50.393648   75137 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:16:50.398881   75137 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace to be "Ready" ...
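	(Not part of the harness output: a hedged sketch of the equivalent manual readiness checks, reusing the node name and the k8s-app=kube-dns label mentioned in the log lines above and assuming the same kubeconfig context.)
		kubectl --context embed-certs-566991 wait --for=condition=Ready node/embed-certs-566991 --timeout=6m
		kubectl --context embed-certs-566991 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m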
	I1204 21:16:51.611005   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Start
	I1204 21:16:51.611185   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Ensuring networks are active...
	I1204 21:16:51.612110   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Ensuring network default is active
	I1204 21:16:51.612529   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Ensuring network mk-default-k8s-diff-port-439360 is active
	I1204 21:16:51.612978   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Getting domain xml...
	I1204 21:16:51.613795   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Creating domain...
	I1204 21:16:51.367959   75464 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 21:16:51.367992   75464 machine.go:96] duration metric: took 944.422035ms to provisionDockerMachine
	I1204 21:16:51.368004   75464 start.go:293] postStartSetup for "old-k8s-version-082859" (driver="kvm2")
	I1204 21:16:51.368014   75464 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 21:16:51.368030   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:51.368382   75464 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 21:16:51.368431   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:51.371253   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.371631   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:51.371667   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.371831   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:51.372033   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.372201   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:51.372338   75464 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa Username:docker}
	I1204 21:16:51.449712   75464 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 21:16:51.453668   75464 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 21:16:51.453694   75464 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 21:16:51.453771   75464 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 21:16:51.453867   75464 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 21:16:51.453995   75464 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 21:16:51.463766   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:16:51.486114   75464 start.go:296] duration metric: took 118.097017ms for postStartSetup
	I1204 21:16:51.486162   75464 fix.go:56] duration metric: took 23.090160362s for fixHost
	I1204 21:16:51.486190   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:51.488901   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.489286   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:51.489317   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.489450   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:51.489662   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.489835   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.489975   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:51.490137   75464 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:51.490373   75464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1204 21:16:51.490386   75464 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 21:16:51.587355   75464 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733347011.543416414
	
	I1204 21:16:51.587402   75464 fix.go:216] guest clock: 1733347011.543416414
	I1204 21:16:51.587413   75464 fix.go:229] Guest: 2024-12-04 21:16:51.543416414 +0000 UTC Remote: 2024-12-04 21:16:51.486170924 +0000 UTC m=+270.217910239 (delta=57.24549ms)
	I1204 21:16:51.587442   75464 fix.go:200] guest clock delta is within tolerance: 57.24549ms
	I1204 21:16:51.587450   75464 start.go:83] releasing machines lock for "old-k8s-version-082859", held for 23.191479372s
	I1204 21:16:51.587484   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:51.587753   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetIP
	I1204 21:16:51.590521   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.590901   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:51.590933   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.591076   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:51.591556   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:51.591757   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:51.591857   75464 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 21:16:51.591897   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:51.592007   75464 ssh_runner.go:195] Run: cat /version.json
	I1204 21:16:51.592024   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:51.594840   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.595093   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.595267   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:51.595303   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.595349   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:51.595425   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.595529   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:51.595614   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:51.595714   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.595851   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:51.595872   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.596038   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:51.596091   75464 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa Username:docker}
	I1204 21:16:51.596192   75464 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa Username:docker}
	I1204 21:16:51.695215   75464 ssh_runner.go:195] Run: systemctl --version
	I1204 21:16:51.700624   75464 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 21:16:51.849457   75464 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 21:16:51.856420   75464 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 21:16:51.856506   75464 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 21:16:51.876202   75464 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 21:16:51.876230   75464 start.go:495] detecting cgroup driver to use...
	I1204 21:16:51.876311   75464 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 21:16:51.894549   75464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 21:16:51.911154   75464 docker.go:217] disabling cri-docker service (if available) ...
	I1204 21:16:51.911218   75464 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 21:16:51.924220   75464 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 21:16:51.936675   75464 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 21:16:52.058517   75464 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 21:16:52.224124   75464 docker.go:233] disabling docker service ...
	I1204 21:16:52.224202   75464 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 21:16:52.239294   75464 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 21:16:52.253779   75464 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 21:16:52.384577   75464 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 21:16:52.515024   75464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 21:16:52.529456   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 21:16:52.551978   75464 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1204 21:16:52.552043   75464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:52.563083   75464 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 21:16:52.563165   75464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:52.573409   75464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:52.583614   75464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:52.594313   75464 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 21:16:52.604389   75464 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 21:16:52.613326   75464 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 21:16:52.613402   75464 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 21:16:52.627764   75464 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 21:16:52.637330   75464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:16:52.755111   75464 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 21:16:52.844027   75464 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 21:16:52.844093   75464 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 21:16:52.848602   75464 start.go:563] Will wait 60s for crictl version
	I1204 21:16:52.848676   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:52.852127   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 21:16:52.892934   75464 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 21:16:52.893076   75464 ssh_runner.go:195] Run: crio --version
	I1204 21:16:52.925376   75464 ssh_runner.go:195] Run: crio --version
	I1204 21:16:52.954480   75464 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
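	(Not part of the harness output: a quick way to confirm the values written by the sed edits logged above into the CRI-O drop-in; the exact file layout depends on the CRI-O version.)
		grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
		# expected, based on the commands logged above:
		#   pause_image = "registry.k8s.io/pause:3.2"
		#   cgroup_manager = "cgroupfs"
		#   conmon_cgroup = "pod"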
	I1204 21:16:52.955897   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetIP
	I1204 21:16:52.958964   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:52.959353   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:52.959404   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:52.959641   75464 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1204 21:16:52.963601   75464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
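
The grep/echo one-liner just above is the idiom used to keep exactly one host.minikube.internal entry in /etc/hosts: drop any existing line for that name, append the current mapping, and copy the result back with sudo. A standalone sketch of the same pattern (IP and hostname taken from this log; the temp-file name is illustrative):

    #!/usr/bin/env bash
    # Keep exactly one "host.minikube.internal" line in /etc/hosts (sketch of the idiom above).
    set -euo pipefail
    IP=192.168.72.1                    # gateway address from this log
    NAME=host.minikube.internal

    if ! grep -Eq "^${IP}[[:space:]]+${NAME}\$" /etc/hosts; then
      { grep -v "[[:space:]]${NAME}\$" /etc/hosts; printf '%s\t%s\n' "${IP}" "${NAME}"; } > "/tmp/hosts.$$"
      sudo cp "/tmp/hosts.$$" /etc/hosts
    fi
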
	I1204 21:16:52.975417   75464 kubeadm.go:883] updating cluster {Name:old-k8s-version-082859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-082859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 21:16:52.975578   75464 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1204 21:16:52.975644   75464 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:16:53.022050   75464 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1204 21:16:53.022128   75464 ssh_runner.go:195] Run: which lz4
	I1204 21:16:53.025986   75464 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1204 21:16:53.029928   75464 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1204 21:16:53.029962   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1204 21:16:54.579699   75464 crio.go:462] duration metric: took 1.553735037s to copy over tarball
	I1204 21:16:54.579783   75464 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1204 21:16:52.406305   75137 pod_ready.go:103] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"False"
	I1204 21:16:54.905969   75137 pod_ready.go:103] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"False"
	I1204 21:16:56.907170   75137 pod_ready.go:103] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"False"
	I1204 21:16:52.907033   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting to get IP...
	I1204 21:16:52.908195   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:52.908629   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:52.908717   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:52.908619   76731 retry.go:31] will retry after 296.289488ms: waiting for machine to come up
	I1204 21:16:53.207388   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:53.207971   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:53.208003   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:53.207935   76731 retry.go:31] will retry after 336.470328ms: waiting for machine to come up
	I1204 21:16:53.546821   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:53.547399   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:53.547439   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:53.547320   76731 retry.go:31] will retry after 368.42782ms: waiting for machine to come up
	I1204 21:16:53.917796   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:53.918528   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:53.918556   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:53.918431   76731 retry.go:31] will retry after 436.479409ms: waiting for machine to come up
	I1204 21:16:54.357126   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:54.357698   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:54.357732   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:54.357643   76731 retry.go:31] will retry after 752.80332ms: waiting for machine to come up
	I1204 21:16:55.112409   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:55.112880   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:55.112907   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:55.112827   76731 retry.go:31] will retry after 649.088241ms: waiting for machine to come up
	I1204 21:16:55.763391   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:55.763912   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:55.763956   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:55.763859   76731 retry.go:31] will retry after 1.037502744s: waiting for machine to come up
	I1204 21:16:56.803681   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:56.804080   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:56.804114   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:56.804035   76731 retry.go:31] will retry after 1.021780396s: waiting for machine to come up
	I1204 21:16:57.410381   75464 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.830568445s)
	I1204 21:16:57.410444   75464 crio.go:469] duration metric: took 2.830692434s to extract the tarball
	I1204 21:16:57.410455   75464 ssh_runner.go:146] rm: /preloaded.tar.lz4
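
The three steps above (copy the preloaded tarball to the node, untar it into /var, delete it) are how the image store gets seeded when no preload is present on the machine. A hedged shell equivalent, assuming the tarball sits in the local minikube cache and the node is reachable over SSH as shown later in this log (the ~/.minikube path and the docker@ user are assumptions for the sketch):

    #!/usr/bin/env bash
    # Push and unpack a minikube image preload on a remote node (sketch, not minikube's own code).
    set -euo pipefail
    NODE=docker@192.168.72.180
    TARBALL=~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4

    scp "$TARBALL" "$NODE:/preloaded.tar.lz4"
    ssh "$NODE" 'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm -f /preloaded.tar.lz4'
    ssh "$NODE" 'sudo crictl images --output json | head -c 400'   # sanity check: images should now be listed
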
	I1204 21:16:57.452008   75464 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:16:57.484771   75464 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1204 21:16:57.484800   75464 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1204 21:16:57.484880   75464 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:16:57.484917   75464 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:57.484929   75464 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:57.484945   75464 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:57.484995   75464 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1204 21:16:57.484922   75464 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:57.485007   75464 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:57.485039   75464 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1204 21:16:57.486618   75464 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1204 21:16:57.486824   75464 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:57.486847   75464 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:57.486892   75464 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:16:57.486905   75464 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:57.486828   75464 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:57.486944   75464 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:57.486829   75464 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1204 21:16:57.655649   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:57.656853   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:57.667236   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:57.689357   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:57.698439   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1204 21:16:57.726269   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1204 21:16:57.727235   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:57.747271   75464 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1204 21:16:57.747329   75464 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:57.747332   75464 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1204 21:16:57.747364   75464 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:57.747500   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.747402   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.757217   75464 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1204 21:16:57.757260   75464 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:57.757319   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.800711   75464 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1204 21:16:57.800752   75464 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:57.800803   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.814692   75464 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1204 21:16:57.814738   75464 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1204 21:16:57.814789   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.829660   75464 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1204 21:16:57.829698   75464 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:57.829706   75464 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1204 21:16:57.829738   75464 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1204 21:16:57.829752   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.829764   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:57.829773   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.829821   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:57.829877   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:57.829909   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:57.829955   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1204 21:16:57.929510   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:57.929559   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:57.929579   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:57.929618   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1204 21:16:57.940211   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:57.940309   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:57.940359   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1204 21:16:58.051710   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1204 21:16:58.067494   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:58.067504   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:58.067573   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:58.083777   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1204 21:16:58.083833   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:58.083891   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:58.165786   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1204 21:16:58.229739   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1204 21:16:58.229803   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1204 21:16:58.229904   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:58.229951   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1204 21:16:58.230001   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1204 21:16:58.230045   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1204 21:16:58.261333   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1204 21:16:58.271293   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1204 21:16:58.405498   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:16:58.549255   75464 cache_images.go:92] duration metric: took 1.064434163s to LoadCachedImages
	W1204 21:16:58.549354   75464 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
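
The warning above means none of the v1.20.0 images were found in the runtime and the on-disk image cache was empty as well, so the images will have to be pulled during kubeadm init. Checking the same condition by hand comes down to asking podman/cri-o whether each image exists, which is what the inspect calls earlier in the log do; a sketch over the image list from this run:

    #!/usr/bin/env bash
    # Report which of the kubeadm v1.20.0 images are already in the node's container store (sketch).
    set -euo pipefail
    for img in \
      registry.k8s.io/kube-apiserver:v1.20.0 \
      registry.k8s.io/kube-controller-manager:v1.20.0 \
      registry.k8s.io/kube-scheduler:v1.20.0 \
      registry.k8s.io/kube-proxy:v1.20.0 \
      registry.k8s.io/etcd:3.4.13-0 \
      registry.k8s.io/coredns:1.7.0 \
      registry.k8s.io/pause:3.2; do
      if sudo podman image inspect --format '{{.Id}}' "$img" >/dev/null 2>&1; then
        echo "present: $img"
      else
        echo "missing: $img"
      fi
    done
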
	I1204 21:16:58.549372   75464 kubeadm.go:934] updating node { 192.168.72.180 8443 v1.20.0 crio true true} ...
	I1204 21:16:58.549512   75464 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-082859 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-082859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 21:16:58.549591   75464 ssh_runner.go:195] Run: crio config
	I1204 21:16:58.610182   75464 cni.go:84] Creating CNI manager for ""
	I1204 21:16:58.610209   75464 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:16:58.610221   75464 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 21:16:58.610246   75464 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.180 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-082859 NodeName:old-k8s-version-082859 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1204 21:16:58.610432   75464 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.180
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-082859"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.180
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.180"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1204 21:16:58.610512   75464 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1204 21:16:58.620337   75464 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 21:16:58.620421   75464 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 21:16:58.629244   75464 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1204 21:16:58.654214   75464 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 21:16:58.671268   75464 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
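
The kubeadm config rendered a few lines earlier is the 2123-byte file just copied to /var/tmp/minikube/kubeadm.yaml.new. Later in this log it is diffed against the existing kubeadm.yaml and copied over it before the init phases run. A sketch of that write/compare/promote flow, assuming the rendered YAML is available locally as a file named kubeadm.yaml (a hypothetical name for the sketch):

    #!/usr/bin/env bash
    # Write the freshly generated kubeadm config next to the live one and promote it (sketch of the flow in this log).
    set -euo pipefail
    NEW=/var/tmp/minikube/kubeadm.yaml.new
    CUR=/var/tmp/minikube/kubeadm.yaml

    sudo mkdir -p /var/tmp/minikube
    sudo cp kubeadm.yaml "$NEW"            # kubeadm.yaml: the rendered config shown above (assumed local file)

    # If the rendered config differs from (or there is no) live copy, install the new one.
    if ! sudo diff -u "$CUR" "$NEW"; then
      sudo cp "$NEW" "$CUR"
    fi
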
	I1204 21:16:58.688068   75464 ssh_runner.go:195] Run: grep 192.168.72.180	control-plane.minikube.internal$ /etc/hosts
	I1204 21:16:58.691513   75464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.180	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:16:58.703609   75464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:16:58.831984   75464 ssh_runner.go:195] Run: sudo systemctl start kubelet
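
The steps just above write the kubelet base unit plus the 10-kubeadm.conf drop-in (whose ExecStart flags appear earlier, around the kubeadm.go:946 entry), reload systemd, and start kubelet. Done by hand it is just the following; the local file names 10-kubeadm.conf and kubelet.service are placeholders for the unit contents shown in the log:

    #!/usr/bin/env bash
    # Install the kubelet unit files and (re)start kubelet (sketch; unit contents come from the log above).
    set -euo pipefail
    sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
    sudo cp 10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf   # drop-in with the ExecStart flags
    sudo cp kubelet.service /lib/systemd/system/kubelet.service                     # base unit (Wants=crio.service)
    sudo systemctl daemon-reload
    sudo systemctl start kubelet
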
	I1204 21:16:58.850324   75464 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859 for IP: 192.168.72.180
	I1204 21:16:58.850354   75464 certs.go:194] generating shared ca certs ...
	I1204 21:16:58.850382   75464 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:16:58.850592   75464 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 21:16:58.850658   75464 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 21:16:58.850677   75464 certs.go:256] generating profile certs ...
	I1204 21:16:58.850811   75464 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/client.key
	I1204 21:16:58.850892   75464 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/apiserver.key.8d7b2cb2
	I1204 21:16:58.850958   75464 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/proxy-client.key
	I1204 21:16:58.851169   75464 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 21:16:58.851232   75464 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 21:16:58.851249   75464 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 21:16:58.851294   75464 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 21:16:58.851343   75464 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 21:16:58.851420   75464 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 21:16:58.851508   75464 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:16:58.852607   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 21:16:58.880792   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 21:16:58.913556   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 21:16:58.943549   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 21:16:58.981463   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1204 21:16:59.012983   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 21:16:59.042980   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 21:16:59.077664   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 21:16:59.105764   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 21:16:59.129236   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 21:16:59.153845   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 21:16:59.177201   75464 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 21:16:59.193861   75464 ssh_runner.go:195] Run: openssl version
	I1204 21:16:59.199898   75464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 21:16:59.211323   75464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:16:59.215867   75464 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:16:59.215922   75464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:16:59.221792   75464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 21:16:59.232621   75464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 21:16:59.243171   75464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 21:16:59.247786   75464 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 21:16:59.247847   75464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 21:16:59.253293   75464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 21:16:59.264011   75464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 21:16:59.274696   75464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 21:16:59.279083   75464 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 21:16:59.279142   75464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 21:16:59.284885   75464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
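
The test/ln/openssl sequence above installs each CA bundle under /usr/share/ca-certificates and then creates the OpenSSL hash symlink (e.g. b5213941.0) that TLS clients use to find it in /etc/ssl/certs. The same thing for a single certificate, computing the hash on the fly instead of hard-coding it, looks roughly like this:

    #!/usr/bin/env bash
    # Install one CA cert into the system trust directory with its OpenSSL subject-hash symlink (sketch).
    set -euo pipefail
    CERT=/usr/share/ca-certificates/minikubeCA.pem    # already copied there, as in the log

    HASH=$(openssl x509 -hash -noout -in "$CERT")      # e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
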
	I1204 21:16:59.295857   75464 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 21:16:59.300285   75464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 21:16:59.306222   75464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 21:16:59.312113   75464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 21:16:59.318289   75464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 21:16:59.323933   75464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 21:16:59.329593   75464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
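
Each of the six openssl runs above uses -checkend 86400, i.e. "will this certificate still be valid 24 hours from now"; a non-zero exit flags a certificate that expires within a day. A compact way to run the same check over the control-plane certs listed in this log:

    #!/usr/bin/env bash
    # Warn if any control-plane certificate expires within the next 24h (same -checkend idea as above).
    set -euo pipefail
    for crt in \
      /var/lib/minikube/certs/apiserver-etcd-client.crt \
      /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      /var/lib/minikube/certs/etcd/server.crt \
      /var/lib/minikube/certs/etcd/healthcheck-client.crt \
      /var/lib/minikube/certs/etcd/peer.crt \
      /var/lib/minikube/certs/front-proxy-client.crt; do
      sudo openssl x509 -noout -in "$crt" -checkend 86400 \
        || echo "WARNING: $crt expires within 24h"
    done
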
	I1204 21:16:59.336271   75464 kubeadm.go:392] StartCluster: {Name:old-k8s-version-082859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-082859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:16:59.336388   75464 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 21:16:59.336445   75464 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:16:59.377102   75464 cri.go:89] found id: ""
	I1204 21:16:59.377186   75464 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 21:16:59.387322   75464 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1204 21:16:59.387348   75464 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1204 21:16:59.387426   75464 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1204 21:16:59.397012   75464 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1204 21:16:59.398490   75464 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-082859" does not appear in /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:16:59.399594   75464 kubeconfig.go:62] /home/jenkins/minikube-integration/19985-10581/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-082859" cluster setting kubeconfig missing "old-k8s-version-082859" context setting]
	I1204 21:16:59.401105   75464 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/kubeconfig: {Name:mk338cb7deb77a607d0c199d94a556bdfd19bef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:16:59.519931   75464 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1204 21:16:59.529805   75464 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.180
	I1204 21:16:59.529848   75464 kubeadm.go:1160] stopping kube-system containers ...
	I1204 21:16:59.529862   75464 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1204 21:16:59.529917   75464 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:16:59.564385   75464 cri.go:89] found id: ""
	I1204 21:16:59.564455   75464 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1204 21:16:59.580273   75464 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:16:59.590510   75464 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:16:59.590536   75464 kubeadm.go:157] found existing configuration files:
	
	I1204 21:16:59.590591   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:16:59.599597   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:16:59.599665   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:16:59.609075   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:16:59.618209   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:16:59.618281   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:16:59.627558   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:16:59.636062   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:16:59.636117   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:16:59.645337   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:16:59.653985   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:16:59.654027   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:16:59.662796   75464 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:16:59.671564   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:16:59.805252   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:00.525460   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:00.762769   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:00.873276   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:00.988761   75464 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:17:00.988887   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:16:58.405630   75137 pod_ready.go:93] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"True"
	I1204 21:16:58.405654   75137 pod_ready.go:82] duration metric: took 8.006745651s for pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:58.405669   75137 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:58.411605   75137 pod_ready.go:93] pod "etcd-embed-certs-566991" in "kube-system" namespace has status "Ready":"True"
	I1204 21:16:58.411634   75137 pod_ready.go:82] duration metric: took 5.952577ms for pod "etcd-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:58.411646   75137 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:58.421660   75137 pod_ready.go:93] pod "kube-apiserver-embed-certs-566991" in "kube-system" namespace has status "Ready":"True"
	I1204 21:16:58.421691   75137 pod_ready.go:82] duration metric: took 10.035417ms for pod "kube-apiserver-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:58.421708   75137 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:59.044823   75137 pod_ready.go:93] pod "kube-controller-manager-embed-certs-566991" in "kube-system" namespace has status "Ready":"True"
	I1204 21:16:59.044853   75137 pod_ready.go:82] duration metric: took 623.135154ms for pod "kube-controller-manager-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:59.044867   75137 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4fv72" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:59.051742   75137 pod_ready.go:93] pod "kube-proxy-4fv72" in "kube-system" namespace has status "Ready":"True"
	I1204 21:16:59.051768   75137 pod_ready.go:82] duration metric: took 6.892711ms for pod "kube-proxy-4fv72" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:59.051782   75137 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:59.058398   75137 pod_ready.go:93] pod "kube-scheduler-embed-certs-566991" in "kube-system" namespace has status "Ready":"True"
	I1204 21:16:59.058429   75137 pod_ready.go:82] duration metric: took 6.638291ms for pod "kube-scheduler-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:59.058444   75137 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:01.066575   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:16:57.826965   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:57.827542   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:57.827566   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:57.827491   76731 retry.go:31] will retry after 1.453756282s: waiting for machine to come up
	I1204 21:16:59.282497   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:59.283001   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:59.283025   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:59.282950   76731 retry.go:31] will retry after 1.921010852s: waiting for machine to come up
	I1204 21:17:01.205877   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:01.206359   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:17:01.206398   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:17:01.206301   76731 retry.go:31] will retry after 2.279555962s: waiting for machine to come up
	I1204 21:17:01.489204   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:01.989039   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:02.489053   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:02.988923   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:03.489839   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:03.989130   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:04.489603   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:04.989625   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:05.489951   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:05.989787   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:03.066938   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:05.565106   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:03.488557   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:03.488993   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:17:03.489064   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:17:03.488956   76731 retry.go:31] will retry after 2.80928606s: waiting for machine to come up
	I1204 21:17:06.300625   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:06.301069   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:17:06.301096   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:17:06.301025   76731 retry.go:31] will retry after 4.272897585s: waiting for machine to come up
	I1204 21:17:06.489826   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:06.989767   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:07.489954   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:07.989772   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:08.488905   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:08.989834   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:09.489780   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:09.989021   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:10.489348   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:10.989123   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
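
The run of pgrep entries above is the apiserver wait loop: the same `pgrep -xnf kube-apiserver.*minikube.*` is re-run roughly every 500ms until the process appears. A standalone version of that loop, with a 60-second cap chosen here purely for illustration:

    #!/usr/bin/env bash
    # Poll for a minikube-started kube-apiserver process (sketch of the wait loop above; 60s cap is assumed).
    set -euo pipefail
    for _ in $(seq 1 120); do
      if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
        echo "apiserver process is up"
        exit 0
      fi
      sleep 0.5
    done
    echo "timed out waiting for kube-apiserver" >&2
    exit 1
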
	I1204 21:17:08.065690   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:10.566216   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:12.055921   75012 start.go:364] duration metric: took 57.468802465s to acquireMachinesLock for "no-preload-534766"
	I1204 21:17:12.055984   75012 start.go:96] Skipping create...Using existing machine configuration
	I1204 21:17:12.055996   75012 fix.go:54] fixHost starting: 
	I1204 21:17:12.056471   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:17:12.056520   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:17:12.074414   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46455
	I1204 21:17:12.074839   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:17:12.075295   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:17:12.075318   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:17:12.075670   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:17:12.075864   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:17:12.076055   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetState
	I1204 21:17:12.077496   75012 fix.go:112] recreateIfNeeded on no-preload-534766: state=Stopped err=<nil>
	I1204 21:17:12.077518   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	W1204 21:17:12.077683   75012 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 21:17:12.079503   75012 out.go:177] * Restarting existing kvm2 VM for "no-preload-534766" ...
	I1204 21:17:10.578907   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.579430   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Found IP for machine: 192.168.50.171
	I1204 21:17:10.579465   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Reserving static IP address...
	I1204 21:17:10.579482   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has current primary IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.579876   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-439360", mac: "52:54:00:ec:46:31", ip: "192.168.50.171"} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:10.579899   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | skip adding static IP to network mk-default-k8s-diff-port-439360 - found existing host DHCP lease matching {name: "default-k8s-diff-port-439360", mac: "52:54:00:ec:46:31", ip: "192.168.50.171"}
	I1204 21:17:10.579913   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Reserved static IP address: 192.168.50.171
	I1204 21:17:10.579923   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for SSH to be available...
	I1204 21:17:10.579933   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Getting to WaitForSSH function...
	I1204 21:17:10.582141   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.582536   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:10.582564   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.582763   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Using SSH client type: external
	I1204 21:17:10.582808   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa (-rw-------)
	I1204 21:17:10.582840   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.171 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 21:17:10.582851   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | About to run SSH command:
	I1204 21:17:10.582859   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | exit 0
	I1204 21:17:10.707352   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | SSH cmd err, output: <nil>: 
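
	The "exit 0" run above is the SSH availability probe: the external ssh client is invoked with a trivial command until it succeeds. A minimal Go sketch of that pattern, assuming a hypothetical waitForSSH helper (not minikube's actual machine/sshutil code):

	// A sketch of the "exit 0" probe seen above; helper name and timeouts are assumptions.
	package provision

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForSSH keeps invoking the external ssh client with a trivial command
	// until it exits 0 or the deadline passes.
	func waitForSSH(user, ip, keyPath string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("ssh",
				"-o", "StrictHostKeyChecking=no",
				"-o", "UserKnownHostsFile=/dev/null",
				"-o", "ConnectTimeout=10",
				"-o", "IdentitiesOnly=yes",
				"-i", keyPath,
				fmt.Sprintf("%s@%s", user, ip),
				"exit 0")
			if err := cmd.Run(); err == nil {
				return nil // SSH answered; provisioning can continue
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("ssh to %s@%s not reachable within %s", user, ip, timeout)
	}
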
	I1204 21:17:10.707801   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetConfigRaw
	I1204 21:17:10.708495   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetIP
	I1204 21:17:10.710799   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.711127   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:10.711159   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.711348   75746 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/config.json ...
	I1204 21:17:10.711562   75746 machine.go:93] provisionDockerMachine start ...
	I1204 21:17:10.711579   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:17:10.711817   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:10.713971   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.714317   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:10.714344   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.714495   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:10.714683   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:10.714811   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:10.714964   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:10.715109   75746 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:10.715298   75746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.171 22 <nil> <nil>}
	I1204 21:17:10.715311   75746 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 21:17:10.823410   75746 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 21:17:10.823443   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetMachineName
	I1204 21:17:10.823718   75746 buildroot.go:166] provisioning hostname "default-k8s-diff-port-439360"
	I1204 21:17:10.823741   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetMachineName
	I1204 21:17:10.823955   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:10.826607   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.826953   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:10.826977   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.827140   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:10.827331   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:10.827533   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:10.827676   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:10.827852   75746 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:10.828068   75746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.171 22 <nil> <nil>}
	I1204 21:17:10.828084   75746 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-439360 && echo "default-k8s-diff-port-439360" | sudo tee /etc/hostname
	I1204 21:17:10.948599   75746 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-439360
	
	I1204 21:17:10.948633   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:10.951336   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.951719   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:10.951765   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.951905   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:10.952108   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:10.952276   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:10.952423   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:10.952570   75746 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:10.952753   75746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.171 22 <nil> <nil>}
	I1204 21:17:10.952777   75746 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-439360' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-439360/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-439360' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 21:17:11.072543   75746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
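
	The shell snippet above pins "127.0.1.1 default-k8s-diff-port-439360" in /etc/hosts unless an entry for the hostname already exists. A sketch of how that fragment could be generated from a hostname in Go; setHostnameScript is a hypothetical helper, not minikube's API:

	// Builds the /etc/hosts shell fragment shown in the log above.
	package provision

	import "fmt"

	// setHostnameScript returns a shell snippet that adds or rewrites the
	// 127.0.1.1 entry for the given hostname.
	func setHostnameScript(name string) string {
		return fmt.Sprintf(`
			if ! grep -xq '.*\s%[1]s' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
				else
					echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
				fi
			fi`, name)
	}
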
	I1204 21:17:11.072580   75746 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 21:17:11.072611   75746 buildroot.go:174] setting up certificates
	I1204 21:17:11.072620   75746 provision.go:84] configureAuth start
	I1204 21:17:11.072629   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetMachineName
	I1204 21:17:11.072933   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetIP
	I1204 21:17:11.075443   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.075822   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:11.075868   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.075965   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:11.077957   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.078286   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:11.078319   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.078449   75746 provision.go:143] copyHostCerts
	I1204 21:17:11.078506   75746 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 21:17:11.078517   75746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 21:17:11.078571   75746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 21:17:11.078671   75746 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 21:17:11.078681   75746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 21:17:11.078702   75746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 21:17:11.078752   75746 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 21:17:11.078759   75746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 21:17:11.078776   75746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 21:17:11.078819   75746 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-439360 san=[127.0.0.1 192.168.50.171 default-k8s-diff-port-439360 localhost minikube]
	I1204 21:17:11.404256   75746 provision.go:177] copyRemoteCerts
	I1204 21:17:11.404320   75746 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 21:17:11.404348   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:11.406963   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.407316   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:11.407343   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.407542   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:11.407706   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:11.407881   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:11.407991   75746 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa Username:docker}
	I1204 21:17:11.493691   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 21:17:11.519867   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1204 21:17:11.542295   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1204 21:17:11.564775   75746 provision.go:87] duration metric: took 492.141737ms to configureAuth
	I1204 21:17:11.564801   75746 buildroot.go:189] setting minikube options for container-runtime
	I1204 21:17:11.564975   75746 config.go:182] Loaded profile config "default-k8s-diff-port-439360": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:17:11.565063   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:11.567990   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.568364   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:11.568394   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.568556   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:11.568780   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:11.568951   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:11.569102   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:11.569277   75746 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:11.569476   75746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.171 22 <nil> <nil>}
	I1204 21:17:11.569494   75746 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 21:17:11.809413   75746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 21:17:11.809462   75746 machine.go:96] duration metric: took 1.097886094s to provisionDockerMachine
	I1204 21:17:11.809482   75746 start.go:293] postStartSetup for "default-k8s-diff-port-439360" (driver="kvm2")
	I1204 21:17:11.809493   75746 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 21:17:11.809510   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:17:11.809913   75746 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 21:17:11.809954   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:11.812724   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.813137   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:11.813183   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.813276   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:11.813481   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:11.813659   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:11.813807   75746 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa Username:docker}
	I1204 21:17:11.901984   75746 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 21:17:11.906206   75746 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 21:17:11.906243   75746 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 21:17:11.906323   75746 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 21:17:11.906421   75746 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 21:17:11.906550   75746 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 21:17:11.915692   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:17:11.938378   75746 start.go:296] duration metric: took 128.880842ms for postStartSetup
	I1204 21:17:11.938425   75746 fix.go:56] duration metric: took 20.350760099s for fixHost
	I1204 21:17:11.938449   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:11.941283   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.941662   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:11.941683   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.941814   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:11.942015   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:11.942207   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:11.942314   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:11.942446   75746 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:11.942630   75746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.171 22 <nil> <nil>}
	I1204 21:17:11.942643   75746 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 21:17:12.055721   75746 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733347032.018698016
	
	I1204 21:17:12.055741   75746 fix.go:216] guest clock: 1733347032.018698016
	I1204 21:17:12.055761   75746 fix.go:229] Guest: 2024-12-04 21:17:12.018698016 +0000 UTC Remote: 2024-12-04 21:17:11.938429419 +0000 UTC m=+249.319395751 (delta=80.268597ms)
	I1204 21:17:12.055787   75746 fix.go:200] guest clock delta is within tolerance: 80.268597ms
	I1204 21:17:12.055794   75746 start.go:83] releasing machines lock for "default-k8s-diff-port-439360", held for 20.468177017s
	I1204 21:17:12.055827   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:17:12.056125   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetIP
	I1204 21:17:12.058787   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:12.059284   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:12.059312   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:12.059488   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:17:12.060013   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:17:12.060202   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:17:12.060290   75746 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 21:17:12.060342   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:12.060462   75746 ssh_runner.go:195] Run: cat /version.json
	I1204 21:17:12.060489   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:12.063286   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:12.063423   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:12.063682   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:12.063746   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:12.063837   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:12.063938   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:12.064005   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:12.064065   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:12.064231   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:12.064305   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:12.064403   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:12.064563   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:12.064588   75746 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa Username:docker}
	I1204 21:17:12.064695   75746 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa Username:docker}
	I1204 21:17:12.144087   75746 ssh_runner.go:195] Run: systemctl --version
	I1204 21:17:12.168976   75746 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 21:17:12.317913   75746 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 21:17:12.324234   75746 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 21:17:12.324327   75746 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 21:17:12.344571   75746 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 21:17:12.344601   75746 start.go:495] detecting cgroup driver to use...
	I1204 21:17:12.344674   75746 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 21:17:12.361232   75746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 21:17:12.375069   75746 docker.go:217] disabling cri-docker service (if available) ...
	I1204 21:17:12.375139   75746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 21:17:12.388561   75746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 21:17:12.404338   75746 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 21:17:12.527885   75746 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 21:17:12.716924   75746 docker.go:233] disabling docker service ...
	I1204 21:17:12.717011   75746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 21:17:12.735556   75746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 21:17:12.751951   75746 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 21:17:12.872456   75746 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 21:17:12.997321   75746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 21:17:13.012576   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 21:17:13.032524   75746 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 21:17:13.032590   75746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:13.042551   75746 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 21:17:13.042612   75746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:13.052819   75746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:13.063234   75746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:13.074023   75746 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 21:17:13.084457   75746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:13.094614   75746 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:13.112649   75746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:13.122898   75746 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 21:17:13.132312   75746 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 21:17:13.132357   75746 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 21:17:13.145174   75746 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 21:17:13.154748   75746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:17:13.280272   75746 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 21:17:13.375481   75746 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 21:17:13.375579   75746 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 21:17:13.380388   75746 start.go:563] Will wait 60s for crictl version
	I1204 21:17:13.380450   75746 ssh_runner.go:195] Run: which crictl
	I1204 21:17:13.384263   75746 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 21:17:13.426552   75746 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 21:17:13.426644   75746 ssh_runner.go:195] Run: crio --version
	I1204 21:17:13.464906   75746 ssh_runner.go:195] Run: crio --version
	I1204 21:17:13.493254   75746 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
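
	The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged low ports), loads br_netfilter, enables IPv4 forwarding, and restarts crio. A condensed sketch of that edit sequence, assuming a hypothetical runSSH helper that executes one shell command on the guest:

	// A sketch of the CRI-O configuration steps shown in the log above.
	package provision

	func configureCRIO(runSSH func(cmd string) error) error {
		steps := []string{
			// point cri-o at the expected pause image
			`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`,
			// use cgroupfs as the cgroup manager, with conmon in the pod cgroup
			`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
			// allow binding low ports inside pods
			`sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf`,
			// kernel prerequisites, then restart the runtime
			`sudo modprobe br_netfilter`,
			`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
			`sudo systemctl restart crio`,
		}
		for _, s := range steps {
			if err := runSSH(s); err != nil {
				return err
			}
		}
		return nil
	}
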
	I1204 21:17:11.488961   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:11.989692   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:12.489695   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:12.989533   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:13.489139   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:13.989580   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:14.488981   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:14.989089   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:15.489662   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:15.989301   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
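
	The interleaved 75464 lines are a half-second poll for a running kube-apiserver process on another profile's node. A minimal sketch of that wait loop, reusing the same hypothetical runSSH helper:

	// A sketch of the pgrep polling loop seen above.
	package provision

	import (
		"fmt"
		"time"
	)

	// waitForAPIServerProcess re-runs the pgrep probe every 500ms until it
	// succeeds or the timeout elapses.
	func waitForAPIServerProcess(runSSH func(cmd string) error, timeout time.Duration) error {
		probe := `sudo pgrep -xnf kube-apiserver.*minikube.*`
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if err := runSSH(probe); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("kube-apiserver process not found within %s", timeout)
	}
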
	I1204 21:17:13.069008   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:15.565897   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:12.080766   75012 main.go:141] libmachine: (no-preload-534766) Calling .Start
	I1204 21:17:12.080951   75012 main.go:141] libmachine: (no-preload-534766) Ensuring networks are active...
	I1204 21:17:12.081751   75012 main.go:141] libmachine: (no-preload-534766) Ensuring network default is active
	I1204 21:17:12.082112   75012 main.go:141] libmachine: (no-preload-534766) Ensuring network mk-no-preload-534766 is active
	I1204 21:17:12.082532   75012 main.go:141] libmachine: (no-preload-534766) Getting domain xml...
	I1204 21:17:12.083134   75012 main.go:141] libmachine: (no-preload-534766) Creating domain...
	I1204 21:17:13.416717   75012 main.go:141] libmachine: (no-preload-534766) Waiting to get IP...
	I1204 21:17:13.417831   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:13.418295   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:13.418381   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:13.418275   76934 retry.go:31] will retry after 213.310094ms: waiting for machine to come up
	I1204 21:17:13.632755   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:13.633250   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:13.633283   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:13.633181   76934 retry.go:31] will retry after 325.003683ms: waiting for machine to come up
	I1204 21:17:13.959863   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:13.960467   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:13.960503   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:13.960377   76934 retry.go:31] will retry after 392.851447ms: waiting for machine to come up
	I1204 21:17:14.355246   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:14.355720   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:14.355748   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:14.355681   76934 retry.go:31] will retry after 378.518603ms: waiting for machine to come up
	I1204 21:17:14.736283   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:14.737039   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:14.737105   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:14.737017   76934 retry.go:31] will retry after 536.132786ms: waiting for machine to come up
	I1204 21:17:15.274405   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:15.274929   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:15.274962   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:15.274891   76934 retry.go:31] will retry after 606.890197ms: waiting for machine to come up
	I1204 21:17:15.884088   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:15.884700   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:15.884745   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:15.884632   76934 retry.go:31] will retry after 1.088992333s: waiting for machine to come up
	I1204 21:17:16.975049   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:16.975514   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:16.975545   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:16.975458   76934 retry.go:31] will retry after 925.830658ms: waiting for machine to come up
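
	After restarting the no-preload VM, the driver waits for a DHCP lease by looking up the domain's IP and retrying with growing, jittered delays (retry.go:31: 213ms, 325ms, 392ms, ...). A hedged sketch of that retry pattern, with an assumed lookup callback:

	// A sketch of the "waiting for machine to come up" retry loop above.
	package provision

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func waitForIP(lookup func() (string, error), attempts int) (string, error) {
		delay := 200 * time.Millisecond
		for i := 0; i < attempts; i++ {
			ip, err := lookup()
			if err == nil {
				return ip, nil
			}
			// grow the delay and add jitter, roughly matching the log's intervals
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			delay = delay * 3 / 2
		}
		return "", errors.New("machine did not get an IP address")
	}
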
	I1204 21:17:13.494527   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetIP
	I1204 21:17:13.498111   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:13.498524   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:13.498560   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:13.498792   75746 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1204 21:17:13.503083   75746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:17:13.518900   75746 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-439360 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-439360 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.171 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 21:17:13.519043   75746 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 21:17:13.519134   75746 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:17:13.562529   75746 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1204 21:17:13.562643   75746 ssh_runner.go:195] Run: which lz4
	I1204 21:17:13.566970   75746 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1204 21:17:13.571398   75746 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1204 21:17:13.571447   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1204 21:17:14.863136   75746 crio.go:462] duration metric: took 1.296192361s to copy over tarball
	I1204 21:17:14.863225   75746 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1204 21:17:17.017949   75746 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.154693143s)
	I1204 21:17:17.017978   75746 crio.go:469] duration metric: took 2.154810491s to extract the tarball
	I1204 21:17:17.017988   75746 ssh_runner.go:146] rm: /preloaded.tar.lz4
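
	The preload step above checks for /preloaded.tar.lz4 on the guest, copies the cached tarball over, unpacks it under /var with lz4, and removes the archive. A short sketch of that sequence, with copyFile and runSSH as assumed helpers:

	// A sketch of the preload tarball handling shown above.
	package provision

	func applyPreload(copyFile func(local, remote string) error, runSSH func(cmd string) error, localTarball string) error {
		if err := copyFile(localTarball, "/preloaded.tar.lz4"); err != nil {
			return err
		}
		if err := runSSH("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4"); err != nil {
			return err
		}
		// clean up the archive once the images are extracted
		return runSSH("sudo rm -f /preloaded.tar.lz4")
	}
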
	I1204 21:17:17.053935   75746 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:17:17.099773   75746 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 21:17:17.099800   75746 cache_images.go:84] Images are preloaded, skipping loading
	I1204 21:17:17.099809   75746 kubeadm.go:934] updating node { 192.168.50.171 8444 v1.31.2 crio true true} ...
	I1204 21:17:17.099909   75746 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-439360 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.171
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-439360 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 21:17:17.099973   75746 ssh_runner.go:195] Run: crio config
	I1204 21:17:17.145449   75746 cni.go:84] Creating CNI manager for ""
	I1204 21:17:17.145481   75746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:17:17.145493   75746 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 21:17:17.145525   75746 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.171 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-439360 NodeName:default-k8s-diff-port-439360 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.171"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.171 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 21:17:17.145689   75746 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.171
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-439360"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.171"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.171"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1204 21:17:17.145761   75746 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 21:17:17.156960   75746 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 21:17:17.157034   75746 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 21:17:17.169101   75746 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1204 21:17:17.186548   75746 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 21:17:17.203582   75746 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
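
	The 2308-byte kubeadm.yaml.new written here is the rendered config dumped a few lines earlier. A small sketch of the templating idea only, with assumed struct and field names (minikube's real generator and templates differ):

	// Renders a fragment of an InitConfiguration from a few options; purely illustrative.
	package main

	import (
		"os"
		"text/template"
	)

	type initOpts struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
	}

	const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "{{.NodeName}}"
	`

	func main() {
		t := template.Must(template.New("init").Parse(initTmpl))
		_ = t.Execute(os.Stdout, initOpts{
			AdvertiseAddress: "192.168.50.171",
			BindPort:         8444,
			NodeName:         "default-k8s-diff-port-439360",
		})
	}
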
	I1204 21:17:17.220406   75746 ssh_runner.go:195] Run: grep 192.168.50.171	control-plane.minikube.internal$ /etc/hosts
	I1204 21:17:17.224281   75746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.171	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:17:17.237759   75746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:17:17.368925   75746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:17:17.389017   75746 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360 for IP: 192.168.50.171
	I1204 21:17:17.389042   75746 certs.go:194] generating shared ca certs ...
	I1204 21:17:17.389062   75746 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:17:17.389231   75746 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 21:17:17.389302   75746 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 21:17:17.389314   75746 certs.go:256] generating profile certs ...
	I1204 21:17:17.389411   75746 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/client.key
	I1204 21:17:17.389507   75746 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/apiserver.key.b9e485ac
	I1204 21:17:17.389583   75746 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/proxy-client.key
	I1204 21:17:17.389747   75746 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 21:17:17.389784   75746 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 21:17:17.389793   75746 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 21:17:17.389820   75746 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 21:17:17.389842   75746 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 21:17:17.389862   75746 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 21:17:17.389899   75746 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:17:17.390549   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 21:17:17.427087   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 21:17:17.456331   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 21:17:17.481876   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 21:17:17.511173   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1204 21:17:17.535825   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1204 21:17:17.559475   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 21:17:17.585825   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 21:17:17.611495   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 21:17:17.634425   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 21:17:16.489912   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:16.989712   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:17.489508   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:17.989874   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:18.489589   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:18.989133   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:19.489001   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:19.989088   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:20.489170   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:20.989135   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:17.566756   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:20.064248   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:17.903583   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:17.904083   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:17.904130   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:17.904041   76934 retry.go:31] will retry after 1.281115457s: waiting for machine to come up
	I1204 21:17:19.187069   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:19.187625   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:19.187648   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:19.187594   76934 retry.go:31] will retry after 2.116897616s: waiting for machine to come up
	I1204 21:17:21.307136   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:21.307702   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:21.307738   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:21.307639   76934 retry.go:31] will retry after 1.769079667s: waiting for machine to come up
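The libmachine lines for no-preload-534766 above are the usual "wait for the VM to get a DHCP lease" loop: each failed lookup of the MAC address in the libvirt network's lease table schedules another attempt after a randomized, growing delay ("will retry after ..."). A small Go sketch of that retry shape follows; the lease lookup is stubbed and the backoff numbers are illustrative, not libmachine's actual values.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupLeaseIP would query the libvirt network's DHCP leases for a MAC.
    // Stubbed here; it fails until the VM has actually booted and taken a lease.
    func lookupLeaseIP(mac string) (string, error) {
    	return "", errors.New("no lease yet")
    }

    // waitForIP retries the lease lookup with a randomized, growing delay,
    // mirroring the "will retry after ..." lines in the log above.
    func waitForIP(mac string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	wait := time.Second
    	for time.Now().Before(deadline) {
    		if ip, err := lookupLeaseIP(mac); err == nil {
    			return ip, nil
    		}
    		d := wait + time.Duration(rand.Int63n(int64(wait)))
    		fmt.Printf("will retry after %s: waiting for machine to come up\n", d)
    		time.Sleep(d)
    		wait *= 2
    	}
    	return "", fmt.Errorf("machine %s did not get an IP within %s", mac, timeout)
    }

    func main() {
    	if _, err := waitForIP("52:54:00:85:f1:d6", 30*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }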
	I1204 21:17:17.658253   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 21:17:17.680554   75746 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 21:17:17.696563   75746 ssh_runner.go:195] Run: openssl version
	I1204 21:17:17.701997   75746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 21:17:17.711909   75746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 21:17:17.716111   75746 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 21:17:17.716163   75746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 21:17:17.721829   75746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 21:17:17.732808   75746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 21:17:17.742766   75746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:17:17.746881   75746 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:17:17.746939   75746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:17:17.752221   75746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 21:17:17.761915   75746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 21:17:17.771473   75746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 21:17:17.775476   75746 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 21:17:17.775527   75746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 21:17:17.780671   75746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
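The sequence above installs extra CA certificates the way OpenSSL expects: copy the PEM into /usr/share/ca-certificates, compute its subject hash with `openssl x509 -hash -noout`, and symlink it into /etc/ssl/certs as `<hash>.0` so OpenSSL-based clients can find it. A minimal Go sketch of the same idea, shelling out to openssl; the helper name installCACert is illustrative, not minikube's API (minikube runs the equivalent commands over SSH).

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCACert copies a PEM certificate into certDir and links it into
    // linkDir as <subject-hash>.0 so OpenSSL-based clients can discover it.
    func installCACert(pemPath, certDir, linkDir string) error {
    	dst := filepath.Join(certDir, filepath.Base(pemPath))
    	data, err := os.ReadFile(pemPath)
    	if err != nil {
    		return err
    	}
    	if err := os.WriteFile(dst, data, 0644); err != nil {
    		return err
    	}
    	// openssl x509 -hash -noout prints the subject hash used for the link name.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", dst).Output()
    	if err != nil {
    		return err
    	}
    	link := filepath.Join(linkDir, strings.TrimSpace(string(out))+".0")
    	_ = os.Remove(link) // replace a stale link if present
    	return os.Symlink(dst, link)
    }

    func main() {
    	if err := installCACert("ca.pem", "/usr/share/ca-certificates", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }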
	I1204 21:17:17.790179   75746 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 21:17:17.794246   75746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 21:17:17.799753   75746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 21:17:17.805228   75746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 21:17:17.810634   75746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 21:17:17.815912   75746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 21:17:17.821125   75746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
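The `openssl x509 -checkend 86400` calls above exit non-zero if a certificate expires within the next 24 hours, which is how the existing control-plane certs are judged reusable here. The same check can be expressed with crypto/x509; the sketch below is illustrative, not minikube's code.

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // mirroring what "openssl x509 -checkend <seconds>" verifies.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	raw, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(raw)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon)
    }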
	I1204 21:17:17.826717   75746 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-439360 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.2 ClusterName:default-k8s-diff-port-439360 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.171 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:17:17.826802   75746 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 21:17:17.826852   75746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:17:17.863070   75746 cri.go:89] found id: ""
	I1204 21:17:17.863157   75746 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 21:17:17.872649   75746 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1204 21:17:17.872668   75746 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1204 21:17:17.872706   75746 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1204 21:17:17.881981   75746 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1204 21:17:17.883029   75746 kubeconfig.go:125] found "default-k8s-diff-port-439360" server: "https://192.168.50.171:8444"
	I1204 21:17:17.885369   75746 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1204 21:17:17.894730   75746 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.171
	I1204 21:17:17.894765   75746 kubeadm.go:1160] stopping kube-system containers ...
	I1204 21:17:17.894780   75746 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1204 21:17:17.894845   75746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:17:17.942493   75746 cri.go:89] found id: ""
	I1204 21:17:17.942588   75746 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1204 21:17:17.959606   75746 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:17:17.968768   75746 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:17:17.968793   75746 kubeadm.go:157] found existing configuration files:
	
	I1204 21:17:17.968850   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1204 21:17:17.977375   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:17:17.977437   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:17:17.986188   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1204 21:17:17.995409   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:17:17.995464   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:17:18.004396   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1204 21:17:18.012964   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:17:18.013033   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:17:18.021927   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1204 21:17:18.030158   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:17:18.030212   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:17:18.038704   75746 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:17:18.047518   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:18.157472   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:18.779212   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:18.992111   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:19.080195   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
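Because the config check above found no usable kubeconfigs on the node (and removed any that did not reference control-plane.minikube.internal:8444), the restart path regenerates them by running individual `kubeadm init phase` subcommands (certs, kubeconfig, kubelet-start, control-plane, etcd) against the staged /var/tmp/minikube/kubeadm.yaml instead of a full `kubeadm init`. A sketch of driving those phases from Go, with the phase list and pinned-binary PATH taken from the log and error handling simplified; this is illustrative, not minikube's kubeadm.go.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		cmd := exec.Command("kubeadm", args...)
    		// Prefer the version-pinned binaries, as in the log's PATH override.
    		cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.31.2:"+os.Getenv("PATH"))
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		if err := cmd.Run(); err != nil {
    			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
    			os.Exit(1)
    		}
    	}
    }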
	I1204 21:17:19.185206   75746 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:17:19.185296   75746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:19.686192   75746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:20.186010   75746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:20.685422   75746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:21.185548   75746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:21.221082   75746 api_server.go:72] duration metric: took 2.035875276s to wait for apiserver process to appear ...
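The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` lines are a plain poll: pgrep exits 0 once a process matching that command line exists, so the wait loop simply reruns it about every 500ms until it succeeds or a deadline passes. A hedged Go sketch of that loop; the interval and timeout values are illustrative.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForProcess polls pgrep until a process matching pattern exists or timeout elapses.
    func waitForProcess(pattern string, interval, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		// pgrep exits 0 when at least one process matches the full command line.
    		if exec.Command("pgrep", "-xnf", pattern).Run() == nil {
    			return nil
    		}
    		time.Sleep(interval)
    	}
    	return fmt.Errorf("no process matching %q within %s", pattern, timeout)
    }

    func main() {
    	if err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }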
	I1204 21:17:21.221111   75746 api_server.go:88] waiting for apiserver healthz status ...
	I1204 21:17:21.221130   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:17:21.221582   75746 api_server.go:269] stopped: https://192.168.50.171:8444/healthz: Get "https://192.168.50.171:8444/healthz": dial tcp 192.168.50.171:8444: connect: connection refused
	I1204 21:17:21.722031   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:17:24.428658   75746 api_server.go:279] https://192.168.50.171:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1204 21:17:24.428710   75746 api_server.go:103] status: https://192.168.50.171:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1204 21:17:24.428730   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:17:24.469367   75746 api_server.go:279] https://192.168.50.171:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1204 21:17:24.469398   75746 api_server.go:103] status: https://192.168.50.171:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1204 21:17:24.721854   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:17:24.728276   75746 api_server.go:279] https://192.168.50.171:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:17:24.728306   75746 api_server.go:103] status: https://192.168.50.171:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:17:25.221658   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:17:25.226223   75746 api_server.go:279] https://192.168.50.171:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:17:25.226274   75746 api_server.go:103] status: https://192.168.50.171:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:17:25.722014   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:17:25.727726   75746 api_server.go:279] https://192.168.50.171:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:17:25.727764   75746 api_server.go:103] status: https://192.168.50.171:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:17:26.221331   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:17:26.226659   75746 api_server.go:279] https://192.168.50.171:8444/healthz returned 200:
	ok
	I1204 21:17:26.234549   75746 api_server.go:141] control plane version: v1.31.2
	I1204 21:17:26.234585   75746 api_server.go:131] duration metric: took 5.013466041s to wait for apiserver health ...
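The healthz wait above shows the expected progression after a control-plane restart: the anonymous probe first gets 403 (unauthenticated requests are rejected until the RBAC bootstrap post-start hook finishes), then 500 with the still-failing post-start hooks listed, and finally 200 once every hook reports ok. The poller therefore treats anything other than a 200 as "keep waiting". A minimal sketch of such a poller, skipping TLS verification the way an anonymous probe must; this is illustrative, not minikube's api_server.go.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitHealthz polls url until it returns HTTP 200 or timeout elapses.
    func waitHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// The probe is anonymous, so it cannot verify the apiserver's serving cert.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			// 403 and 500 are expected while post-start hooks finish; log and retry.
    			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("healthz not ok within %s", timeout)
    }

    func main() {
    	if err := waitHealthz("https://192.168.50.171:8444/healthz", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }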
	I1204 21:17:26.234596   75746 cni.go:84] Creating CNI manager for ""
	I1204 21:17:26.234605   75746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:17:26.236522   75746 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 21:17:21.489414   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:21.989078   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:22.488990   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:22.989053   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:23.489867   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:23.989164   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:24.489512   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:24.989912   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:25.489849   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:25.988925   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:22.066101   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:24.067073   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:26.565954   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:23.077909   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:23.078294   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:23.078332   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:23.078234   76934 retry.go:31] will retry after 2.199950593s: waiting for machine to come up
	I1204 21:17:25.280397   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:25.280766   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:25.280794   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:25.280713   76934 retry.go:31] will retry after 3.443879968s: waiting for machine to come up
	I1204 21:17:26.237773   75746 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 21:17:26.260416   75746 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
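"Configuring bridge CNI" here amounts to writing a single conflist into /etc/cni/net.d (the 496-byte 1-k8s.conflist scp'd above). The log does not show the file's contents, so the sketch below writes a generic bridge-plus-portmap conflist of the same general shape; every field value is an assumption for illustration, not the file minikube ships.

    package main

    import "os"

    // A generic bridge CNI conflist; values are illustrative, not minikube's exact file.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
    	// Writing to /etc/cni/net.d requires root on the target node.
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
    		panic(err)
    	}
    }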
	I1204 21:17:26.287032   75746 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 21:17:26.301607   75746 system_pods.go:59] 8 kube-system pods found
	I1204 21:17:26.301658   75746 system_pods.go:61] "coredns-7c65d6cfc9-8bn89" [ff71708b-97a0-44fd-8cc4-26a36e93919a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1204 21:17:26.301671   75746 system_pods.go:61] "etcd-default-k8s-diff-port-439360" [38ae5f77-f57b-4024-a2ba-1e83e08c303b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1204 21:17:26.301682   75746 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-439360" [47616d96-a85b-47d8-a944-1da01cf7bef6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1204 21:17:26.301693   75746 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-439360" [766c13c3-3bcb-4775-80cf-608e9b207a10] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1204 21:17:26.301703   75746 system_pods.go:61] "kube-proxy-tn2xl" [8485df8b-b984-45c1-8efc-3e910028071a] Running
	I1204 21:17:26.301713   75746 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-439360" [654e74eb-878c-4680-8b68-13bb788a781e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1204 21:17:26.301725   75746 system_pods.go:61] "metrics-server-6867b74b74-lbx5p" [ca850081-0045-4637-b4ac-262ad00ba6d2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:17:26.301731   75746 system_pods.go:61] "storage-provisioner" [b2c9285c-35f2-43b4-8468-17ecef9fe8fc] Running
	I1204 21:17:26.301742   75746 system_pods.go:74] duration metric: took 14.680372ms to wait for pod list to return data ...
	I1204 21:17:26.301756   75746 node_conditions.go:102] verifying NodePressure condition ...
	I1204 21:17:26.305647   75746 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 21:17:26.305680   75746 node_conditions.go:123] node cpu capacity is 2
	I1204 21:17:26.305695   75746 node_conditions.go:105] duration metric: took 3.930691ms to run NodePressure ...
	I1204 21:17:26.305716   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:26.563972   75746 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1204 21:17:26.573253   75746 kubeadm.go:739] kubelet initialised
	I1204 21:17:26.573273   75746 kubeadm.go:740] duration metric: took 9.267719ms waiting for restarted kubelet to initialise ...
	I1204 21:17:26.573281   75746 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:17:26.577507   75746 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-8bn89" in "kube-system" namespace to be "Ready" ...
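The pod_ready wait checks each pod's Ready condition rather than just its phase, which is why the Running-but-not-Ready coredns and metrics-server pods above keep reporting "Ready":"False". A sketch of the same check with client-go; clientset construction is omitted and the helper name is illustrative.

    package podready

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // isPodReady reports whether the named pod has condition Ready=True.
    func isPodReady(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, error) {
    	pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }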
	I1204 21:17:26.489765   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:26.989037   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:27.489507   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:27.989848   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:28.489237   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:28.989067   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:29.488963   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:29.989855   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:30.489905   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:30.989109   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:29.065212   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:31.065889   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:28.726031   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:28.726400   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:28.726452   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:28.726364   76934 retry.go:31] will retry after 3.566067517s: waiting for machine to come up
	I1204 21:17:28.585182   75746 pod_ready.go:103] pod "coredns-7c65d6cfc9-8bn89" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:31.084886   75746 pod_ready.go:103] pod "coredns-7c65d6cfc9-8bn89" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:32.294584   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.295040   75012 main.go:141] libmachine: (no-preload-534766) Found IP for machine: 192.168.61.174
	I1204 21:17:32.295074   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has current primary IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.295086   75012 main.go:141] libmachine: (no-preload-534766) Reserving static IP address...
	I1204 21:17:32.295538   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "no-preload-534766", mac: "52:54:00:85:f1:d6", ip: "192.168.61.174"} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.295572   75012 main.go:141] libmachine: (no-preload-534766) Reserved static IP address: 192.168.61.174
	I1204 21:17:32.295590   75012 main.go:141] libmachine: (no-preload-534766) DBG | skip adding static IP to network mk-no-preload-534766 - found existing host DHCP lease matching {name: "no-preload-534766", mac: "52:54:00:85:f1:d6", ip: "192.168.61.174"}
	I1204 21:17:32.295607   75012 main.go:141] libmachine: (no-preload-534766) DBG | Getting to WaitForSSH function...
	I1204 21:17:32.295621   75012 main.go:141] libmachine: (no-preload-534766) Waiting for SSH to be available...
	I1204 21:17:32.297607   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.298000   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.298039   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.298174   75012 main.go:141] libmachine: (no-preload-534766) DBG | Using SSH client type: external
	I1204 21:17:32.298220   75012 main.go:141] libmachine: (no-preload-534766) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa (-rw-------)
	I1204 21:17:32.298259   75012 main.go:141] libmachine: (no-preload-534766) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 21:17:32.298278   75012 main.go:141] libmachine: (no-preload-534766) DBG | About to run SSH command:
	I1204 21:17:32.298286   75012 main.go:141] libmachine: (no-preload-534766) DBG | exit 0
	I1204 21:17:32.423157   75012 main.go:141] libmachine: (no-preload-534766) DBG | SSH cmd err, output: <nil>: 
	I1204 21:17:32.423564   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetConfigRaw
	I1204 21:17:32.424162   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetIP
	I1204 21:17:32.426685   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.427056   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.427078   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.427325   75012 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/config.json ...
	I1204 21:17:32.427589   75012 machine.go:93] provisionDockerMachine start ...
	I1204 21:17:32.427610   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:17:32.427837   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:32.430261   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.430551   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.430580   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.430724   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:32.430893   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:32.431039   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:32.431148   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:32.431327   75012 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:32.431548   75012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I1204 21:17:32.431564   75012 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 21:17:32.539672   75012 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 21:17:32.539721   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetMachineName
	I1204 21:17:32.539983   75012 buildroot.go:166] provisioning hostname "no-preload-534766"
	I1204 21:17:32.540014   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetMachineName
	I1204 21:17:32.540234   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:32.543046   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.543438   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.543488   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.543664   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:32.543853   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:32.544035   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:32.544158   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:32.544331   75012 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:32.544547   75012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I1204 21:17:32.544567   75012 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-534766 && echo "no-preload-534766" | sudo tee /etc/hostname
	I1204 21:17:32.665569   75012 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-534766
	
	I1204 21:17:32.665609   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:32.668482   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.668881   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.668908   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.669081   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:32.669297   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:32.669479   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:32.669634   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:32.669788   75012 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:32.669945   75012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I1204 21:17:32.669961   75012 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-534766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-534766/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-534766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 21:17:32.789462   75012 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 21:17:32.789510   75012 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 21:17:32.789535   75012 buildroot.go:174] setting up certificates
	I1204 21:17:32.789551   75012 provision.go:84] configureAuth start
	I1204 21:17:32.789568   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetMachineName
	I1204 21:17:32.789878   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetIP
	I1204 21:17:32.792564   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.792886   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.792919   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.793108   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:32.795197   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.795534   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.795569   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.795751   75012 provision.go:143] copyHostCerts
	I1204 21:17:32.795821   75012 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 21:17:32.795835   75012 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 21:17:32.795931   75012 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 21:17:32.796102   75012 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 21:17:32.796118   75012 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 21:17:32.796182   75012 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 21:17:32.796269   75012 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 21:17:32.796278   75012 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 21:17:32.796300   75012 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 21:17:32.796361   75012 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.no-preload-534766 san=[127.0.0.1 192.168.61.174 localhost minikube no-preload-534766]
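The "generating server cert" step issues a machine-specific serving certificate signed by the minikube CA, with the VM's IP plus the localhost/minikube/hostname names as SANs (the san=[...] list in the line above). A condensed crypto/x509 sketch of issuing such a certificate, assuming the CA cert and key are already loaded; the function and organization values are illustrative, not minikube's provision.go.

    package provision

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"time"
    )

    // issueServerCert signs a serving certificate for the given SANs with the CA.
    func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP, dnsNames []string) (certPEM, keyPEM []byte, err error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-534766"}},
    		NotBefore:    time.Now().Add(-time.Hour),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  ips,
    		DNSNames:     dnsNames,
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		return nil, nil, err
    	}
    	certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
    	keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
    	return certPEM, keyPEM, nil
    }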
	I1204 21:17:32.933050   75012 provision.go:177] copyRemoteCerts
	I1204 21:17:32.933117   75012 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 21:17:32.933146   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:32.936027   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.936384   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.936415   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.936604   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:32.936796   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:32.936952   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:32.937127   75012 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:17:33.022226   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 21:17:33.045693   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1204 21:17:33.069396   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 21:17:33.094926   75012 provision.go:87] duration metric: took 305.358907ms to configureAuth
	I1204 21:17:33.094960   75012 buildroot.go:189] setting minikube options for container-runtime
	I1204 21:17:33.095150   75012 config.go:182] Loaded profile config "no-preload-534766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:17:33.095239   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:33.098446   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.098990   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:33.099019   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.099254   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:33.099504   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:33.099655   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:33.099789   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:33.099921   75012 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:33.100074   75012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I1204 21:17:33.100091   75012 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 21:17:33.323107   75012 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 21:17:33.323144   75012 machine.go:96] duration metric: took 895.535234ms to provisionDockerMachine
	I1204 21:17:33.323159   75012 start.go:293] postStartSetup for "no-preload-534766" (driver="kvm2")
	I1204 21:17:33.323169   75012 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 21:17:33.323185   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:17:33.323531   75012 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 21:17:33.323564   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:33.326678   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.327086   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:33.327119   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.327429   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:33.327661   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:33.327827   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:33.327994   75012 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:17:33.411005   75012 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 21:17:33.415701   75012 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 21:17:33.415730   75012 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 21:17:33.415806   75012 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 21:17:33.415879   75012 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 21:17:33.415968   75012 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 21:17:33.425560   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:17:33.450288   75012 start.go:296] duration metric: took 127.116826ms for postStartSetup
	I1204 21:17:33.450330   75012 fix.go:56] duration metric: took 21.394334199s for fixHost
	I1204 21:17:33.450351   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:33.453067   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.453416   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:33.453457   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.453641   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:33.453860   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:33.454049   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:33.454228   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:33.454423   75012 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:33.454621   75012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I1204 21:17:33.454634   75012 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 21:17:33.568277   75012 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733347053.524303417
	
	I1204 21:17:33.568303   75012 fix.go:216] guest clock: 1733347053.524303417
	I1204 21:17:33.568314   75012 fix.go:229] Guest: 2024-12-04 21:17:33.524303417 +0000 UTC Remote: 2024-12-04 21:17:33.450335419 +0000 UTC m=+361.455227272 (delta=73.967998ms)
	I1204 21:17:33.568360   75012 fix.go:200] guest clock delta is within tolerance: 73.967998ms
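The clock check above is simple arithmetic: parse the guest timestamp reported by "date +%s.%N", compare it with the host-side timestamp taken around the SSH round trip, and accept the host when the absolute difference stays small. A minimal standalone sketch in Go using the two timestamps from this run (the one-second tolerance is an illustrative assumption; the log only states the delta is within tolerance):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Guest clock as returned by "date +%s.%N" over SSH (seconds.nanoseconds).
    	guest := time.Unix(1733347053, 524303417) // 2024-12-04 21:17:33.524303417 UTC
    	// Host-side timestamp taken just before the SSH round trip.
    	host := time.Date(2024, 12, 4, 21, 17, 33, 450335419, time.UTC)

    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}

    	// Illustrative tolerance; the log only says the delta is "within tolerance".
    	const tolerance = time.Second
    	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
    }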
	I1204 21:17:33.568372   75012 start.go:83] releasing machines lock for "no-preload-534766", held for 21.512415434s
	I1204 21:17:33.568406   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:17:33.568691   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetIP
	I1204 21:17:33.571152   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.571565   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:33.571594   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.571744   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:17:33.572271   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:17:33.572456   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:17:33.572549   75012 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 21:17:33.572593   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:33.572689   75012 ssh_runner.go:195] Run: cat /version.json
	I1204 21:17:33.572717   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:33.575346   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.575691   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.575743   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:33.575773   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.575888   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:33.576065   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:33.576144   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:33.576173   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.576219   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:33.576323   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:33.576391   75012 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:17:33.576501   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:33.576650   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:33.576791   75012 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:17:33.683451   75012 ssh_runner.go:195] Run: systemctl --version
	I1204 21:17:33.689041   75012 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 21:17:33.833862   75012 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 21:17:33.839637   75012 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 21:17:33.839717   75012 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 21:17:33.858207   75012 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 21:17:33.858232   75012 start.go:495] detecting cgroup driver to use...
	I1204 21:17:33.858306   75012 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 21:17:33.876794   75012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 21:17:33.891207   75012 docker.go:217] disabling cri-docker service (if available) ...
	I1204 21:17:33.891280   75012 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 21:17:33.906769   75012 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 21:17:33.926433   75012 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 21:17:34.050681   75012 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 21:17:34.229329   75012 docker.go:233] disabling docker service ...
	I1204 21:17:34.229403   75012 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 21:17:34.243833   75012 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 21:17:34.256619   75012 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 21:17:34.387148   75012 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 21:17:34.522221   75012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 21:17:34.535505   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 21:17:34.553348   75012 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 21:17:34.553423   75012 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:34.564532   75012 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 21:17:34.564595   75012 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:34.574752   75012 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:34.584434   75012 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:34.594161   75012 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 21:17:34.604306   75012 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:34.615504   75012 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:34.633185   75012 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
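The sed commands above amount to rewriting a couple of keys in the CRI-O drop-in config (pause image and cgroup manager). A rough sketch of the same rewrite as a standalone Go program, using the path and values shown in the log; this is an illustration, not minikube's own code:

    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	const conf = "/etc/crio/crio.conf.d/02-crio.conf" // drop-in path from the log

    	data, err := os.ReadFile(conf)
    	if err != nil {
    		panic(err)
    	}

    	// Pin the pause image, mirroring the first sed substitution.
    	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
    	// Force the cgroupfs cgroup manager, mirroring the second substitution.
    	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))

    	if err := os.WriteFile(conf, data, 0o644); err != nil {
    		panic(err)
    	}
    }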
	I1204 21:17:34.643936   75012 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 21:17:34.653047   75012 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 21:17:34.653122   75012 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 21:17:34.666172   75012 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
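When the bridge-netfilter sysctl is missing, the recovery shown above is to load the br_netfilter module and switch on IPv4 forwarding. A small sketch of those two steps (needs root; written here as a local illustration rather than minikube's helper):

    package main

    import (
    	"os"
    	"os/exec"
    )

    func main() {
    	// Load the bridge-netfilter module so /proc/sys/net/bridge/* exists.
    	if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
    		panic(string(out))
    	}

    	// Mirror "echo 1 > /proc/sys/net/ipv4/ip_forward".
    	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
    		panic(err)
    	}
    }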
	I1204 21:17:34.675093   75012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:17:34.805178   75012 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 21:17:34.889962   75012 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 21:17:34.890037   75012 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 21:17:34.894648   75012 start.go:563] Will wait 60s for crictl version
	I1204 21:17:34.894699   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:34.898103   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 21:17:34.937886   75012 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 21:17:34.937962   75012 ssh_runner.go:195] Run: crio --version
	I1204 21:17:34.964363   75012 ssh_runner.go:195] Run: crio --version
	I1204 21:17:34.993490   75012 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 21:17:31.489534   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:31.989033   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:32.489372   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:32.989005   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:33.489869   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:33.989236   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:34.489170   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:34.989059   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:35.489909   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:35.989870   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:33.066070   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:35.066291   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:34.994846   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetIP
	I1204 21:17:34.998235   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:34.998720   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:34.998753   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:34.999035   75012 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1204 21:17:35.003082   75012 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
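The bash one-liner above updates /etc/hosts idempotently: strip any existing host.minikube.internal entry, then append the current mapping. The same idea as a small Go helper (upsertHostsEntry is a hypothetical name used only for this illustration):

    package main

    import (
    	"os"
    	"strings"
    )

    // upsertHostsEntry is a hypothetical helper for illustration: drop any line
    // already ending in "<tab><name>", then append the fresh "<ip><tab><name>" mapping.
    func upsertHostsEntry(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	if err := upsertHostsEntry("/etc/hosts", "192.168.61.1", "host.minikube.internal"); err != nil {
    		panic(err)
    	}
    }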
	I1204 21:17:35.015163   75012 kubeadm.go:883] updating cluster {Name:no-preload-534766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:no-preload-534766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.174 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 21:17:35.015286   75012 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 21:17:35.015331   75012 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:17:35.049054   75012 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1204 21:17:35.049081   75012 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1204 21:17:35.049156   75012 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:17:35.049214   75012 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1204 21:17:35.049239   75012 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1204 21:17:35.049291   75012 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:17:35.049172   75012 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:17:35.049217   75012 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:17:35.049159   75012 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:17:35.049220   75012 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:17:35.050579   75012 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:17:35.050648   75012 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1204 21:17:35.050659   75012 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:17:35.050667   75012 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:17:35.050676   75012 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1204 21:17:35.050741   75012 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:17:35.050757   75012 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:17:35.050874   75012 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:17:35.203766   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:17:35.211645   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1204 21:17:35.220184   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:17:35.223055   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:17:35.227332   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:17:35.232234   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1204 21:17:35.242447   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:17:35.298624   75012 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1204 21:17:35.298688   75012 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:17:35.298744   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:35.319397   75012 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1204 21:17:35.319447   75012 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1204 21:17:35.319501   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:35.390893   75012 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1204 21:17:35.390915   75012 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1204 21:17:35.390947   75012 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:17:35.390948   75012 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:17:35.390956   75012 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1204 21:17:35.390979   75012 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:17:35.390999   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:35.391022   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:35.390999   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:35.484125   75012 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1204 21:17:35.484169   75012 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:17:35.484201   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:17:35.484217   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:35.484271   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1204 21:17:35.484305   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:17:35.484330   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:17:35.484396   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:17:35.591277   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:17:35.591397   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:17:35.591450   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:17:35.595733   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1204 21:17:35.595762   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:17:35.595916   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:17:35.723710   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:17:35.723734   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:17:35.723780   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:17:35.723829   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1204 21:17:35.723876   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:17:35.726724   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:17:35.825238   75012 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1204 21:17:35.825353   75012 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1204 21:17:35.852024   75012 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1204 21:17:35.852035   75012 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1204 21:17:35.852146   75012 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1204 21:17:35.852173   75012 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1204 21:17:35.853696   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:17:35.853769   75012 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1204 21:17:35.853821   75012 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1204 21:17:35.853832   75012 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1204 21:17:35.853856   75012 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1204 21:17:35.853865   75012 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1204 21:17:35.853776   75012 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1204 21:17:35.853945   75012 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1204 21:17:35.857231   75012 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1204 21:17:35.858662   75012 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1204 21:17:36.032100   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:17:33.087169   75746 pod_ready.go:93] pod "coredns-7c65d6cfc9-8bn89" in "kube-system" namespace has status "Ready":"True"
	I1204 21:17:33.087197   75746 pod_ready.go:82] duration metric: took 6.509664084s for pod "coredns-7c65d6cfc9-8bn89" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:33.087211   75746 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:33.093283   75746 pod_ready.go:93] pod "etcd-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:17:33.093303   75746 pod_ready.go:82] duration metric: took 6.085079ms for pod "etcd-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:33.093312   75746 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:33.600666   75746 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:17:33.600693   75746 pod_ready.go:82] duration metric: took 507.373672ms for pod "kube-apiserver-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:33.600709   75746 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:35.607575   75746 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:37.608228   75746 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:36.489267   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:36.988973   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:37.489585   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:37.989309   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:38.489371   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:38.989360   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:39.489789   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:39.988900   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:40.489286   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:40.989034   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:37.564796   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:39.566599   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:38.344308   75012 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.490341001s)
	I1204 21:17:38.344349   75012 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1204 21:17:38.344365   75012 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.490487312s)
	I1204 21:17:38.344390   75012 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1204 21:17:38.344412   75012 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1204 21:17:38.344420   75012 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.490542246s)
	I1204 21:17:38.344448   75012 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1204 21:17:38.344455   75012 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1204 21:17:38.344374   75012 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2: (2.490653029s)
	I1204 21:17:38.344496   75012 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1204 21:17:38.344525   75012 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.312392686s)
	I1204 21:17:38.344565   75012 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1204 21:17:38.344602   75012 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:17:38.344638   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:38.344575   75012 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1204 21:17:38.350960   75012 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1204 21:17:40.219155   75012 ssh_runner.go:235] Completed: which crictl: (1.874490212s)
	I1204 21:17:40.219189   75012 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.874713743s)
	I1204 21:17:40.219214   75012 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1204 21:17:40.219246   75012 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1204 21:17:40.219318   75012 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1204 21:17:40.219273   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:17:40.254321   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:17:41.684466   75012 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.465119385s)
	I1204 21:17:41.684505   75012 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1204 21:17:41.684528   75012 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1204 21:17:41.684528   75012 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.430174579s)
	I1204 21:17:41.684583   75012 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1204 21:17:41.684591   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:17:41.722891   75012 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1204 21:17:41.723015   75012 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1204 21:17:39.608290   75746 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:40.107708   75746 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:17:40.107734   75746 pod_ready.go:82] duration metric: took 6.507016831s for pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:40.107748   75746 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-tn2xl" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:40.112808   75746 pod_ready.go:93] pod "kube-proxy-tn2xl" in "kube-system" namespace has status "Ready":"True"
	I1204 21:17:40.112828   75746 pod_ready.go:82] duration metric: took 5.070603ms for pod "kube-proxy-tn2xl" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:40.112839   75746 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:40.117288   75746 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:17:40.117310   75746 pod_ready.go:82] duration metric: took 4.462772ms for pod "kube-scheduler-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:40.117322   75746 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:42.124203   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:41.489491   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:41.989889   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:42.489098   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:42.988954   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:43.489592   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:43.989849   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:44.489924   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:44.989734   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:45.489097   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:45.988947   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:42.065722   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:44.564691   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:46.565747   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:45.306832   75012 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.583796373s)
	I1204 21:17:45.306872   75012 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1204 21:17:45.306945   75012 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.622338759s)
	I1204 21:17:45.306971   75012 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1204 21:17:45.307000   75012 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1204 21:17:45.307064   75012 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1204 21:17:44.624419   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:47.123760   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:46.489924   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:46.989100   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:47.489931   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:47.988925   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:48.489244   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:48.989937   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:49.489048   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:49.989699   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:50.489518   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:50.989032   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:49.065268   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:51.565541   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:47.163771   75012 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.856684542s)
	I1204 21:17:47.163798   75012 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1204 21:17:47.163823   75012 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1204 21:17:47.163885   75012 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1204 21:17:49.222699   75012 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.058784634s)
	I1204 21:17:49.222741   75012 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1204 21:17:49.222773   75012 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1204 21:17:49.222826   75012 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1204 21:17:49.870242   75012 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1204 21:17:49.870292   75012 cache_images.go:123] Successfully loaded all cached images
	I1204 21:17:49.870302   75012 cache_images.go:92] duration metric: took 14.821207564s to LoadCachedImages
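Each image in the list above goes through the same cycle: inspect the runtime for the image, remove the stale reference with crictl when the expected content is not there, then load the cached tarball with podman. A compressed sketch of one iteration of that cycle (the presence check is simplified to inspect success/failure; the logged flow actually compares image hashes):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // loadCachedImage is an illustrative sketch of the per-image cycle in the log:
    // check the runtime, drop a stale tag with crictl, then podman-load the tarball.
    func loadCachedImage(ref, tarball string) error {
    	// Simplified presence check: if inspect succeeds, treat the image as loaded.
    	if err := exec.Command("sudo", "podman", "image", "inspect", ref).Run(); err == nil {
    		return nil
    	}
    	// Remove any stale reference so the load results in a clean tag.
    	_ = exec.Command("sudo", "crictl", "rmi", ref).Run()
    	// Load the cached tarball that was copied onto the node.
    	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
    	}
    	return nil
    }

    func main() {
    	// Reference and path taken from the log lines above.
    	err := loadCachedImage("registry.k8s.io/kube-apiserver:v1.31.2",
    		"/var/lib/minikube/images/kube-apiserver_v1.31.2")
    	if err != nil {
    		panic(err)
    	}
    }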
	I1204 21:17:49.870320   75012 kubeadm.go:934] updating node { 192.168.61.174 8443 v1.31.2 crio true true} ...
	I1204 21:17:49.870483   75012 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-534766 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-534766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 21:17:49.870571   75012 ssh_runner.go:195] Run: crio config
	I1204 21:17:49.925276   75012 cni.go:84] Creating CNI manager for ""
	I1204 21:17:49.925298   75012 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:17:49.925308   75012 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 21:17:49.925326   75012 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.174 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-534766 NodeName:no-preload-534766 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 21:17:49.925440   75012 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-534766"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.174"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.174"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1204 21:17:49.925505   75012 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 21:17:49.934691   75012 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 21:17:49.934766   75012 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 21:17:49.942998   75012 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1204 21:17:49.958605   75012 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 21:17:49.973770   75012 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1204 21:17:49.989037   75012 ssh_runner.go:195] Run: grep 192.168.61.174	control-plane.minikube.internal$ /etc/hosts
	I1204 21:17:49.992788   75012 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:17:50.004011   75012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:17:50.118056   75012 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:17:50.136689   75012 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766 for IP: 192.168.61.174
	I1204 21:17:50.136717   75012 certs.go:194] generating shared ca certs ...
	I1204 21:17:50.136739   75012 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:17:50.136937   75012 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 21:17:50.136992   75012 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 21:17:50.137007   75012 certs.go:256] generating profile certs ...
	I1204 21:17:50.137129   75012 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/client.key
	I1204 21:17:50.137230   75012 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/apiserver.key.dbe51058
	I1204 21:17:50.137275   75012 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/proxy-client.key
	I1204 21:17:50.137393   75012 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 21:17:50.137422   75012 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 21:17:50.137433   75012 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 21:17:50.137463   75012 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 21:17:50.137484   75012 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 21:17:50.137505   75012 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 21:17:50.137548   75012 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:17:50.138146   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 21:17:50.168457   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 21:17:50.203050   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 21:17:50.227957   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 21:17:50.255463   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1204 21:17:50.283905   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1204 21:17:50.306300   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 21:17:50.328965   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 21:17:50.352366   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 21:17:50.373857   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 21:17:50.396406   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 21:17:50.417969   75012 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 21:17:50.433588   75012 ssh_runner.go:195] Run: openssl version
	I1204 21:17:50.438874   75012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 21:17:50.448896   75012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 21:17:50.453227   75012 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 21:17:50.453301   75012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 21:17:50.458793   75012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 21:17:50.468569   75012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 21:17:50.478055   75012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:17:50.482258   75012 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:17:50.482310   75012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:17:50.487402   75012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 21:17:50.500597   75012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 21:17:50.511367   75012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 21:17:50.516355   75012 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 21:17:50.516415   75012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 21:17:50.522233   75012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 21:17:50.532163   75012 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 21:17:50.536644   75012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 21:17:50.542343   75012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 21:17:50.547915   75012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 21:17:50.553464   75012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 21:17:50.559223   75012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 21:17:50.566119   75012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1204 21:17:50.571988   75012 kubeadm.go:392] StartCluster: {Name:no-preload-534766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-534766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.174 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:17:50.572068   75012 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 21:17:50.572135   75012 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:17:50.608793   75012 cri.go:89] found id: ""
	I1204 21:17:50.608879   75012 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 21:17:50.620108   75012 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1204 21:17:50.620133   75012 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1204 21:17:50.620210   75012 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1204 21:17:50.629506   75012 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1204 21:17:50.630887   75012 kubeconfig.go:125] found "no-preload-534766" server: "https://192.168.61.174:8443"
	I1204 21:17:50.633122   75012 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1204 21:17:50.642414   75012 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.174
	I1204 21:17:50.642453   75012 kubeadm.go:1160] stopping kube-system containers ...
	I1204 21:17:50.642468   75012 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1204 21:17:50.642533   75012 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:17:50.681325   75012 cri.go:89] found id: ""
	I1204 21:17:50.681393   75012 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1204 21:17:50.699577   75012 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:17:50.709090   75012 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:17:50.709108   75012 kubeadm.go:157] found existing configuration files:
	
	I1204 21:17:50.709152   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:17:50.717901   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:17:50.717983   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:17:50.727175   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:17:50.735929   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:17:50.736002   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:17:50.744954   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:17:50.753257   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:17:50.753306   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:17:50.762163   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:17:50.770113   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:17:50.770163   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:17:50.778937   75012 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:17:50.787853   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:50.902775   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:51.481273   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:51.689126   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:51.770117   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:51.859903   75012 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:17:51.859993   75012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:49.623769   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:51.624431   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:51.489287   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:51.989952   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:52.489428   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:52.988991   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:53.489424   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:53.989785   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:54.488957   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:54.989777   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:55.489738   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:55.989144   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:52.360655   75012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:52.860583   75012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:52.877280   75012 api_server.go:72] duration metric: took 1.017376864s to wait for apiserver process to appear ...
	I1204 21:17:52.877337   75012 api_server.go:88] waiting for apiserver healthz status ...
	I1204 21:17:52.877365   75012 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I1204 21:17:55.649083   75012 api_server.go:279] https://192.168.61.174:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:17:55.649115   75012 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:17:55.649144   75012 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I1204 21:17:55.655316   75012 api_server.go:279] https://192.168.61.174:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:17:55.655347   75012 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:17:55.877569   75012 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I1204 21:17:55.882206   75012 api_server.go:279] https://192.168.61.174:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:17:55.882235   75012 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:17:56.377778   75012 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I1204 21:17:56.385077   75012 api_server.go:279] https://192.168.61.174:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:17:56.385106   75012 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:17:56.877526   75012 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I1204 21:17:56.882072   75012 api_server.go:279] https://192.168.61.174:8443/healthz returned 200:
	ok
	I1204 21:17:56.890468   75012 api_server.go:141] control plane version: v1.31.2
	I1204 21:17:56.890494   75012 api_server.go:131] duration metric: took 4.013149625s to wait for apiserver health ...
	I1204 21:17:56.890503   75012 cni.go:84] Creating CNI manager for ""
	I1204 21:17:56.890509   75012 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:17:56.892501   75012 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 21:17:53.565824   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:56.064759   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:56.893859   75012 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 21:17:56.903947   75012 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1204 21:17:56.946638   75012 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 21:17:56.965137   75012 system_pods.go:59] 8 kube-system pods found
	I1204 21:17:56.965182   75012 system_pods.go:61] "coredns-7c65d6cfc9-kz2h6" [cf1cadfd-b230-48e0-8b3a-e082fed911a8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1204 21:17:56.965192   75012 system_pods.go:61] "etcd-no-preload-534766" [4150ee73-7ae8-40c0-a259-87375d6e809c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1204 21:17:56.965206   75012 system_pods.go:61] "kube-apiserver-no-preload-534766" [28c85f04-e634-48d2-a996-a1cb3ffb18cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1204 21:17:56.965215   75012 system_pods.go:61] "kube-controller-manager-no-preload-534766" [237872b9-1c2a-4c3e-b26a-d2581d08c936] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1204 21:17:56.965223   75012 system_pods.go:61] "kube-proxy-zb946" [871adaff-d1f6-4f8a-a7db-ec3f861bd9e3] Running
	I1204 21:17:56.965232   75012 system_pods.go:61] "kube-scheduler-no-preload-534766" [b00444c4-8f8e-4c76-a74f-9a57c91cb10d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1204 21:17:56.965240   75012 system_pods.go:61] "metrics-server-6867b74b74-wl8gw" [d7942614-93b1-4707-b471-a0dd38c96c54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:17:56.965246   75012 system_pods.go:61] "storage-provisioner" [062f6e56-6b2d-4ac4-acfd-881ff5171396] Running
	I1204 21:17:56.965254   75012 system_pods.go:74] duration metric: took 18.584748ms to wait for pod list to return data ...
	I1204 21:17:56.965269   75012 node_conditions.go:102] verifying NodePressure condition ...
	I1204 21:17:56.969187   75012 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 21:17:56.969221   75012 node_conditions.go:123] node cpu capacity is 2
	I1204 21:17:56.969232   75012 node_conditions.go:105] duration metric: took 3.958803ms to run NodePressure ...
	I1204 21:17:56.969248   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:53.625414   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:56.123857   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:56.489461   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:56.988952   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:57.489626   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:57.989474   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:58.489775   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:58.989218   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:59.489030   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:59.989163   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:00.489738   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:00.989048   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:00.989130   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:01.025049   75464 cri.go:89] found id: ""
	I1204 21:18:01.025100   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.025112   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:01.025124   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:01.025188   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:01.056420   75464 cri.go:89] found id: ""
	I1204 21:18:01.056444   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.056451   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:01.056456   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:01.056512   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:01.090847   75464 cri.go:89] found id: ""
	I1204 21:18:01.090872   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.090882   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:01.090889   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:01.090948   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:01.125984   75464 cri.go:89] found id: ""
	I1204 21:18:01.126013   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.126022   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:01.126030   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:01.126088   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:01.160828   75464 cri.go:89] found id: ""
	I1204 21:18:01.160856   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.160866   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:01.160873   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:01.160930   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:01.192601   75464 cri.go:89] found id: ""
	I1204 21:18:01.192629   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.192641   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:01.192649   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:01.192712   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:01.223093   75464 cri.go:89] found id: ""
	I1204 21:18:01.223119   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.223129   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:01.223136   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:01.223199   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:01.252668   75464 cri.go:89] found id: ""
	I1204 21:18:01.252692   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.252702   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:01.252713   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:01.252733   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 21:17:58.064895   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:00.065648   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:57.242821   75012 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1204 21:17:57.246805   75012 kubeadm.go:739] kubelet initialised
	I1204 21:17:57.246823   75012 kubeadm.go:740] duration metric: took 3.979496ms waiting for restarted kubelet to initialise ...
	I1204 21:17:57.246831   75012 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:17:57.250966   75012 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:57.254870   75012 pod_ready.go:98] node "no-preload-534766" hosting pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.254889   75012 pod_ready.go:82] duration metric: took 3.903445ms for pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace to be "Ready" ...
	E1204 21:17:57.254897   75012 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-534766" hosting pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.254903   75012 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:57.258465   75012 pod_ready.go:98] node "no-preload-534766" hosting pod "etcd-no-preload-534766" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.258484   75012 pod_ready.go:82] duration metric: took 3.574981ms for pod "etcd-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	E1204 21:17:57.258497   75012 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-534766" hosting pod "etcd-no-preload-534766" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.258503   75012 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:57.261881   75012 pod_ready.go:98] node "no-preload-534766" hosting pod "kube-apiserver-no-preload-534766" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.261896   75012 pod_ready.go:82] duration metric: took 3.388572ms for pod "kube-apiserver-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	E1204 21:17:57.261903   75012 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-534766" hosting pod "kube-apiserver-no-preload-534766" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.261908   75012 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:57.349579   75012 pod_ready.go:98] node "no-preload-534766" hosting pod "kube-controller-manager-no-preload-534766" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.349603   75012 pod_ready.go:82] duration metric: took 87.687706ms for pod "kube-controller-manager-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	E1204 21:17:57.349611   75012 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-534766" hosting pod "kube-controller-manager-no-preload-534766" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.349617   75012 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-zb946" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:57.751064   75012 pod_ready.go:93] pod "kube-proxy-zb946" in "kube-system" namespace has status "Ready":"True"
	I1204 21:17:57.751088   75012 pod_ready.go:82] duration metric: took 401.46314ms for pod "kube-proxy-zb946" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:57.751099   75012 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:59.756578   75012 pod_ready.go:103] pod "kube-scheduler-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:01.759056   75012 pod_ready.go:103] pod "kube-scheduler-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:58.125703   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:00.622314   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:02.624045   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	W1204 21:18:01.365301   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:01.365334   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:01.365348   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:01.440474   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:01.440503   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:01.475783   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:01.475815   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:01.525762   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:01.525791   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:04.038867   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:04.050789   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:04.050856   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:04.083319   75464 cri.go:89] found id: ""
	I1204 21:18:04.083345   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.083354   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:04.083360   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:04.083442   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:04.119555   75464 cri.go:89] found id: ""
	I1204 21:18:04.119584   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.119595   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:04.119602   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:04.119661   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:04.152499   75464 cri.go:89] found id: ""
	I1204 21:18:04.152529   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.152538   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:04.152544   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:04.152592   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:04.184678   75464 cri.go:89] found id: ""
	I1204 21:18:04.184705   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.184716   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:04.184724   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:04.184784   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:04.220006   75464 cri.go:89] found id: ""
	I1204 21:18:04.220038   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.220050   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:04.220058   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:04.220121   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:04.254841   75464 cri.go:89] found id: ""
	I1204 21:18:04.254871   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.254880   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:04.254887   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:04.254954   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:04.289126   75464 cri.go:89] found id: ""
	I1204 21:18:04.289163   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.289175   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:04.289189   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:04.289255   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:04.323036   75464 cri.go:89] found id: ""
	I1204 21:18:04.323067   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.323077   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:04.323089   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:04.323103   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:04.371548   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:04.371585   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:04.384651   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:04.384681   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:04.452247   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:04.452273   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:04.452288   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:04.527924   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:04.527965   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:02.564676   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:04.566721   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:04.260269   75012 pod_ready.go:103] pod "kube-scheduler-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:06.757334   75012 pod_ready.go:103] pod "kube-scheduler-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:05.123833   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:07.124130   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:07.100780   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:07.113549   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:07.113617   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:07.150930   75464 cri.go:89] found id: ""
	I1204 21:18:07.150964   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.150976   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:07.150984   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:07.151046   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:07.185223   75464 cri.go:89] found id: ""
	I1204 21:18:07.185254   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.185264   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:07.185271   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:07.185332   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:07.222423   75464 cri.go:89] found id: ""
	I1204 21:18:07.222449   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.222458   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:07.222463   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:07.222526   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:07.258926   75464 cri.go:89] found id: ""
	I1204 21:18:07.258952   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.258960   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:07.258966   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:07.259022   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:07.292424   75464 cri.go:89] found id: ""
	I1204 21:18:07.292467   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.292478   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:07.292505   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:07.292566   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:07.323354   75464 cri.go:89] found id: ""
	I1204 21:18:07.323397   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.323409   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:07.323416   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:07.323462   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:07.352085   75464 cri.go:89] found id: ""
	I1204 21:18:07.352106   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.352114   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:07.352121   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:07.352177   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:07.383335   75464 cri.go:89] found id: ""
	I1204 21:18:07.383364   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.383386   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:07.383397   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:07.383410   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:07.469409   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:07.469440   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:07.508442   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:07.508468   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:07.555103   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:07.555133   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:07.568938   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:07.568965   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:07.632515   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:10.133153   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:10.146482   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:10.146542   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:10.178660   75464 cri.go:89] found id: ""
	I1204 21:18:10.178694   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.178706   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:10.178714   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:10.178768   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:10.207815   75464 cri.go:89] found id: ""
	I1204 21:18:10.207836   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.207843   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:10.207849   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:10.207893   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:10.246253   75464 cri.go:89] found id: ""
	I1204 21:18:10.246283   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.246300   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:10.246307   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:10.246371   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:10.296820   75464 cri.go:89] found id: ""
	I1204 21:18:10.296862   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.296873   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:10.296881   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:10.296941   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:10.341855   75464 cri.go:89] found id: ""
	I1204 21:18:10.341885   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.341896   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:10.341904   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:10.341977   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:10.370283   75464 cri.go:89] found id: ""
	I1204 21:18:10.370311   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.370319   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:10.370324   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:10.370382   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:10.401149   75464 cri.go:89] found id: ""
	I1204 21:18:10.401177   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.401187   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:10.401195   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:10.401249   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:10.436026   75464 cri.go:89] found id: ""
	I1204 21:18:10.436058   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.436068   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:10.436082   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:10.436096   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:10.488499   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:10.488534   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:10.502316   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:10.502345   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:10.577694   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:10.577727   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:10.577754   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:10.657801   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:10.657835   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:07.064613   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:09.564473   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:09.257032   75012 pod_ready.go:103] pod "kube-scheduler-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:11.758214   75012 pod_ready.go:93] pod "kube-scheduler-no-preload-534766" in "kube-system" namespace has status "Ready":"True"
	I1204 21:18:11.758241   75012 pod_ready.go:82] duration metric: took 14.007134999s for pod "kube-scheduler-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:18:11.758255   75012 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace to be "Ready" ...
	I1204 21:18:09.623451   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:11.624433   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:13.195044   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:13.208486   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:13.208540   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:13.250608   75464 cri.go:89] found id: ""
	I1204 21:18:13.250632   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.250643   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:13.250650   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:13.250710   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:13.280897   75464 cri.go:89] found id: ""
	I1204 21:18:13.280922   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.280933   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:13.280940   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:13.281047   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:13.311664   75464 cri.go:89] found id: ""
	I1204 21:18:13.311686   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.311696   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:13.311702   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:13.311759   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:13.341158   75464 cri.go:89] found id: ""
	I1204 21:18:13.341187   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.341199   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:13.341206   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:13.341261   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:13.371887   75464 cri.go:89] found id: ""
	I1204 21:18:13.371908   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.371915   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:13.371922   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:13.371968   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:13.403036   75464 cri.go:89] found id: ""
	I1204 21:18:13.403064   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.403072   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:13.403077   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:13.403123   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:13.440657   75464 cri.go:89] found id: ""
	I1204 21:18:13.440682   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.440689   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:13.440694   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:13.440738   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:13.478384   75464 cri.go:89] found id: ""
	I1204 21:18:13.478413   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.478421   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:13.478430   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:13.478442   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:13.533364   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:13.533405   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:13.546299   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:13.546338   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:13.617067   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:13.617092   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:13.617108   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:13.697323   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:13.697355   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:16.235494   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:16.248551   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:16.248615   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:16.286875   75464 cri.go:89] found id: ""
	I1204 21:18:16.286904   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.286915   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:16.286922   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:16.286986   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:12.064198   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:14.565965   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:13.764062   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:15.764749   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:14.122381   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:16.123985   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:16.325441   75464 cri.go:89] found id: ""
	I1204 21:18:16.325469   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.325481   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:16.325486   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:16.325544   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:16.361896   75464 cri.go:89] found id: ""
	I1204 21:18:16.361919   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.361926   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:16.361932   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:16.361994   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:16.394290   75464 cri.go:89] found id: ""
	I1204 21:18:16.394315   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.394322   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:16.394328   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:16.394377   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:16.429685   75464 cri.go:89] found id: ""
	I1204 21:18:16.429713   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.429724   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:16.429731   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:16.429807   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:16.459942   75464 cri.go:89] found id: ""
	I1204 21:18:16.459982   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.459993   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:16.460000   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:16.460065   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:16.488957   75464 cri.go:89] found id: ""
	I1204 21:18:16.488982   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.488992   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:16.489005   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:16.489060   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:16.518311   75464 cri.go:89] found id: ""
	I1204 21:18:16.518346   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.518357   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:16.518369   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:16.518382   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:16.569753   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:16.569784   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:16.583689   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:16.583721   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:16.650086   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:16.650107   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:16.650120   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:16.732000   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:16.732046   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:19.270288   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:19.283231   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:19.283322   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:19.320680   75464 cri.go:89] found id: ""
	I1204 21:18:19.320712   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.320724   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:19.320732   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:19.320799   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:19.358318   75464 cri.go:89] found id: ""
	I1204 21:18:19.358352   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.358363   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:19.358370   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:19.358431   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:19.391181   75464 cri.go:89] found id: ""
	I1204 21:18:19.391208   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.391218   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:19.391224   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:19.391285   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:19.422319   75464 cri.go:89] found id: ""
	I1204 21:18:19.422345   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.422355   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:19.422362   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:19.422422   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:19.452909   75464 cri.go:89] found id: ""
	I1204 21:18:19.452941   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.452952   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:19.452960   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:19.453017   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:19.483548   75464 cri.go:89] found id: ""
	I1204 21:18:19.483582   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.483592   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:19.483600   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:19.483666   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:19.518776   75464 cri.go:89] found id: ""
	I1204 21:18:19.518810   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.518821   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:19.518828   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:19.518889   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:19.552455   75464 cri.go:89] found id: ""
	I1204 21:18:19.552487   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.552500   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:19.552513   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:19.552527   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:19.567348   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:19.567397   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:19.640782   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:19.640803   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:19.640815   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:19.721369   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:19.721400   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:19.765558   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:19.765590   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:17.065011   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:19.065236   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:21.565950   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:17.764887   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:19.766264   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:18.125223   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:20.623183   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:22.623901   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:22.315311   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:22.327974   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:22.328053   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:22.361960   75464 cri.go:89] found id: ""
	I1204 21:18:22.361984   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.361995   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:22.362002   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:22.362056   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:22.393481   75464 cri.go:89] found id: ""
	I1204 21:18:22.393506   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.393514   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:22.393520   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:22.393570   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:22.424233   75464 cri.go:89] found id: ""
	I1204 21:18:22.424261   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.424273   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:22.424280   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:22.424335   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:22.454307   75464 cri.go:89] found id: ""
	I1204 21:18:22.454335   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.454346   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:22.454354   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:22.454405   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:22.485880   75464 cri.go:89] found id: ""
	I1204 21:18:22.485905   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.485913   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:22.485918   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:22.485971   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:22.522382   75464 cri.go:89] found id: ""
	I1204 21:18:22.522408   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.522416   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:22.522421   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:22.522475   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:22.555179   75464 cri.go:89] found id: ""
	I1204 21:18:22.555202   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.555210   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:22.555215   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:22.555266   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:22.588587   75464 cri.go:89] found id: ""
	I1204 21:18:22.588608   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.588615   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:22.588622   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:22.588632   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:22.640369   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:22.640393   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:22.652322   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:22.652342   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:22.716150   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:22.716175   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:22.716195   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:22.792723   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:22.792749   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:25.329963   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:25.342514   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:25.342563   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:25.374518   75464 cri.go:89] found id: ""
	I1204 21:18:25.374543   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.374555   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:25.374562   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:25.374620   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:25.405479   75464 cri.go:89] found id: ""
	I1204 21:18:25.405520   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.405531   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:25.405538   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:25.405601   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:25.436844   75464 cri.go:89] found id: ""
	I1204 21:18:25.436867   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.436877   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:25.436884   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:25.436943   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:25.468887   75464 cri.go:89] found id: ""
	I1204 21:18:25.468910   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.468917   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:25.468923   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:25.468977   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:25.504326   75464 cri.go:89] found id: ""
	I1204 21:18:25.504348   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.504355   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:25.504361   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:25.504410   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:25.542531   75464 cri.go:89] found id: ""
	I1204 21:18:25.542552   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.542560   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:25.542566   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:25.542626   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:25.576293   75464 cri.go:89] found id: ""
	I1204 21:18:25.576316   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.576330   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:25.576338   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:25.576389   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:25.609662   75464 cri.go:89] found id: ""
	I1204 21:18:25.609692   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.609700   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:25.609708   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:25.609724   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:25.665411   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:25.665446   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:25.680149   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:25.680183   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:25.751100   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:25.751123   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:25.751140   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:25.838913   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:25.838952   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:24.065487   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:26.565568   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:22.264581   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:24.268000   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:26.764294   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:25.123981   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:27.125094   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:28.379209   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:28.392708   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:28.392771   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:28.426519   75464 cri.go:89] found id: ""
	I1204 21:18:28.426547   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.426555   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:28.426561   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:28.426608   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:28.459648   75464 cri.go:89] found id: ""
	I1204 21:18:28.459678   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.459689   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:28.459696   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:28.459757   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:28.489982   75464 cri.go:89] found id: ""
	I1204 21:18:28.490010   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.490021   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:28.490029   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:28.490101   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:28.525203   75464 cri.go:89] found id: ""
	I1204 21:18:28.525228   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.525235   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:28.525240   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:28.525285   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:28.554808   75464 cri.go:89] found id: ""
	I1204 21:18:28.554836   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.554845   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:28.554850   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:28.554911   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:28.586406   75464 cri.go:89] found id: ""
	I1204 21:18:28.586427   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.586434   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:28.586441   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:28.586484   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:28.622419   75464 cri.go:89] found id: ""
	I1204 21:18:28.622444   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.622455   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:28.622462   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:28.622520   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:28.651604   75464 cri.go:89] found id: ""
	I1204 21:18:28.651625   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.651632   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:28.651639   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:28.651654   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:28.714430   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:28.714458   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:28.714473   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:28.791444   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:28.791472   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:28.827808   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:28.827831   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:28.875308   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:28.875336   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:28.566277   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:30.566465   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:28.765108   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:30.765282   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:29.624139   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:31.624944   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:31.388578   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:31.401539   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:31.401598   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:31.443462   75464 cri.go:89] found id: ""
	I1204 21:18:31.443496   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.443504   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:31.443509   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:31.443557   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:31.482522   75464 cri.go:89] found id: ""
	I1204 21:18:31.482548   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.482559   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:31.482568   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:31.482623   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:31.520579   75464 cri.go:89] found id: ""
	I1204 21:18:31.520609   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.520618   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:31.520624   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:31.520684   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:31.559637   75464 cri.go:89] found id: ""
	I1204 21:18:31.559683   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.559692   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:31.559699   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:31.559761   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:31.592633   75464 cri.go:89] found id: ""
	I1204 21:18:31.592665   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.592677   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:31.592685   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:31.592748   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:31.627002   75464 cri.go:89] found id: ""
	I1204 21:18:31.627022   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.627029   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:31.627035   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:31.627083   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:31.663333   75464 cri.go:89] found id: ""
	I1204 21:18:31.663380   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.663392   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:31.663400   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:31.663465   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:31.697813   75464 cri.go:89] found id: ""
	I1204 21:18:31.697848   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.697860   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:31.697869   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:31.697882   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:31.747666   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:31.747701   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:31.761371   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:31.761402   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:31.831098   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:31.831123   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:31.831143   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:31.912161   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:31.912199   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:34.450322   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:34.463442   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:34.463503   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:34.497333   75464 cri.go:89] found id: ""
	I1204 21:18:34.497363   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.497371   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:34.497377   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:34.497449   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:34.531057   75464 cri.go:89] found id: ""
	I1204 21:18:34.531093   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.531105   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:34.531113   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:34.531180   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:34.566899   75464 cri.go:89] found id: ""
	I1204 21:18:34.566926   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.566934   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:34.566940   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:34.566989   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:34.600393   75464 cri.go:89] found id: ""
	I1204 21:18:34.600422   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.600430   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:34.600436   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:34.600503   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:34.636027   75464 cri.go:89] found id: ""
	I1204 21:18:34.636060   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.636072   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:34.636082   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:34.636159   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:34.670624   75464 cri.go:89] found id: ""
	I1204 21:18:34.670650   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.670658   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:34.670666   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:34.670727   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:34.702209   75464 cri.go:89] found id: ""
	I1204 21:18:34.702241   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.702253   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:34.702261   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:34.702330   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:34.733135   75464 cri.go:89] found id: ""
	I1204 21:18:34.733156   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.733174   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:34.733191   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:34.733207   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:34.768969   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:34.768993   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:34.816493   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:34.816531   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:34.829450   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:34.829476   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:34.897968   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:34.898000   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:34.898018   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:32.566614   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:35.064944   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:33.264871   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:35.265285   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:33.625223   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:36.123006   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:37.477937   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:37.491778   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:37.491856   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:37.529962   75464 cri.go:89] found id: ""
	I1204 21:18:37.529995   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.530005   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:37.530013   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:37.530081   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:37.564769   75464 cri.go:89] found id: ""
	I1204 21:18:37.564794   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.564805   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:37.564813   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:37.564879   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:37.601680   75464 cri.go:89] found id: ""
	I1204 21:18:37.601708   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.601720   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:37.601726   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:37.601796   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:37.637221   75464 cri.go:89] found id: ""
	I1204 21:18:37.637247   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.637255   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:37.637261   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:37.637326   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:37.673103   75464 cri.go:89] found id: ""
	I1204 21:18:37.673127   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.673135   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:37.673140   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:37.673200   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:37.710108   75464 cri.go:89] found id: ""
	I1204 21:18:37.710134   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.710147   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:37.710154   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:37.710216   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:37.741506   75464 cri.go:89] found id: ""
	I1204 21:18:37.741530   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.741538   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:37.741544   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:37.741596   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:37.775320   75464 cri.go:89] found id: ""
	I1204 21:18:37.775343   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.775350   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:37.775358   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:37.775389   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:37.839591   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:37.839610   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:37.839633   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:37.915174   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:37.915216   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:37.958900   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:37.958930   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:38.010383   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:38.010418   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:40.525306   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:40.537648   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:40.537706   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:40.573932   75464 cri.go:89] found id: ""
	I1204 21:18:40.573962   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.573973   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:40.573980   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:40.574041   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:40.603917   75464 cri.go:89] found id: ""
	I1204 21:18:40.603943   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.603952   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:40.603961   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:40.604018   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:40.636601   75464 cri.go:89] found id: ""
	I1204 21:18:40.636630   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.636641   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:40.636649   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:40.636710   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:40.673040   75464 cri.go:89] found id: ""
	I1204 21:18:40.673073   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.673085   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:40.673093   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:40.673158   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:40.705330   75464 cri.go:89] found id: ""
	I1204 21:18:40.705357   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.705364   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:40.705371   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:40.705434   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:40.738099   75464 cri.go:89] found id: ""
	I1204 21:18:40.738123   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.738130   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:40.738137   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:40.738184   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:40.770558   75464 cri.go:89] found id: ""
	I1204 21:18:40.770583   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.770590   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:40.770596   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:40.770656   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:40.803461   75464 cri.go:89] found id: ""
	I1204 21:18:40.803489   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.803501   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:40.803512   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:40.803529   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:40.852684   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:40.852726   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:40.865768   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:40.865795   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:40.932542   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:40.932569   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:40.932587   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:41.013378   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:41.013419   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:37.065100   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:39.565212   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:41.566163   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:37.765520   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:39.768005   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:38.623095   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:40.623359   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:43.552845   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:43.567081   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:43.567149   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:43.600562   75464 cri.go:89] found id: ""
	I1204 21:18:43.600595   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.600605   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:43.600618   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:43.600683   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:43.638922   75464 cri.go:89] found id: ""
	I1204 21:18:43.638955   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.638965   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:43.638972   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:43.639037   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:43.674473   75464 cri.go:89] found id: ""
	I1204 21:18:43.674501   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.674509   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:43.674516   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:43.674569   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:43.721312   75464 cri.go:89] found id: ""
	I1204 21:18:43.721339   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.721350   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:43.721357   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:43.721420   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:43.760113   75464 cri.go:89] found id: ""
	I1204 21:18:43.760150   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.760161   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:43.760169   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:43.760233   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:43.794383   75464 cri.go:89] found id: ""
	I1204 21:18:43.794410   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.794418   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:43.794423   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:43.794475   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:43.826611   75464 cri.go:89] found id: ""
	I1204 21:18:43.826646   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.826657   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:43.826666   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:43.826728   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:43.859459   75464 cri.go:89] found id: ""
	I1204 21:18:43.859489   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.859496   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:43.859505   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:43.859518   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:43.871740   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:43.871762   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:43.940838   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:43.940862   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:43.940874   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:44.018931   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:44.018967   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:44.054754   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:44.054786   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:44.066258   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:46.565764   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:42.264400   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:44.765338   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:43.124128   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:45.624394   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:46.614407   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:46.627953   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:46.628009   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:46.662223   75464 cri.go:89] found id: ""
	I1204 21:18:46.662254   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.662263   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:46.662268   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:46.662333   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:46.695931   75464 cri.go:89] found id: ""
	I1204 21:18:46.695955   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.695963   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:46.695969   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:46.696014   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:46.728731   75464 cri.go:89] found id: ""
	I1204 21:18:46.728761   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.728773   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:46.728780   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:46.728841   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:46.762466   75464 cri.go:89] found id: ""
	I1204 21:18:46.762491   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.762499   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:46.762544   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:46.762613   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:46.797253   75464 cri.go:89] found id: ""
	I1204 21:18:46.797279   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.797288   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:46.797295   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:46.797357   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:46.833757   75464 cri.go:89] found id: ""
	I1204 21:18:46.833783   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.833790   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:46.833797   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:46.833845   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:46.865105   75464 cri.go:89] found id: ""
	I1204 21:18:46.865135   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.865147   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:46.865154   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:46.865212   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:46.896358   75464 cri.go:89] found id: ""
	I1204 21:18:46.896385   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.896397   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:46.896408   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:46.896426   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:46.932507   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:46.932536   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:46.985490   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:46.985517   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:46.999509   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:46.999538   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:47.075096   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:47.075119   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:47.075133   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:49.654450   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:49.667708   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:49.667761   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:49.699864   75464 cri.go:89] found id: ""
	I1204 21:18:49.699885   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.699894   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:49.699902   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:49.699954   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:49.732972   75464 cri.go:89] found id: ""
	I1204 21:18:49.732996   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.733004   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:49.733009   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:49.733055   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:49.765103   75464 cri.go:89] found id: ""
	I1204 21:18:49.765124   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.765135   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:49.765142   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:49.765208   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:49.796309   75464 cri.go:89] found id: ""
	I1204 21:18:49.796330   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.796337   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:49.796343   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:49.796401   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:49.826818   75464 cri.go:89] found id: ""
	I1204 21:18:49.826844   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.826855   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:49.826863   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:49.826921   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:49.879437   75464 cri.go:89] found id: ""
	I1204 21:18:49.879463   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.879471   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:49.879477   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:49.879525   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:49.910837   75464 cri.go:89] found id: ""
	I1204 21:18:49.910862   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.910872   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:49.910878   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:49.910937   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:49.941894   75464 cri.go:89] found id: ""
	I1204 21:18:49.941918   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.941927   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:49.941937   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:49.941950   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:49.994300   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:49.994339   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:50.008171   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:50.008207   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:50.083770   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:50.083799   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:50.083815   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:50.161338   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:50.161371   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:49.064407   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:51.066565   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:47.264889   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:49.764731   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:48.123660   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:50.125339   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:52.624437   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:52.699023   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:52.711524   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:52.711599   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:52.744668   75464 cri.go:89] found id: ""
	I1204 21:18:52.744703   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.744715   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:52.744724   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:52.744794   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:52.780504   75464 cri.go:89] found id: ""
	I1204 21:18:52.780529   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.780537   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:52.780546   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:52.780596   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:52.811678   75464 cri.go:89] found id: ""
	I1204 21:18:52.811704   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.811721   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:52.811749   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:52.811815   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:52.849178   75464 cri.go:89] found id: ""
	I1204 21:18:52.849205   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.849216   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:52.849223   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:52.849285   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:52.881715   75464 cri.go:89] found id: ""
	I1204 21:18:52.881740   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.881748   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:52.881753   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:52.881801   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:52.912463   75464 cri.go:89] found id: ""
	I1204 21:18:52.912484   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.912493   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:52.912498   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:52.912541   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:52.941846   75464 cri.go:89] found id: ""
	I1204 21:18:52.941867   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.941874   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:52.941879   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:52.941933   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:52.972043   75464 cri.go:89] found id: ""
	I1204 21:18:52.972067   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.972075   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:52.972083   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:52.972092   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:53.022049   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:53.022078   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:53.034971   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:53.034998   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:53.105058   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:53.105080   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:53.105092   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:53.185050   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:53.185086   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:55.724189   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:55.737378   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:55.737439   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:55.772286   75464 cri.go:89] found id: ""
	I1204 21:18:55.772311   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.772319   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:55.772324   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:55.772375   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:55.805040   75464 cri.go:89] found id: ""
	I1204 21:18:55.805061   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.805070   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:55.805075   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:55.805124   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:55.836500   75464 cri.go:89] found id: ""
	I1204 21:18:55.836528   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.836539   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:55.836553   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:55.836624   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:55.869715   75464 cri.go:89] found id: ""
	I1204 21:18:55.869740   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.869749   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:55.869754   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:55.869810   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:55.901596   75464 cri.go:89] found id: ""
	I1204 21:18:55.901623   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.901634   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:55.901641   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:55.901705   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:55.931865   75464 cri.go:89] found id: ""
	I1204 21:18:55.931890   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.931900   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:55.931907   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:55.931971   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:55.962990   75464 cri.go:89] found id: ""
	I1204 21:18:55.963016   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.963025   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:55.963030   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:55.963081   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:55.992110   75464 cri.go:89] found id: ""
	I1204 21:18:55.992132   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.992141   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:55.992149   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:55.992159   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:56.027234   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:56.027271   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:56.080250   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:56.080300   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:56.095943   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:56.095972   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:56.166704   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:56.166732   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:56.166744   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:53.565002   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:55.565734   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:52.264986   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:54.764517   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:54.624734   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:57.123337   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:58.745119   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:58.758304   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:58.758365   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:58.797221   75464 cri.go:89] found id: ""
	I1204 21:18:58.797245   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.797256   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:58.797264   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:58.797325   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:58.833333   75464 cri.go:89] found id: ""
	I1204 21:18:58.833358   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.833368   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:58.833374   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:58.833431   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:58.867765   75464 cri.go:89] found id: ""
	I1204 21:18:58.867790   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.867802   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:58.867810   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:58.867874   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:58.900290   75464 cri.go:89] found id: ""
	I1204 21:18:58.900326   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.900335   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:58.900386   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:58.900441   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:58.934627   75464 cri.go:89] found id: ""
	I1204 21:18:58.934660   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.934672   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:58.934679   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:58.934743   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:58.967410   75464 cri.go:89] found id: ""
	I1204 21:18:58.967442   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.967455   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:58.967463   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:58.967534   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:58.997635   75464 cri.go:89] found id: ""
	I1204 21:18:58.997665   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.997678   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:58.997685   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:58.997742   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:59.032135   75464 cri.go:89] found id: ""
	I1204 21:18:59.032162   75464 logs.go:282] 0 containers: []
	W1204 21:18:59.032181   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:59.032190   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:59.032214   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:59.101453   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:59.101477   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:59.101490   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:59.182218   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:59.182266   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:59.218062   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:59.218088   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:59.269536   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:59.269567   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:58.063715   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:00.565067   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:57.264306   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:59.266030   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:01.765163   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:59.124120   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:01.623069   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:01.784237   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:01.797810   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:01.797888   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:01.833235   75464 cri.go:89] found id: ""
	I1204 21:19:01.833267   75464 logs.go:282] 0 containers: []
	W1204 21:19:01.833279   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:01.833287   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:01.833345   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:01.866869   75464 cri.go:89] found id: ""
	I1204 21:19:01.866898   75464 logs.go:282] 0 containers: []
	W1204 21:19:01.866906   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:01.866912   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:01.866962   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:01.905512   75464 cri.go:89] found id: ""
	I1204 21:19:01.905539   75464 logs.go:282] 0 containers: []
	W1204 21:19:01.905547   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:01.905552   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:01.905608   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:01.940519   75464 cri.go:89] found id: ""
	I1204 21:19:01.940540   75464 logs.go:282] 0 containers: []
	W1204 21:19:01.940548   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:01.940554   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:01.940599   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:01.968900   75464 cri.go:89] found id: ""
	I1204 21:19:01.968922   75464 logs.go:282] 0 containers: []
	W1204 21:19:01.968931   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:01.968938   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:01.968986   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:02.011007   75464 cri.go:89] found id: ""
	I1204 21:19:02.011032   75464 logs.go:282] 0 containers: []
	W1204 21:19:02.011039   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:02.011045   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:02.011097   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:02.069395   75464 cri.go:89] found id: ""
	I1204 21:19:02.069422   75464 logs.go:282] 0 containers: []
	W1204 21:19:02.069432   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:02.069438   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:02.069483   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:02.116103   75464 cri.go:89] found id: ""
	I1204 21:19:02.116129   75464 logs.go:282] 0 containers: []
	W1204 21:19:02.116141   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:02.116151   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:02.116162   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:02.152582   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:02.152617   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:02.207765   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:02.207796   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:02.221923   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:02.221946   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:02.286568   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:02.286593   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:02.286608   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:04.861905   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:04.875045   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:04.875106   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:04.907565   75464 cri.go:89] found id: ""
	I1204 21:19:04.907591   75464 logs.go:282] 0 containers: []
	W1204 21:19:04.907601   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:04.907609   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:04.907667   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:04.937783   75464 cri.go:89] found id: ""
	I1204 21:19:04.937801   75464 logs.go:282] 0 containers: []
	W1204 21:19:04.937808   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:04.937813   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:04.937855   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:04.974668   75464 cri.go:89] found id: ""
	I1204 21:19:04.974695   75464 logs.go:282] 0 containers: []
	W1204 21:19:04.974703   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:04.974708   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:04.974764   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:05.008970   75464 cri.go:89] found id: ""
	I1204 21:19:05.008996   75464 logs.go:282] 0 containers: []
	W1204 21:19:05.009008   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:05.009016   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:05.009078   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:05.044719   75464 cri.go:89] found id: ""
	I1204 21:19:05.044748   75464 logs.go:282] 0 containers: []
	W1204 21:19:05.044757   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:05.044765   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:05.044834   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:05.082492   75464 cri.go:89] found id: ""
	I1204 21:19:05.082518   75464 logs.go:282] 0 containers: []
	W1204 21:19:05.082527   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:05.082533   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:05.082594   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:05.115540   75464 cri.go:89] found id: ""
	I1204 21:19:05.115569   75464 logs.go:282] 0 containers: []
	W1204 21:19:05.115578   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:05.115584   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:05.115643   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:05.150064   75464 cri.go:89] found id: ""
	I1204 21:19:05.150088   75464 logs.go:282] 0 containers: []
	W1204 21:19:05.150096   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:05.150104   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:05.150116   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:05.220591   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:05.220619   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:05.220635   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:05.298237   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:05.298269   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:05.337286   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:05.337312   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:05.394282   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:05.394313   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:03.064580   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:05.065897   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:04.263946   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:06.264605   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:03.624413   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:06.124113   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:07.907153   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:07.923906   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:07.923967   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:07.969672   75464 cri.go:89] found id: ""
	I1204 21:19:07.969698   75464 logs.go:282] 0 containers: []
	W1204 21:19:07.969706   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:07.969712   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:07.969761   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:08.019452   75464 cri.go:89] found id: ""
	I1204 21:19:08.019488   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.019496   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:08.019502   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:08.019551   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:08.064730   75464 cri.go:89] found id: ""
	I1204 21:19:08.064757   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.064766   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:08.064771   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:08.064822   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:08.097390   75464 cri.go:89] found id: ""
	I1204 21:19:08.097415   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.097424   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:08.097430   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:08.097481   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:08.134612   75464 cri.go:89] found id: ""
	I1204 21:19:08.134640   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.134649   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:08.134655   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:08.134706   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:08.167328   75464 cri.go:89] found id: ""
	I1204 21:19:08.167355   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.167363   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:08.167380   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:08.167447   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:08.196379   75464 cri.go:89] found id: ""
	I1204 21:19:08.196401   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.196411   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:08.196419   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:08.196475   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:08.227953   75464 cri.go:89] found id: ""
	I1204 21:19:08.227983   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.227994   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:08.228007   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:08.228021   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:08.304644   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:08.304672   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:08.340803   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:08.340835   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:08.392000   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:08.392034   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:08.405498   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:08.405533   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:08.472505   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:10.972755   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:10.986250   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:10.986316   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:11.020562   75464 cri.go:89] found id: ""
	I1204 21:19:11.020590   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.020601   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:11.020609   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:11.020671   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:11.052966   75464 cri.go:89] found id: ""
	I1204 21:19:11.052989   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.052999   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:11.053006   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:11.053062   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:11.085999   75464 cri.go:89] found id: ""
	I1204 21:19:11.086025   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.086032   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:11.086038   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:11.086085   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:11.125104   75464 cri.go:89] found id: ""
	I1204 21:19:11.125134   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.125145   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:11.125152   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:11.125207   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:11.161373   75464 cri.go:89] found id: ""
	I1204 21:19:11.161406   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.161418   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:11.161426   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:11.161487   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:11.192514   75464 cri.go:89] found id: ""
	I1204 21:19:11.192541   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.192552   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:11.192559   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:11.192617   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:11.225497   75464 cri.go:89] found id: ""
	I1204 21:19:11.225514   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.225522   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:11.225528   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:11.225573   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:11.258695   75464 cri.go:89] found id: ""
	I1204 21:19:11.258718   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.258730   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:11.258740   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:11.258753   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:11.292427   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:11.292456   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:07.565769   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:10.064738   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:08.264914   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:10.765337   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:08.125281   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:10.623449   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:11.346115   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:11.346143   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:11.360086   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:11.360110   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:11.430194   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:11.430216   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:11.430228   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:14.011320   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:14.024214   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:14.024281   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:14.060155   75464 cri.go:89] found id: ""
	I1204 21:19:14.060184   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.060196   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:14.060204   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:14.060269   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:14.095483   75464 cri.go:89] found id: ""
	I1204 21:19:14.095524   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.095536   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:14.095544   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:14.095621   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:14.130533   75464 cri.go:89] found id: ""
	I1204 21:19:14.130565   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.130573   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:14.130579   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:14.130650   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:14.167349   75464 cri.go:89] found id: ""
	I1204 21:19:14.167386   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.167397   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:14.167405   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:14.167477   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:14.200197   75464 cri.go:89] found id: ""
	I1204 21:19:14.200229   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.200240   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:14.200247   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:14.200315   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:14.233664   75464 cri.go:89] found id: ""
	I1204 21:19:14.233696   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.233707   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:14.233715   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:14.233779   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:14.268193   75464 cri.go:89] found id: ""
	I1204 21:19:14.268232   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.268243   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:14.268250   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:14.268311   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:14.305771   75464 cri.go:89] found id: ""
	I1204 21:19:14.305804   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.305813   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:14.305822   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:14.305834   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:14.361227   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:14.361274   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:14.375013   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:14.375046   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:14.444904   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:14.444945   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:14.444958   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:14.523934   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:14.523969   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:12.565614   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:14.565696   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:13.265412   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:15.763989   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:13.122823   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:15.124232   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:17.622977   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:17.063306   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:17.076624   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:17.076675   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:17.110681   75464 cri.go:89] found id: ""
	I1204 21:19:17.110721   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.110744   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:17.110756   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:17.110816   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:17.150695   75464 cri.go:89] found id: ""
	I1204 21:19:17.150716   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.150724   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:17.150730   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:17.150777   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:17.187712   75464 cri.go:89] found id: ""
	I1204 21:19:17.187745   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.187757   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:17.187765   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:17.187826   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:17.220349   75464 cri.go:89] found id: ""
	I1204 21:19:17.220377   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.220388   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:17.220396   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:17.220463   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:17.254691   75464 cri.go:89] found id: ""
	I1204 21:19:17.254724   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.254736   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:17.254746   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:17.254869   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:17.287163   75464 cri.go:89] found id: ""
	I1204 21:19:17.287191   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.287200   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:17.287206   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:17.287264   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:17.318924   75464 cri.go:89] found id: ""
	I1204 21:19:17.318949   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.318957   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:17.318963   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:17.319011   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:17.351074   75464 cri.go:89] found id: ""
	I1204 21:19:17.351106   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.351119   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:17.351128   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:17.351143   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:17.404999   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:17.405037   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:17.419781   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:17.419814   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:17.485638   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:17.485659   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:17.485670   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:17.568851   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:17.568885   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:20.107005   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:20.120184   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:20.120257   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:20.153375   75464 cri.go:89] found id: ""
	I1204 21:19:20.153404   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.153413   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:20.153419   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:20.153475   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:20.192102   75464 cri.go:89] found id: ""
	I1204 21:19:20.192129   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.192141   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:20.192148   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:20.192213   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:20.235702   75464 cri.go:89] found id: ""
	I1204 21:19:20.235730   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.235740   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:20.235747   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:20.235823   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:20.272357   75464 cri.go:89] found id: ""
	I1204 21:19:20.272385   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.272397   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:20.272406   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:20.272477   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:20.307784   75464 cri.go:89] found id: ""
	I1204 21:19:20.307809   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.307820   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:20.307827   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:20.307889   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:20.339469   75464 cri.go:89] found id: ""
	I1204 21:19:20.339504   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.339514   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:20.339522   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:20.339586   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:20.369973   75464 cri.go:89] found id: ""
	I1204 21:19:20.369996   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.370003   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:20.370010   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:20.370081   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:20.400569   75464 cri.go:89] found id: ""
	I1204 21:19:20.400589   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.400596   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:20.400604   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:20.400618   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:20.449274   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:20.449316   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:20.463556   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:20.463589   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:20.534760   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:20.534779   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:20.534791   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:20.613205   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:20.613234   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:17.064355   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:19.566643   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:17.764939   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:20.265576   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:19.624775   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:22.124297   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:23.149411   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:23.163040   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:23.163104   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:23.198689   75464 cri.go:89] found id: ""
	I1204 21:19:23.198721   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.198730   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:23.198736   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:23.198789   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:23.229754   75464 cri.go:89] found id: ""
	I1204 21:19:23.229783   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.229792   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:23.229797   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:23.229867   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:23.263366   75464 cri.go:89] found id: ""
	I1204 21:19:23.263406   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.263418   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:23.263425   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:23.263523   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:23.308773   75464 cri.go:89] found id: ""
	I1204 21:19:23.308797   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.308805   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:23.308811   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:23.308858   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:23.344573   75464 cri.go:89] found id: ""
	I1204 21:19:23.344600   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.344613   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:23.344620   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:23.344689   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:23.375218   75464 cri.go:89] found id: ""
	I1204 21:19:23.375244   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.375253   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:23.375259   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:23.375321   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:23.405878   75464 cri.go:89] found id: ""
	I1204 21:19:23.405913   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.405923   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:23.405929   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:23.405979   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:23.442547   75464 cri.go:89] found id: ""
	I1204 21:19:23.442572   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.442580   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:23.442588   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:23.442599   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:23.457476   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:23.457503   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:23.526060   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:23.526088   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:23.526153   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:23.606683   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:23.606729   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:23.648224   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:23.648266   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:26.203216   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:26.215838   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:26.215886   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:26.248425   75464 cri.go:89] found id: ""
	I1204 21:19:26.248461   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.248474   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:26.248490   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:26.248558   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:26.282982   75464 cri.go:89] found id: ""
	I1204 21:19:26.283011   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.283022   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:26.283030   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:26.283094   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:22.064831   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:24.565123   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:22.763526   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:24.764364   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:26.764973   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:24.624174   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:26.624220   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:26.316656   75464 cri.go:89] found id: ""
	I1204 21:19:26.316690   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.316702   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:26.316710   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:26.316778   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:26.352730   75464 cri.go:89] found id: ""
	I1204 21:19:26.352758   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.352766   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:26.352772   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:26.352819   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:26.385955   75464 cri.go:89] found id: ""
	I1204 21:19:26.385981   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.385991   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:26.386000   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:26.386065   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:26.418814   75464 cri.go:89] found id: ""
	I1204 21:19:26.418838   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.418846   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:26.418852   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:26.418900   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:26.455442   75464 cri.go:89] found id: ""
	I1204 21:19:26.455471   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.455483   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:26.455491   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:26.455561   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:26.498287   75464 cri.go:89] found id: ""
	I1204 21:19:26.498314   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.498322   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:26.498331   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:26.498345   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:26.512282   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:26.512312   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:26.576340   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:26.576366   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:26.576383   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:26.656234   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:26.656272   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:26.692676   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:26.692705   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:29.246548   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:29.261241   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:29.261310   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:29.297940   75464 cri.go:89] found id: ""
	I1204 21:19:29.297975   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.297987   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:29.297995   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:29.298060   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:29.330887   75464 cri.go:89] found id: ""
	I1204 21:19:29.330918   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.330930   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:29.330937   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:29.331001   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:29.364114   75464 cri.go:89] found id: ""
	I1204 21:19:29.364145   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.364152   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:29.364158   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:29.364214   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:29.397320   75464 cri.go:89] found id: ""
	I1204 21:19:29.397349   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.397357   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:29.397363   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:29.397410   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:29.430850   75464 cri.go:89] found id: ""
	I1204 21:19:29.430880   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.430892   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:29.430900   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:29.430965   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:29.464447   75464 cri.go:89] found id: ""
	I1204 21:19:29.464475   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.464484   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:29.464498   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:29.464564   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:29.497112   75464 cri.go:89] found id: ""
	I1204 21:19:29.497146   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.497158   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:29.497166   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:29.497229   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:29.533048   75464 cri.go:89] found id: ""
	I1204 21:19:29.533071   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.533080   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:29.533088   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:29.533099   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:29.584390   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:29.584424   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:29.598341   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:29.598369   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:29.663240   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:29.663264   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:29.663278   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:29.744146   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:29.744184   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:27.064827   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:29.065174   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:31.565105   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:28.765480   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:31.265234   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:29.123831   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:31.623570   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:32.282931   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:32.296622   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:32.296683   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:32.330253   75464 cri.go:89] found id: ""
	I1204 21:19:32.330285   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.330297   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:32.330305   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:32.330370   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:32.363547   75464 cri.go:89] found id: ""
	I1204 21:19:32.363575   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.363588   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:32.363596   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:32.363661   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:32.396745   75464 cri.go:89] found id: ""
	I1204 21:19:32.396770   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.396781   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:32.396790   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:32.396851   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:32.432533   75464 cri.go:89] found id: ""
	I1204 21:19:32.432559   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.432569   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:32.432577   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:32.432640   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:32.470292   75464 cri.go:89] found id: ""
	I1204 21:19:32.470317   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.470327   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:32.470335   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:32.470401   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:32.502791   75464 cri.go:89] found id: ""
	I1204 21:19:32.502817   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.502824   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:32.502835   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:32.502900   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:32.536220   75464 cri.go:89] found id: ""
	I1204 21:19:32.536246   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.536254   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:32.536286   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:32.536344   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:32.570072   75464 cri.go:89] found id: ""
	I1204 21:19:32.570094   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.570102   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:32.570110   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:32.570127   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:32.624916   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:32.624964   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:32.638299   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:32.638328   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:32.704827   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:32.704855   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:32.704873   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:32.782324   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:32.782356   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:35.324136   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:35.337071   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:35.337132   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:35.368651   75464 cri.go:89] found id: ""
	I1204 21:19:35.368672   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.368679   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:35.368685   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:35.368731   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:35.402069   75464 cri.go:89] found id: ""
	I1204 21:19:35.402088   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.402099   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:35.402105   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:35.402156   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:35.432328   75464 cri.go:89] found id: ""
	I1204 21:19:35.432356   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.432367   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:35.432380   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:35.432440   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:35.465334   75464 cri.go:89] found id: ""
	I1204 21:19:35.465356   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.465363   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:35.465369   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:35.465440   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:35.497416   75464 cri.go:89] found id: ""
	I1204 21:19:35.497449   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.497462   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:35.497474   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:35.497535   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:35.533106   75464 cri.go:89] found id: ""
	I1204 21:19:35.533134   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.533145   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:35.533154   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:35.533216   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:35.570519   75464 cri.go:89] found id: ""
	I1204 21:19:35.570546   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.570555   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:35.570562   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:35.570628   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:35.601380   75464 cri.go:89] found id: ""
	I1204 21:19:35.601413   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.601424   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:35.601434   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:35.601455   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:35.656383   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:35.656420   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:35.671667   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:35.671696   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:35.737690   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:35.737716   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:35.737733   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:35.818129   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:35.818165   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:34.063889   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:36.064864   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:33.765136   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:35.765598   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:33.624840   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:35.624972   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:38.356596   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:38.369177   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:38.369235   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:38.401263   75464 cri.go:89] found id: ""
	I1204 21:19:38.401289   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.401301   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:38.401308   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:38.401379   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:38.432751   75464 cri.go:89] found id: ""
	I1204 21:19:38.432777   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.432786   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:38.432792   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:38.432853   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:38.465866   75464 cri.go:89] found id: ""
	I1204 21:19:38.465889   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.465898   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:38.465904   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:38.465954   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:38.508720   75464 cri.go:89] found id: ""
	I1204 21:19:38.508752   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.508763   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:38.508771   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:38.508827   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:38.543609   75464 cri.go:89] found id: ""
	I1204 21:19:38.543640   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.543649   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:38.543654   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:38.543728   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:38.579205   75464 cri.go:89] found id: ""
	I1204 21:19:38.579225   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.579233   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:38.579239   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:38.579286   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:38.616446   75464 cri.go:89] found id: ""
	I1204 21:19:38.616480   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.616492   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:38.616500   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:38.616563   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:38.651847   75464 cri.go:89] found id: ""
	I1204 21:19:38.651879   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.651893   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:38.651905   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:38.651920   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:38.730904   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:38.730940   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:38.768958   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:38.768987   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:38.818879   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:38.818917   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:38.832139   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:38.832168   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:38.904761   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:38.065085   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:40.066022   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:38.264497   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:40.264905   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:38.123324   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:40.123499   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:42.623457   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:41.405046   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:41.417497   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:41.417578   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:41.450609   75464 cri.go:89] found id: ""
	I1204 21:19:41.450638   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.450649   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:41.450657   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:41.450725   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:41.486098   75464 cri.go:89] found id: ""
	I1204 21:19:41.486127   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.486135   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:41.486146   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:41.486218   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:41.520182   75464 cri.go:89] found id: ""
	I1204 21:19:41.520212   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.520225   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:41.520233   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:41.520305   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:41.551840   75464 cri.go:89] found id: ""
	I1204 21:19:41.551862   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.551870   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:41.551876   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:41.551928   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:41.584411   75464 cri.go:89] found id: ""
	I1204 21:19:41.584441   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.584448   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:41.584453   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:41.584500   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:41.614161   75464 cri.go:89] found id: ""
	I1204 21:19:41.614184   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.614199   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:41.614208   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:41.614263   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:41.645608   75464 cri.go:89] found id: ""
	I1204 21:19:41.645630   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.645637   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:41.645642   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:41.645688   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:41.676521   75464 cri.go:89] found id: ""
	I1204 21:19:41.676544   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.676552   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:41.676559   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:41.676570   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:41.726608   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:41.726633   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:41.739110   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:41.739134   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:41.810706   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:41.810727   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:41.810742   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:41.895725   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:41.895757   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:44.435032   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:44.449155   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:44.449223   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:44.479366   75464 cri.go:89] found id: ""
	I1204 21:19:44.479415   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.479424   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:44.479430   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:44.479480   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:44.520338   75464 cri.go:89] found id: ""
	I1204 21:19:44.520365   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.520374   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:44.520379   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:44.520443   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:44.554736   75464 cri.go:89] found id: ""
	I1204 21:19:44.554765   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.554773   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:44.554779   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:44.554829   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:44.592957   75464 cri.go:89] found id: ""
	I1204 21:19:44.592980   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.592987   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:44.592993   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:44.593041   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:44.626514   75464 cri.go:89] found id: ""
	I1204 21:19:44.626542   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.626551   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:44.626558   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:44.626624   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:44.667868   75464 cri.go:89] found id: ""
	I1204 21:19:44.667901   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.667913   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:44.667919   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:44.667968   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:44.703653   75464 cri.go:89] found id: ""
	I1204 21:19:44.703688   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.703699   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:44.703706   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:44.703766   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:44.737474   75464 cri.go:89] found id: ""
	I1204 21:19:44.737511   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.737523   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:44.737534   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:44.737549   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:44.787115   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:44.787146   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:44.799735   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:44.799765   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:44.861160   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:44.861179   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:44.861200   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:44.937758   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:44.937792   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:42.564575   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:44.565307   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:42.269222   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:44.764730   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:44.624230   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:47.124252   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:47.474604   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:47.486621   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:47.486680   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:47.522827   75464 cri.go:89] found id: ""
	I1204 21:19:47.522856   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.522870   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:47.522877   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:47.522938   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:47.553741   75464 cri.go:89] found id: ""
	I1204 21:19:47.553763   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.553771   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:47.553777   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:47.553837   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:47.610696   75464 cri.go:89] found id: ""
	I1204 21:19:47.610719   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.610730   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:47.610737   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:47.610803   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:47.645330   75464 cri.go:89] found id: ""
	I1204 21:19:47.645357   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.645367   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:47.645374   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:47.645431   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:47.680410   75464 cri.go:89] found id: ""
	I1204 21:19:47.680436   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.680444   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:47.680450   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:47.680499   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:47.712333   75464 cri.go:89] found id: ""
	I1204 21:19:47.712365   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.712376   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:47.712384   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:47.712442   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:47.749995   75464 cri.go:89] found id: ""
	I1204 21:19:47.750027   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.750039   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:47.750047   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:47.750110   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:47.786953   75464 cri.go:89] found id: ""
	I1204 21:19:47.786978   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.786988   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:47.786996   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:47.787008   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:47.853534   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:47.853561   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:47.853576   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:47.934237   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:47.934273   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:47.976010   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:47.976046   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:48.027502   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:48.027537   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:50.541987   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:50.555163   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:50.555246   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:50.588513   75464 cri.go:89] found id: ""
	I1204 21:19:50.588545   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.588555   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:50.588563   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:50.588618   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:50.623124   75464 cri.go:89] found id: ""
	I1204 21:19:50.623155   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.623165   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:50.623175   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:50.623240   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:50.656302   75464 cri.go:89] found id: ""
	I1204 21:19:50.656334   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.656347   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:50.656353   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:50.656421   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:50.688580   75464 cri.go:89] found id: ""
	I1204 21:19:50.688609   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.688621   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:50.688629   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:50.688700   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:50.721955   75464 cri.go:89] found id: ""
	I1204 21:19:50.721979   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.721987   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:50.721993   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:50.722047   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:50.755531   75464 cri.go:89] found id: ""
	I1204 21:19:50.755560   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.755571   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:50.755579   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:50.755637   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:50.789773   75464 cri.go:89] found id: ""
	I1204 21:19:50.789805   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.789816   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:50.789823   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:50.789890   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:50.821168   75464 cri.go:89] found id: ""
	I1204 21:19:50.821196   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.821207   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:50.821216   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:50.821230   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:50.871378   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:50.871406   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:50.883349   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:50.883387   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:50.953103   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:50.953129   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:50.953143   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:51.032209   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:51.032240   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:47.065199   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:49.065498   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:51.565332   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:47.264727   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:49.765618   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:51.765674   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:49.623785   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:52.124390   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:53.569126   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:53.582100   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:53.582167   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:53.613919   75464 cri.go:89] found id: ""
	I1204 21:19:53.613947   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.613958   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:53.613965   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:53.614031   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:53.649057   75464 cri.go:89] found id: ""
	I1204 21:19:53.649083   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.649090   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:53.649096   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:53.649153   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:53.685867   75464 cri.go:89] found id: ""
	I1204 21:19:53.685903   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.685915   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:53.685924   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:53.685983   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:53.723661   75464 cri.go:89] found id: ""
	I1204 21:19:53.723690   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.723702   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:53.723710   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:53.723774   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:53.768252   75464 cri.go:89] found id: ""
	I1204 21:19:53.768274   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.768281   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:53.768286   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:53.768334   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:53.806460   75464 cri.go:89] found id: ""
	I1204 21:19:53.806503   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.806512   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:53.806522   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:53.806577   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:53.839334   75464 cri.go:89] found id: ""
	I1204 21:19:53.839362   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.839382   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:53.839391   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:53.839452   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:53.873985   75464 cri.go:89] found id: ""
	I1204 21:19:53.874013   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.874021   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:53.874029   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:53.874046   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:53.929061   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:53.929101   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:53.943156   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:53.943183   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:54.023885   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:54.023914   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:54.023927   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:54.126662   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:54.126691   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:53.566343   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:56.064417   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:54.263908   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:56.265412   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:54.623051   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:56.623438   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:56.664579   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:56.676785   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:56.676835   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:56.715929   75464 cri.go:89] found id: ""
	I1204 21:19:56.715953   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.715964   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:56.715971   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:56.716026   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:56.747118   75464 cri.go:89] found id: ""
	I1204 21:19:56.747139   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.747146   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:56.747175   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:56.747225   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:56.777600   75464 cri.go:89] found id: ""
	I1204 21:19:56.777622   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.777628   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:56.777634   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:56.777684   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:56.808759   75464 cri.go:89] found id: ""
	I1204 21:19:56.808780   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.808787   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:56.808792   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:56.808849   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:56.838236   75464 cri.go:89] found id: ""
	I1204 21:19:56.838263   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.838274   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:56.838280   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:56.838336   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:56.866838   75464 cri.go:89] found id: ""
	I1204 21:19:56.866865   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.866875   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:56.866883   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:56.866938   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:56.897474   75464 cri.go:89] found id: ""
	I1204 21:19:56.897496   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.897504   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:56.897509   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:56.897566   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:56.929263   75464 cri.go:89] found id: ""
	I1204 21:19:56.929286   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.929294   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:56.929302   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:56.929311   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:56.980231   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:56.980256   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:56.991901   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:56.991928   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:57.068154   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:57.068172   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:57.068183   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:57.147865   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:57.147903   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:59.686011   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:59.699101   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:59.699156   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:59.742522   75464 cri.go:89] found id: ""
	I1204 21:19:59.742554   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.742565   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:59.742573   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:59.742637   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:59.785313   75464 cri.go:89] found id: ""
	I1204 21:19:59.785345   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.785357   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:59.785364   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:59.785423   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:59.821473   75464 cri.go:89] found id: ""
	I1204 21:19:59.821508   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.821520   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:59.821527   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:59.821585   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:59.857990   75464 cri.go:89] found id: ""
	I1204 21:19:59.858012   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.858020   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:59.858025   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:59.858077   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:59.895434   75464 cri.go:89] found id: ""
	I1204 21:19:59.895465   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.895478   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:59.895486   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:59.895546   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:59.929076   75464 cri.go:89] found id: ""
	I1204 21:19:59.929099   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.929110   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:59.929118   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:59.929180   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:59.962121   75464 cri.go:89] found id: ""
	I1204 21:19:59.962161   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.962173   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:59.962181   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:59.962244   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:59.999074   75464 cri.go:89] found id: ""
	I1204 21:19:59.999103   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.999115   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:59.999126   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:59.999138   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:00.081841   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:00.081888   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:00.120537   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:00.120576   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:00.171472   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:00.171506   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:00.184739   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:00.184770   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:00.256589   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:58.563943   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:00.564520   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:58.764786   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:00.765286   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:59.122868   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:01.624133   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:02.757225   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:02.771088   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:02.771156   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:02.808742   75464 cri.go:89] found id: ""
	I1204 21:20:02.808770   75464 logs.go:282] 0 containers: []
	W1204 21:20:02.808781   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:02.808788   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:02.808851   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:02.846517   75464 cri.go:89] found id: ""
	I1204 21:20:02.846539   75464 logs.go:282] 0 containers: []
	W1204 21:20:02.846548   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:02.846553   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:02.846600   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:02.879903   75464 cri.go:89] found id: ""
	I1204 21:20:02.879934   75464 logs.go:282] 0 containers: []
	W1204 21:20:02.879943   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:02.879948   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:02.879995   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:02.910040   75464 cri.go:89] found id: ""
	I1204 21:20:02.910072   75464 logs.go:282] 0 containers: []
	W1204 21:20:02.910083   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:02.910091   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:02.910153   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:02.941525   75464 cri.go:89] found id: ""
	I1204 21:20:02.941552   75464 logs.go:282] 0 containers: []
	W1204 21:20:02.941562   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:02.941570   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:02.941637   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:02.977450   75464 cri.go:89] found id: ""
	I1204 21:20:02.977476   75464 logs.go:282] 0 containers: []
	W1204 21:20:02.977484   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:02.977490   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:02.977547   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:03.007386   75464 cri.go:89] found id: ""
	I1204 21:20:03.007422   75464 logs.go:282] 0 containers: []
	W1204 21:20:03.007433   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:03.007448   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:03.007508   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:03.040015   75464 cri.go:89] found id: ""
	I1204 21:20:03.040038   75464 logs.go:282] 0 containers: []
	W1204 21:20:03.040049   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:03.040058   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:03.040068   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:03.092371   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:03.092397   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:03.104747   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:03.104765   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:03.167760   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:03.167784   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:03.167799   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:03.242972   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:03.243010   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:05.783874   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:05.796340   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:05.796401   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:05.829068   75464 cri.go:89] found id: ""
	I1204 21:20:05.829094   75464 logs.go:282] 0 containers: []
	W1204 21:20:05.829105   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:05.829112   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:05.829169   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:05.863998   75464 cri.go:89] found id: ""
	I1204 21:20:05.864027   75464 logs.go:282] 0 containers: []
	W1204 21:20:05.864036   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:05.864042   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:05.864096   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:05.899645   75464 cri.go:89] found id: ""
	I1204 21:20:05.899669   75464 logs.go:282] 0 containers: []
	W1204 21:20:05.899677   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:05.899682   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:05.899727   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:05.935815   75464 cri.go:89] found id: ""
	I1204 21:20:05.935840   75464 logs.go:282] 0 containers: []
	W1204 21:20:05.935848   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:05.935854   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:05.935901   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:05.972284   75464 cri.go:89] found id: ""
	I1204 21:20:05.972308   75464 logs.go:282] 0 containers: []
	W1204 21:20:05.972321   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:05.972326   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:05.972372   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:06.007217   75464 cri.go:89] found id: ""
	I1204 21:20:06.007261   75464 logs.go:282] 0 containers: []
	W1204 21:20:06.007273   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:06.007280   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:06.007338   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:06.042158   75464 cri.go:89] found id: ""
	I1204 21:20:06.042190   75464 logs.go:282] 0 containers: []
	W1204 21:20:06.042201   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:06.042208   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:06.042280   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:06.075199   75464 cri.go:89] found id: ""
	I1204 21:20:06.075223   75464 logs.go:282] 0 containers: []
	W1204 21:20:06.075230   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:06.075237   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:06.075248   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:06.148255   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:06.148286   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:06.191454   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:06.191478   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:06.243952   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:06.243979   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:06.256355   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:06.256381   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 21:20:02.565050   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:05.064733   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:02.765643   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:05.263861   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:04.123109   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:06.123349   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	W1204 21:20:06.323958   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:08.824582   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:08.836724   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:08.836793   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:08.868526   75464 cri.go:89] found id: ""
	I1204 21:20:08.868596   75464 logs.go:282] 0 containers: []
	W1204 21:20:08.868611   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:08.868619   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:08.868679   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:08.899088   75464 cri.go:89] found id: ""
	I1204 21:20:08.899114   75464 logs.go:282] 0 containers: []
	W1204 21:20:08.899123   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:08.899128   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:08.899181   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:08.929116   75464 cri.go:89] found id: ""
	I1204 21:20:08.929145   75464 logs.go:282] 0 containers: []
	W1204 21:20:08.929156   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:08.929164   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:08.929229   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:08.970502   75464 cri.go:89] found id: ""
	I1204 21:20:08.970528   75464 logs.go:282] 0 containers: []
	W1204 21:20:08.970539   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:08.970547   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:08.970610   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:09.000619   75464 cri.go:89] found id: ""
	I1204 21:20:09.000644   75464 logs.go:282] 0 containers: []
	W1204 21:20:09.000652   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:09.000658   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:09.000715   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:09.031597   75464 cri.go:89] found id: ""
	I1204 21:20:09.031624   75464 logs.go:282] 0 containers: []
	W1204 21:20:09.031634   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:09.031641   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:09.031700   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:09.063615   75464 cri.go:89] found id: ""
	I1204 21:20:09.063639   75464 logs.go:282] 0 containers: []
	W1204 21:20:09.063646   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:09.063651   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:09.063708   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:09.096291   75464 cri.go:89] found id: ""
	I1204 21:20:09.096322   75464 logs.go:282] 0 containers: []
	W1204 21:20:09.096333   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:09.096343   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:09.096357   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:09.169976   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:09.170009   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:09.206514   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:09.206537   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:09.257587   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:09.257614   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:09.269939   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:09.269962   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:09.334350   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:07.563758   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:09.564014   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:11.564441   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:07.264169   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:09.265385   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:11.265607   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:08.622813   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:10.624747   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:11.835270   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:11.848192   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:11.848249   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:11.880377   75464 cri.go:89] found id: ""
	I1204 21:20:11.880409   75464 logs.go:282] 0 containers: []
	W1204 21:20:11.880422   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:11.880429   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:11.880495   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:11.914800   75464 cri.go:89] found id: ""
	I1204 21:20:11.914832   75464 logs.go:282] 0 containers: []
	W1204 21:20:11.914844   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:11.914852   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:11.914918   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:11.950520   75464 cri.go:89] found id: ""
	I1204 21:20:11.950545   75464 logs.go:282] 0 containers: []
	W1204 21:20:11.950553   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:11.950559   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:11.950611   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:11.983909   75464 cri.go:89] found id: ""
	I1204 21:20:11.983934   75464 logs.go:282] 0 containers: []
	W1204 21:20:11.983944   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:11.983953   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:11.984017   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:12.020457   75464 cri.go:89] found id: ""
	I1204 21:20:12.020488   75464 logs.go:282] 0 containers: []
	W1204 21:20:12.020505   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:12.020513   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:12.020581   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:12.054630   75464 cri.go:89] found id: ""
	I1204 21:20:12.054663   75464 logs.go:282] 0 containers: []
	W1204 21:20:12.054674   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:12.054682   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:12.054747   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:12.089172   75464 cri.go:89] found id: ""
	I1204 21:20:12.089195   75464 logs.go:282] 0 containers: []
	W1204 21:20:12.089202   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:12.089208   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:12.089267   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:12.123979   75464 cri.go:89] found id: ""
	I1204 21:20:12.124009   75464 logs.go:282] 0 containers: []
	W1204 21:20:12.124020   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:12.124039   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:12.124054   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:12.191368   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:12.191414   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:12.191432   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:12.272985   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:12.273029   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:12.310427   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:12.310459   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:12.363183   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:12.363225   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:14.876599   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:14.889708   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:14.889784   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:14.922789   75464 cri.go:89] found id: ""
	I1204 21:20:14.922819   75464 logs.go:282] 0 containers: []
	W1204 21:20:14.922829   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:14.922835   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:14.922882   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:14.953998   75464 cri.go:89] found id: ""
	I1204 21:20:14.954026   75464 logs.go:282] 0 containers: []
	W1204 21:20:14.954038   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:14.954044   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:14.954108   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:14.983608   75464 cri.go:89] found id: ""
	I1204 21:20:14.983635   75464 logs.go:282] 0 containers: []
	W1204 21:20:14.983646   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:14.983653   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:14.983707   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:15.016982   75464 cri.go:89] found id: ""
	I1204 21:20:15.017007   75464 logs.go:282] 0 containers: []
	W1204 21:20:15.017015   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:15.017020   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:15.017070   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:15.051642   75464 cri.go:89] found id: ""
	I1204 21:20:15.051672   75464 logs.go:282] 0 containers: []
	W1204 21:20:15.051683   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:15.051690   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:15.051792   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:15.084250   75464 cri.go:89] found id: ""
	I1204 21:20:15.084279   75464 logs.go:282] 0 containers: []
	W1204 21:20:15.084289   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:15.084297   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:15.084364   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:15.119910   75464 cri.go:89] found id: ""
	I1204 21:20:15.119943   75464 logs.go:282] 0 containers: []
	W1204 21:20:15.119953   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:15.119965   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:15.120025   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:15.154270   75464 cri.go:89] found id: ""
	I1204 21:20:15.154301   75464 logs.go:282] 0 containers: []
	W1204 21:20:15.154312   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:15.154322   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:15.154336   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:15.205075   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:15.205109   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:15.218104   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:15.218130   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:15.285162   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:15.285187   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:15.285209   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:15.367003   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:15.367040   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:13.566393   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:16.069318   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:13.266167   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:15.763670   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:13.122812   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:15.125830   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:17.623065   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:17.909835   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:17.921899   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:17.921954   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:17.954678   75464 cri.go:89] found id: ""
	I1204 21:20:17.954708   75464 logs.go:282] 0 containers: []
	W1204 21:20:17.954717   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:17.954723   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:17.954776   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:17.984522   75464 cri.go:89] found id: ""
	I1204 21:20:17.984545   75464 logs.go:282] 0 containers: []
	W1204 21:20:17.984555   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:17.984560   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:17.984607   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:18.016731   75464 cri.go:89] found id: ""
	I1204 21:20:18.016754   75464 logs.go:282] 0 containers: []
	W1204 21:20:18.016763   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:18.016768   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:18.016820   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:18.050104   75464 cri.go:89] found id: ""
	I1204 21:20:18.050136   75464 logs.go:282] 0 containers: []
	W1204 21:20:18.050147   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:18.050155   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:18.050221   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:18.083944   75464 cri.go:89] found id: ""
	I1204 21:20:18.083984   75464 logs.go:282] 0 containers: []
	W1204 21:20:18.084006   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:18.084015   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:18.084084   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:18.116170   75464 cri.go:89] found id: ""
	I1204 21:20:18.116203   75464 logs.go:282] 0 containers: []
	W1204 21:20:18.116215   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:18.116223   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:18.116292   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:18.147348   75464 cri.go:89] found id: ""
	I1204 21:20:18.147395   75464 logs.go:282] 0 containers: []
	W1204 21:20:18.147407   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:18.147415   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:18.147473   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:18.177782   75464 cri.go:89] found id: ""
	I1204 21:20:18.177805   75464 logs.go:282] 0 containers: []
	W1204 21:20:18.177816   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:18.177827   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:18.177840   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:18.227464   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:18.227494   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:18.239741   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:18.239772   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:18.310732   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:18.310752   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:18.310763   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:18.389626   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:18.389659   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:20.926749   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:20.939710   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:20.939797   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:20.972464   75464 cri.go:89] found id: ""
	I1204 21:20:20.972488   75464 logs.go:282] 0 containers: []
	W1204 21:20:20.972497   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:20.972506   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:20.972568   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:21.010568   75464 cri.go:89] found id: ""
	I1204 21:20:21.010597   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.010610   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:21.010618   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:21.010678   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:21.046145   75464 cri.go:89] found id: ""
	I1204 21:20:21.046172   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.046183   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:21.046191   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:21.046263   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:21.078460   75464 cri.go:89] found id: ""
	I1204 21:20:21.078488   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.078496   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:21.078502   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:21.078569   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:21.117274   75464 cri.go:89] found id: ""
	I1204 21:20:21.117303   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.117314   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:21.117320   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:21.117366   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:21.152375   75464 cri.go:89] found id: ""
	I1204 21:20:21.152408   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.152419   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:21.152427   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:21.152496   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:21.185933   75464 cri.go:89] found id: ""
	I1204 21:20:21.185966   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.185975   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:21.185981   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:21.186042   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:21.219289   75464 cri.go:89] found id: ""
	I1204 21:20:21.219325   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.219338   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:21.219350   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:21.219363   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:21.232385   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:21.232415   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:21.298766   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:21.298793   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:21.298808   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:18.565873   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:21.065819   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:17.763871   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:19.765846   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:19.623518   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:21.624117   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:21.376741   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:21.376777   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:21.414649   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:21.414682   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:23.963472   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:23.976644   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:23.976709   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:24.010598   75464 cri.go:89] found id: ""
	I1204 21:20:24.010626   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.010637   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:24.010645   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:24.010703   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:24.045479   75464 cri.go:89] found id: ""
	I1204 21:20:24.045509   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.045529   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:24.045537   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:24.045599   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:24.081181   75464 cri.go:89] found id: ""
	I1204 21:20:24.081215   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.081235   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:24.081243   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:24.081309   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:24.113823   75464 cri.go:89] found id: ""
	I1204 21:20:24.113847   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.113857   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:24.113864   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:24.113927   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:24.149178   75464 cri.go:89] found id: ""
	I1204 21:20:24.149205   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.149216   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:24.149224   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:24.149289   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:24.183304   75464 cri.go:89] found id: ""
	I1204 21:20:24.183339   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.183350   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:24.183359   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:24.183448   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:24.214999   75464 cri.go:89] found id: ""
	I1204 21:20:24.215023   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.215034   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:24.215042   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:24.215107   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:24.247278   75464 cri.go:89] found id: ""
	I1204 21:20:24.247312   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.247323   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:24.247354   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:24.247387   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:24.302879   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:24.302913   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:24.315674   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:24.315697   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:24.382394   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:24.382422   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:24.382436   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:24.462763   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:24.462796   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:23.564202   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:25.564917   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:22.265442   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:24.764901   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:24.124035   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:26.124661   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:27.002577   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:27.015256   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:27.015324   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:27.049626   75464 cri.go:89] found id: ""
	I1204 21:20:27.049657   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.049669   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:27.049677   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:27.049733   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:27.085312   75464 cri.go:89] found id: ""
	I1204 21:20:27.085341   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.085354   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:27.085362   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:27.085417   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:27.119898   75464 cri.go:89] found id: ""
	I1204 21:20:27.119928   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.119939   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:27.119947   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:27.120010   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:27.153605   75464 cri.go:89] found id: ""
	I1204 21:20:27.153642   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.153651   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:27.153657   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:27.153724   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:27.191002   75464 cri.go:89] found id: ""
	I1204 21:20:27.191027   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.191038   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:27.191045   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:27.191107   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:27.226469   75464 cri.go:89] found id: ""
	I1204 21:20:27.226495   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.226506   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:27.226515   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:27.226579   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:27.258586   75464 cri.go:89] found id: ""
	I1204 21:20:27.258613   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.258623   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:27.258630   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:27.258694   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:27.293119   75464 cri.go:89] found id: ""
	I1204 21:20:27.293156   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.293165   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:27.293174   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:27.293187   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:27.346870   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:27.346903   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:27.360448   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:27.360487   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:27.431571   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:27.431597   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:27.431613   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:27.509664   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:27.509698   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:30.049120   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:30.063294   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:30.063360   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:30.097334   75464 cri.go:89] found id: ""
	I1204 21:20:30.097364   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.097376   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:30.097383   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:30.097457   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:30.132734   75464 cri.go:89] found id: ""
	I1204 21:20:30.132757   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.132765   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:30.132771   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:30.132820   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:30.166539   75464 cri.go:89] found id: ""
	I1204 21:20:30.166565   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.166573   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:30.166579   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:30.166637   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:30.201953   75464 cri.go:89] found id: ""
	I1204 21:20:30.201993   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.202007   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:30.202016   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:30.202089   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:30.239062   75464 cri.go:89] found id: ""
	I1204 21:20:30.239102   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.239116   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:30.239132   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:30.239200   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:30.282344   75464 cri.go:89] found id: ""
	I1204 21:20:30.282374   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.282383   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:30.282389   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:30.282439   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:30.316615   75464 cri.go:89] found id: ""
	I1204 21:20:30.316642   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.316653   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:30.316661   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:30.316764   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:30.352333   75464 cri.go:89] found id: ""
	I1204 21:20:30.352358   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.352368   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:30.352380   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:30.352393   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:30.406022   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:30.406058   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:30.419790   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:30.419819   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:30.485693   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:30.485717   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:30.485738   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:30.569313   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:30.569357   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:27.565367   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:30.064552   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:27.266699   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:29.765109   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:28.623821   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:30.628815   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:33.107542   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:33.121934   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:33.122007   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:33.154672   75464 cri.go:89] found id: ""
	I1204 21:20:33.154698   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.154709   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:33.154717   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:33.154784   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:33.189186   75464 cri.go:89] found id: ""
	I1204 21:20:33.189218   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.189229   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:33.189236   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:33.189291   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:33.217618   75464 cri.go:89] found id: ""
	I1204 21:20:33.217637   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.217651   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:33.217657   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:33.217704   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:33.246895   75464 cri.go:89] found id: ""
	I1204 21:20:33.246916   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.246923   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:33.246928   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:33.246970   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:33.278698   75464 cri.go:89] found id: ""
	I1204 21:20:33.278718   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.278725   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:33.278731   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:33.278771   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:33.307671   75464 cri.go:89] found id: ""
	I1204 21:20:33.307703   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.307721   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:33.307729   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:33.307791   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:33.342929   75464 cri.go:89] found id: ""
	I1204 21:20:33.342950   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.342958   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:33.342963   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:33.343009   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:33.374686   75464 cri.go:89] found id: ""
	I1204 21:20:33.374718   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.374730   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:33.374741   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:33.374758   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:33.424117   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:33.424153   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:33.437691   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:33.437724   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:33.517172   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:33.517196   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:33.517209   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:33.597299   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:33.597341   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:36.137849   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:36.152485   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:36.152544   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:36.186867   75464 cri.go:89] found id: ""
	I1204 21:20:36.186895   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.186906   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:36.186920   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:36.186983   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:36.220628   75464 cri.go:89] found id: ""
	I1204 21:20:36.220658   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.220671   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:36.220679   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:36.220735   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:36.254264   75464 cri.go:89] found id: ""
	I1204 21:20:36.254298   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.254310   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:36.254318   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:36.254384   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:36.290929   75464 cri.go:89] found id: ""
	I1204 21:20:36.290956   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.290964   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:36.290970   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:36.291016   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:32.566714   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:35.064488   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:32.266257   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:34.764171   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:36.764331   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:33.123727   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:35.623512   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:37.623921   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:36.326967   75464 cri.go:89] found id: ""
	I1204 21:20:36.326991   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.326999   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:36.327004   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:36.327072   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:36.366892   75464 cri.go:89] found id: ""
	I1204 21:20:36.366916   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.366924   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:36.366930   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:36.366990   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:36.405671   75464 cri.go:89] found id: ""
	I1204 21:20:36.405696   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.405703   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:36.405709   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:36.405762   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:36.439591   75464 cri.go:89] found id: ""
	I1204 21:20:36.439621   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.439628   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:36.439637   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:36.439650   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:36.505710   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:36.505737   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:36.505751   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:36.586111   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:36.586155   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:36.628086   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:36.628121   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:36.680152   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:36.680183   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:39.194223   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:39.207153   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:39.207230   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:39.240867   75464 cri.go:89] found id: ""
	I1204 21:20:39.240895   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.240903   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:39.240908   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:39.240959   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:39.274704   75464 cri.go:89] found id: ""
	I1204 21:20:39.274735   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.274742   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:39.274748   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:39.274800   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:39.307559   75464 cri.go:89] found id: ""
	I1204 21:20:39.307591   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.307601   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:39.307609   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:39.307671   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:39.355489   75464 cri.go:89] found id: ""
	I1204 21:20:39.355524   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.355536   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:39.355543   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:39.355610   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:39.395885   75464 cri.go:89] found id: ""
	I1204 21:20:39.395909   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.395917   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:39.395923   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:39.395976   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:39.428817   75464 cri.go:89] found id: ""
	I1204 21:20:39.428848   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.428858   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:39.428864   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:39.428929   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:39.463827   75464 cri.go:89] found id: ""
	I1204 21:20:39.463857   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.463870   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:39.463877   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:39.463926   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:39.496677   75464 cri.go:89] found id: ""
	I1204 21:20:39.496710   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.496721   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:39.496732   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:39.496755   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:39.533759   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:39.533787   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:39.586373   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:39.586409   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:39.599533   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:39.599568   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:39.670139   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:39.670164   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:39.670176   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:37.065197   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:39.065863   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:41.566053   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:38.765226   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:40.765268   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:39.624452   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:42.123452   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:42.245896   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:42.260604   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:42.260676   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:42.294051   75464 cri.go:89] found id: ""
	I1204 21:20:42.294078   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.294085   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:42.294094   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:42.294160   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:42.327361   75464 cri.go:89] found id: ""
	I1204 21:20:42.327408   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.327421   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:42.327428   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:42.327482   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:42.358701   75464 cri.go:89] found id: ""
	I1204 21:20:42.358731   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.358740   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:42.358746   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:42.358795   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:42.389837   75464 cri.go:89] found id: ""
	I1204 21:20:42.389863   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.389871   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:42.389877   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:42.389926   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:42.430495   75464 cri.go:89] found id: ""
	I1204 21:20:42.430522   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.430534   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:42.430541   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:42.430590   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:42.462918   75464 cri.go:89] found id: ""
	I1204 21:20:42.462949   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.462958   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:42.462963   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:42.463031   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:42.500726   75464 cri.go:89] found id: ""
	I1204 21:20:42.500754   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.500769   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:42.500776   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:42.500842   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:42.538601   75464 cri.go:89] found id: ""
	I1204 21:20:42.538628   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.538635   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:42.538644   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:42.538655   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:42.591308   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:42.591344   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:42.604221   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:42.604244   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:42.679954   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:42.679982   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:42.679999   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:42.768383   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:42.768422   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:45.312054   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:45.325206   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:45.325304   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:45.358781   75464 cri.go:89] found id: ""
	I1204 21:20:45.358809   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.358817   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:45.358824   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:45.358874   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:45.391920   75464 cri.go:89] found id: ""
	I1204 21:20:45.391945   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.391957   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:45.391964   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:45.392030   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:45.426546   75464 cri.go:89] found id: ""
	I1204 21:20:45.426570   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.426578   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:45.426583   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:45.426633   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:45.459432   75464 cri.go:89] found id: ""
	I1204 21:20:45.459462   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.459472   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:45.459479   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:45.459547   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:45.494217   75464 cri.go:89] found id: ""
	I1204 21:20:45.494256   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.494268   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:45.494276   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:45.494352   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:45.531417   75464 cri.go:89] found id: ""
	I1204 21:20:45.531446   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.531458   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:45.531473   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:45.531547   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:45.564973   75464 cri.go:89] found id: ""
	I1204 21:20:45.565005   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.565016   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:45.565024   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:45.565088   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:45.601285   75464 cri.go:89] found id: ""
	I1204 21:20:45.601315   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.601324   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:45.601333   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:45.601344   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:45.656229   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:45.656267   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:45.669851   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:45.669876   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:45.740674   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:45.740704   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:45.740720   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:45.845612   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:45.845657   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:44.065401   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:46.565091   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:42.765303   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:44.765539   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:44.123533   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:46.123595   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:48.389508   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:48.401989   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:48.402052   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:48.438477   75464 cri.go:89] found id: ""
	I1204 21:20:48.438502   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.438514   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:48.438521   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:48.438579   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:48.476096   75464 cri.go:89] found id: ""
	I1204 21:20:48.476129   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.476142   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:48.476151   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:48.476219   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:48.514085   75464 cri.go:89] found id: ""
	I1204 21:20:48.514112   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.514124   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:48.514132   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:48.514208   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:48.551360   75464 cri.go:89] found id: ""
	I1204 21:20:48.551409   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.551420   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:48.551428   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:48.551500   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:48.588424   75464 cri.go:89] found id: ""
	I1204 21:20:48.588463   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.588475   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:48.588483   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:48.588552   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:48.622842   75464 cri.go:89] found id: ""
	I1204 21:20:48.622868   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.622876   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:48.622881   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:48.622942   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:48.665525   75464 cri.go:89] found id: ""
	I1204 21:20:48.665575   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.665585   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:48.665592   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:48.665659   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:48.706554   75464 cri.go:89] found id: ""
	I1204 21:20:48.706581   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.706591   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:48.706602   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:48.706617   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:48.757835   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:48.757870   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:48.771967   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:48.772003   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:48.843093   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:48.843123   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:48.843140   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:48.919637   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:48.919681   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:49.064435   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:51.565505   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:47.265612   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:49.764186   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:51.766867   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:48.637538   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:51.123581   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:51.457865   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:51.472751   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:51.472827   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:51.514777   75464 cri.go:89] found id: ""
	I1204 21:20:51.514814   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.514827   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:51.514835   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:51.514904   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:51.563932   75464 cri.go:89] found id: ""
	I1204 21:20:51.563957   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.563968   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:51.563976   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:51.564042   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:51.606714   75464 cri.go:89] found id: ""
	I1204 21:20:51.606752   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.606765   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:51.606773   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:51.606837   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:51.641391   75464 cri.go:89] found id: ""
	I1204 21:20:51.641427   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.641438   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:51.641446   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:51.641502   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:51.674971   75464 cri.go:89] found id: ""
	I1204 21:20:51.675000   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.675011   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:51.675019   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:51.675082   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:51.709211   75464 cri.go:89] found id: ""
	I1204 21:20:51.709242   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.709250   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:51.709257   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:51.709306   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:51.742425   75464 cri.go:89] found id: ""
	I1204 21:20:51.742460   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.742472   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:51.742480   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:51.742534   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:51.782292   75464 cri.go:89] found id: ""
	I1204 21:20:51.782339   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.782351   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:51.782361   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:51.782380   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:51.833009   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:51.833040   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:51.846862   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:51.846905   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:51.911100   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:51.911129   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:51.911147   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:51.987841   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:51.987879   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:54.527097   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:54.541248   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:54.541344   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:54.582747   75464 cri.go:89] found id: ""
	I1204 21:20:54.582772   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.582780   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:54.582785   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:54.582844   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:54.615891   75464 cri.go:89] found id: ""
	I1204 21:20:54.615914   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.615922   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:54.615927   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:54.615983   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:54.648994   75464 cri.go:89] found id: ""
	I1204 21:20:54.649021   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.649031   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:54.649037   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:54.649095   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:54.683000   75464 cri.go:89] found id: ""
	I1204 21:20:54.683026   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.683034   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:54.683040   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:54.683100   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:54.715182   75464 cri.go:89] found id: ""
	I1204 21:20:54.715211   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.715221   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:54.715228   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:54.715290   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:54.752620   75464 cri.go:89] found id: ""
	I1204 21:20:54.752655   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.752667   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:54.752674   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:54.752740   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:54.790879   75464 cri.go:89] found id: ""
	I1204 21:20:54.790907   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.790919   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:54.790926   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:54.790994   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:54.824340   75464 cri.go:89] found id: ""
	I1204 21:20:54.824380   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.824393   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:54.824405   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:54.824428   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:54.874330   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:54.874365   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:54.887537   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:54.887565   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:54.958675   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:54.958697   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:54.958709   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:55.036909   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:55.036946   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:54.064786   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:56.066189   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:54.264177   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:56.264283   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:53.622703   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:55.623495   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:57.625197   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:57.576603   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:57.590013   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:57.590080   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:57.624654   75464 cri.go:89] found id: ""
	I1204 21:20:57.624690   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.624701   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:57.624710   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:57.624774   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:57.660404   75464 cri.go:89] found id: ""
	I1204 21:20:57.660445   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.660457   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:57.660464   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:57.660528   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:57.693444   75464 cri.go:89] found id: ""
	I1204 21:20:57.693472   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.693483   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:57.693491   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:57.693558   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:57.729361   75464 cri.go:89] found id: ""
	I1204 21:20:57.729387   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.729397   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:57.729403   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:57.729454   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:57.760508   75464 cri.go:89] found id: ""
	I1204 21:20:57.760535   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.760546   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:57.760554   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:57.760608   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:57.794110   75464 cri.go:89] found id: ""
	I1204 21:20:57.794133   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.794142   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:57.794151   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:57.794214   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:57.827907   75464 cri.go:89] found id: ""
	I1204 21:20:57.827936   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.827947   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:57.827954   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:57.828014   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:57.860714   75464 cri.go:89] found id: ""
	I1204 21:20:57.860742   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.860753   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:57.860763   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:57.860778   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:57.926898   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:57.926926   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:57.926943   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:58.000298   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:58.000328   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:58.035675   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:58.035708   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:58.086663   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:58.086698   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:21:00.600646   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:21:00.613485   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:21:00.613550   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:21:00.646324   75464 cri.go:89] found id: ""
	I1204 21:21:00.646349   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.646357   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:21:00.646362   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:21:00.646417   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:21:00.675779   75464 cri.go:89] found id: ""
	I1204 21:21:00.675802   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.675814   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:21:00.675821   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:21:00.675874   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:21:00.706244   75464 cri.go:89] found id: ""
	I1204 21:21:00.706264   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.706272   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:21:00.706278   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:21:00.706334   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:21:00.738086   75464 cri.go:89] found id: ""
	I1204 21:21:00.738114   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.738126   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:21:00.738134   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:21:00.738195   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:21:00.768646   75464 cri.go:89] found id: ""
	I1204 21:21:00.768671   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.768682   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:21:00.768690   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:21:00.768750   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:21:00.797939   75464 cri.go:89] found id: ""
	I1204 21:21:00.797960   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.797968   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:21:00.797973   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:21:00.798016   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:21:00.831928   75464 cri.go:89] found id: ""
	I1204 21:21:00.831959   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.831969   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:21:00.831977   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:21:00.832042   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:21:00.868462   75464 cri.go:89] found id: ""
	I1204 21:21:00.868489   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.868498   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:21:00.868506   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:21:00.868518   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:21:00.881721   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:21:00.881745   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:21:00.949263   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:21:00.949290   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:21:00.949307   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:21:01.031940   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:21:01.031990   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:21:01.070545   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:21:01.070577   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:58.565420   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:59.064856   75137 pod_ready.go:82] duration metric: took 4m0.006397932s for pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace to be "Ready" ...
	E1204 21:20:59.064881   75137 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1204 21:20:59.064889   75137 pod_ready.go:39] duration metric: took 4m8.671233417s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:20:59.064904   75137 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:20:59.064929   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:59.064974   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:59.119318   75137 cri.go:89] found id: "8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78"
	I1204 21:20:59.119340   75137 cri.go:89] found id: ""
	I1204 21:20:59.119347   75137 logs.go:282] 1 containers: [8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78]
	I1204 21:20:59.119421   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.125106   75137 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:59.125184   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:59.159498   75137 cri.go:89] found id: "e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98"
	I1204 21:20:59.159519   75137 cri.go:89] found id: ""
	I1204 21:20:59.159526   75137 logs.go:282] 1 containers: [e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98]
	I1204 21:20:59.159572   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.163228   75137 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:59.163302   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:59.198005   75137 cri.go:89] found id: "58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78"
	I1204 21:20:59.198031   75137 cri.go:89] found id: ""
	I1204 21:20:59.198039   75137 logs.go:282] 1 containers: [58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78]
	I1204 21:20:59.198083   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.202213   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:59.202280   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:59.236775   75137 cri.go:89] found id: "e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df"
	I1204 21:20:59.236796   75137 cri.go:89] found id: ""
	I1204 21:20:59.236803   75137 logs.go:282] 1 containers: [e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df]
	I1204 21:20:59.236852   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.241518   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:59.241600   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:59.279894   75137 cri.go:89] found id: "a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5"
	I1204 21:20:59.279924   75137 cri.go:89] found id: ""
	I1204 21:20:59.279934   75137 logs.go:282] 1 containers: [a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5]
	I1204 21:20:59.279990   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.284325   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:59.284394   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:59.328082   75137 cri.go:89] found id: "982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9"
	I1204 21:20:59.328107   75137 cri.go:89] found id: ""
	I1204 21:20:59.328117   75137 logs.go:282] 1 containers: [982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9]
	I1204 21:20:59.328178   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.332337   75137 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:59.332415   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:59.368110   75137 cri.go:89] found id: ""
	I1204 21:20:59.368135   75137 logs.go:282] 0 containers: []
	W1204 21:20:59.368144   75137 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:59.368149   75137 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1204 21:20:59.368193   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1204 21:20:59.404941   75137 cri.go:89] found id: "07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317"
	I1204 21:20:59.404966   75137 cri.go:89] found id: "05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4"
	I1204 21:20:59.404972   75137 cri.go:89] found id: ""
	I1204 21:20:59.404980   75137 logs.go:282] 2 containers: [07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317 05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4]
	I1204 21:20:59.405041   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.409016   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.412752   75137 logs.go:123] Gathering logs for etcd [e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98] ...
	I1204 21:20:59.412783   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98"
	I1204 21:20:59.463143   75137 logs.go:123] Gathering logs for kube-scheduler [e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df] ...
	I1204 21:20:59.463178   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df"
	I1204 21:20:59.498782   75137 logs.go:123] Gathering logs for kube-controller-manager [982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9] ...
	I1204 21:20:59.498812   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9"
	I1204 21:20:59.555339   75137 logs.go:123] Gathering logs for storage-provisioner [07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317] ...
	I1204 21:20:59.555393   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317"
	I1204 21:20:59.591238   75137 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:59.591267   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:21:00.084121   75137 logs.go:123] Gathering logs for kubelet ...
	I1204 21:21:00.084161   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:21:00.154228   75137 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:21:00.154265   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 21:21:00.284768   75137 logs.go:123] Gathering logs for kube-apiserver [8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78] ...
	I1204 21:21:00.284802   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78"
	I1204 21:21:00.328421   75137 logs.go:123] Gathering logs for storage-provisioner [05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4] ...
	I1204 21:21:00.328452   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4"
	I1204 21:21:00.363327   75137 logs.go:123] Gathering logs for container status ...
	I1204 21:21:00.363352   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:21:00.402072   75137 logs.go:123] Gathering logs for dmesg ...
	I1204 21:21:00.402101   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:21:00.414448   75137 logs.go:123] Gathering logs for coredns [58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78] ...
	I1204 21:21:00.414471   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78"
	I1204 21:21:00.446721   75137 logs.go:123] Gathering logs for kube-proxy [a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5] ...
	I1204 21:21:00.446747   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5"
	I1204 21:20:58.265181   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:00.266303   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:00.124482   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:02.623096   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:03.620358   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:21:03.634415   75464 kubeadm.go:597] duration metric: took 4m4.247057397s to restartPrimaryControlPlane
	W1204 21:21:03.634499   75464 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1204 21:21:03.634530   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1204 21:21:02.985608   75137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:21:03.002352   75137 api_server.go:72] duration metric: took 4m20.333935611s to wait for apiserver process to appear ...
	I1204 21:21:03.002379   75137 api_server.go:88] waiting for apiserver healthz status ...
	I1204 21:21:03.002420   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:21:03.002475   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:21:03.043343   75137 cri.go:89] found id: "8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78"
	I1204 21:21:03.043387   75137 cri.go:89] found id: ""
	I1204 21:21:03.043398   75137 logs.go:282] 1 containers: [8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78]
	I1204 21:21:03.043451   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.047523   75137 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:21:03.047591   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:21:03.085843   75137 cri.go:89] found id: "e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98"
	I1204 21:21:03.085868   75137 cri.go:89] found id: ""
	I1204 21:21:03.085878   75137 logs.go:282] 1 containers: [e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98]
	I1204 21:21:03.085936   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.089957   75137 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:21:03.090008   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:21:03.124571   75137 cri.go:89] found id: "58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78"
	I1204 21:21:03.124590   75137 cri.go:89] found id: ""
	I1204 21:21:03.124597   75137 logs.go:282] 1 containers: [58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78]
	I1204 21:21:03.124633   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.128183   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:21:03.128241   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:21:03.159912   75137 cri.go:89] found id: "e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df"
	I1204 21:21:03.159935   75137 cri.go:89] found id: ""
	I1204 21:21:03.159942   75137 logs.go:282] 1 containers: [e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df]
	I1204 21:21:03.159991   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.163882   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:21:03.163934   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:21:03.202966   75137 cri.go:89] found id: "a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5"
	I1204 21:21:03.202983   75137 cri.go:89] found id: ""
	I1204 21:21:03.202990   75137 logs.go:282] 1 containers: [a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5]
	I1204 21:21:03.203028   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.206601   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:21:03.206656   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:21:03.239436   75137 cri.go:89] found id: "982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9"
	I1204 21:21:03.239461   75137 cri.go:89] found id: ""
	I1204 21:21:03.239471   75137 logs.go:282] 1 containers: [982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9]
	I1204 21:21:03.239522   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.243345   75137 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:21:03.243409   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:21:03.284225   75137 cri.go:89] found id: ""
	I1204 21:21:03.284260   75137 logs.go:282] 0 containers: []
	W1204 21:21:03.284269   75137 logs.go:284] No container was found matching "kindnet"
	I1204 21:21:03.284275   75137 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1204 21:21:03.284329   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1204 21:21:03.320487   75137 cri.go:89] found id: "07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317"
	I1204 21:21:03.320510   75137 cri.go:89] found id: "05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4"
	I1204 21:21:03.320514   75137 cri.go:89] found id: ""
	I1204 21:21:03.320520   75137 logs.go:282] 2 containers: [07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317 05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4]
	I1204 21:21:03.320572   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.324553   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.328284   75137 logs.go:123] Gathering logs for kubelet ...
	I1204 21:21:03.328307   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:21:03.398873   75137 logs.go:123] Gathering logs for kube-apiserver [8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78] ...
	I1204 21:21:03.398914   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78"
	I1204 21:21:03.452146   75137 logs.go:123] Gathering logs for kube-proxy [a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5] ...
	I1204 21:21:03.452175   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5"
	I1204 21:21:03.489830   75137 logs.go:123] Gathering logs for storage-provisioner [05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4] ...
	I1204 21:21:03.489860   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4"
	I1204 21:21:03.525086   75137 logs.go:123] Gathering logs for container status ...
	I1204 21:21:03.525115   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:21:03.569090   75137 logs.go:123] Gathering logs for kube-controller-manager [982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9] ...
	I1204 21:21:03.569123   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9"
	I1204 21:21:03.634685   75137 logs.go:123] Gathering logs for storage-provisioner [07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317] ...
	I1204 21:21:03.634714   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317"
	I1204 21:21:03.670229   75137 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:21:03.670258   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:21:04.127440   75137 logs.go:123] Gathering logs for dmesg ...
	I1204 21:21:04.127483   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:21:04.143058   75137 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:21:04.143102   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 21:21:04.254811   75137 logs.go:123] Gathering logs for etcd [e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98] ...
	I1204 21:21:04.254847   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98"
	I1204 21:21:04.310269   75137 logs.go:123] Gathering logs for coredns [58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78] ...
	I1204 21:21:04.310303   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78"
	I1204 21:21:04.344331   75137 logs.go:123] Gathering logs for kube-scheduler [e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df] ...
	I1204 21:21:04.344365   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df"
	I1204 21:21:06.883632   75137 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I1204 21:21:06.887845   75137 api_server.go:279] https://192.168.39.82:8443/healthz returned 200:
	ok
	I1204 21:21:06.888685   75137 api_server.go:141] control plane version: v1.31.2
	I1204 21:21:06.888701   75137 api_server.go:131] duration metric: took 3.886315455s to wait for apiserver health ...
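Note (illustrative, not part of the captured run): the apiserver health wait logged above polls https://192.168.39.82:8443/healthz until it returns 200, then records the duration. The sketch below is a minimal standalone Go version of that probe; the URL, timeout, poll interval, and the decision to skip TLS verification are placeholder assumptions, not minikube's actual api_server.go implementation.

// healthzwait.go - minimal sketch: poll an apiserver /healthz endpoint until
// it returns 200 or a deadline expires. Illustrative only; values are
// placeholders taken from this log, not minikube's configuration.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	// The test cluster uses a self-signed CA, so certificate verification is
	// skipped purely for the sake of this sketch.
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: ok
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, url)
}

func main() {
	if err := waitForHealthz("https://192.168.39.82:8443/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver is healthy")
}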
	I1204 21:21:06.888708   75137 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 21:21:06.888730   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:21:06.888774   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:21:06.930295   75137 cri.go:89] found id: "8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78"
	I1204 21:21:06.930316   75137 cri.go:89] found id: ""
	I1204 21:21:06.930324   75137 logs.go:282] 1 containers: [8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78]
	I1204 21:21:06.930372   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:06.934529   75137 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:21:06.934620   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:21:06.970613   75137 cri.go:89] found id: "e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98"
	I1204 21:21:06.970641   75137 cri.go:89] found id: ""
	I1204 21:21:06.970651   75137 logs.go:282] 1 containers: [e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98]
	I1204 21:21:06.970696   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:06.974756   75137 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:21:06.974824   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:21:07.010285   75137 cri.go:89] found id: "58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78"
	I1204 21:21:07.010310   75137 cri.go:89] found id: ""
	I1204 21:21:07.010319   75137 logs.go:282] 1 containers: [58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78]
	I1204 21:21:07.010362   75137 ssh_runner.go:195] Run: which crictl
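Note (illustrative): the repeated cri.go/ssh_runner lines above list containers for each component by running crictl with --quiet and treating every non-empty output line as a container ID. A minimal local sketch of that pattern follows, assuming crictl is on PATH and sudo is available; it is not minikube's cri.go and its error handling is simplified.

// crilist.go - sketch of the container-listing pattern above:
// "crictl ps -a --quiet --name=<name>" and collect the returned IDs.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps failed: %w", err)
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line) // each non-empty line is one container ID
		}
	}
	return ids, nil
}

func main() {
	ids, err := listContainers("storage-provisioner")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("%d containers: %v\n", len(ids), ids)
}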
	I1204 21:21:02.764114   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:04.764230   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:06.764928   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:04.623324   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:06.624331   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:08.140159   75464 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.505600399s)
	I1204 21:21:08.140254   75464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:21:08.159450   75464 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:21:08.169756   75464 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:21:08.179705   75464 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:21:08.179729   75464 kubeadm.go:157] found existing configuration files:
	
	I1204 21:21:08.179783   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:21:08.188796   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:21:08.188871   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:21:08.197758   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:21:08.206347   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:21:08.206409   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:21:08.215431   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:21:08.224674   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:21:08.224737   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:21:08.234337   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:21:08.243774   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:21:08.243833   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
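Note (illustrative): the grep/rm sequence above keeps an existing /etc/kubernetes/*.conf only if it references the expected control-plane endpoint and deletes it otherwise, so kubeadm regenerates it on init. A minimal local sketch of that decision follows, with the endpoint and file paths taken from this log; minikube performs the equivalent via sudo over SSH, and missing files (the "No such file or directory" case) are simply skipped.

// staleconf.go - sketch of the stale kubeconfig cleanup visible above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func cleanStaleConfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil {
			// Missing file: nothing to clean up for this path.
			fmt.Printf("skip %s: %v\n", p, err)
			continue
		}
		if strings.Contains(string(data), endpoint) {
			fmt.Printf("keep %s: references %s\n", p, endpoint)
			continue
		}
		// Config exists but points elsewhere; remove it so kubeadm regenerates it.
		if err := os.Remove(p); err != nil {
			fmt.Printf("remove %s failed: %v\n", p, err)
		} else {
			fmt.Printf("removed stale %s\n", p)
		}
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}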
	I1204 21:21:08.253498   75464 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 21:21:08.321237   75464 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1204 21:21:08.321370   75464 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 21:21:08.458714   75464 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 21:21:08.458866   75464 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 21:21:08.459026   75464 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1204 21:21:08.639536   75464 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 21:21:08.641635   75464 out.go:235]   - Generating certificates and keys ...
	I1204 21:21:08.641739   75464 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 21:21:08.641826   75464 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 21:21:08.641935   75464 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1204 21:21:08.642068   75464 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1204 21:21:08.642175   75464 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1204 21:21:08.642223   75464 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1204 21:21:08.642498   75464 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1204 21:21:08.642914   75464 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1204 21:21:08.643567   75464 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1204 21:21:08.644276   75464 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1204 21:21:08.644502   75464 kubeadm.go:310] [certs] Using the existing "sa" key
	I1204 21:21:08.644553   75464 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 21:21:08.800107   75464 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 21:21:08.920050   75464 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 21:21:09.376869   75464 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 21:21:09.463826   75464 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 21:21:09.479167   75464 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 21:21:09.479321   75464 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 21:21:09.479434   75464 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 21:21:09.606736   75464 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 21:21:07.014564   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:21:07.014628   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:21:07.054654   75137 cri.go:89] found id: "e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df"
	I1204 21:21:07.054678   75137 cri.go:89] found id: ""
	I1204 21:21:07.054686   75137 logs.go:282] 1 containers: [e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df]
	I1204 21:21:07.054734   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:07.058625   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:21:07.058683   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:21:07.094238   75137 cri.go:89] found id: "a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5"
	I1204 21:21:07.094280   75137 cri.go:89] found id: ""
	I1204 21:21:07.094291   75137 logs.go:282] 1 containers: [a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5]
	I1204 21:21:07.094359   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:07.098427   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:21:07.098484   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:21:07.135055   75137 cri.go:89] found id: "982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9"
	I1204 21:21:07.135079   75137 cri.go:89] found id: ""
	I1204 21:21:07.135088   75137 logs.go:282] 1 containers: [982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9]
	I1204 21:21:07.135145   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:07.139488   75137 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:21:07.139564   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:21:07.175963   75137 cri.go:89] found id: ""
	I1204 21:21:07.175989   75137 logs.go:282] 0 containers: []
	W1204 21:21:07.176002   75137 logs.go:284] No container was found matching "kindnet"
	I1204 21:21:07.176009   75137 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1204 21:21:07.176069   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1204 21:21:07.212003   75137 cri.go:89] found id: "07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317"
	I1204 21:21:07.212034   75137 cri.go:89] found id: "05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4"
	I1204 21:21:07.212040   75137 cri.go:89] found id: ""
	I1204 21:21:07.212050   75137 logs.go:282] 2 containers: [07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317 05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4]
	I1204 21:21:07.212115   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:07.216184   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:07.219773   75137 logs.go:123] Gathering logs for dmesg ...
	I1204 21:21:07.219803   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:21:07.233282   75137 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:21:07.233307   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 21:21:07.341593   75137 logs.go:123] Gathering logs for etcd [e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98] ...
	I1204 21:21:07.341626   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98"
	I1204 21:21:07.393994   75137 logs.go:123] Gathering logs for kube-scheduler [e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df] ...
	I1204 21:21:07.394024   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df"
	I1204 21:21:07.437177   75137 logs.go:123] Gathering logs for storage-provisioner [07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317] ...
	I1204 21:21:07.437205   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317"
	I1204 21:21:07.469913   75137 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:21:07.469952   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:21:07.822608   75137 logs.go:123] Gathering logs for container status ...
	I1204 21:21:07.822652   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:21:07.861671   75137 logs.go:123] Gathering logs for kubelet ...
	I1204 21:21:07.861703   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:21:07.933833   75137 logs.go:123] Gathering logs for kube-apiserver [8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78] ...
	I1204 21:21:07.933876   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78"
	I1204 21:21:07.976184   75137 logs.go:123] Gathering logs for coredns [58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78] ...
	I1204 21:21:07.976215   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78"
	I1204 21:21:08.011181   75137 logs.go:123] Gathering logs for kube-proxy [a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5] ...
	I1204 21:21:08.011206   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5"
	I1204 21:21:08.053404   75137 logs.go:123] Gathering logs for kube-controller-manager [982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9] ...
	I1204 21:21:08.053430   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9"
	I1204 21:21:08.113301   75137 logs.go:123] Gathering logs for storage-provisioner [05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4] ...
	I1204 21:21:08.113402   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4"
	I1204 21:21:10.665164   75137 system_pods.go:59] 8 kube-system pods found
	I1204 21:21:10.665195   75137 system_pods.go:61] "coredns-7c65d6cfc9-ct5xn" [be113b96-b21f-4fd5-8cd9-11b149a0a838] Running
	I1204 21:21:10.665200   75137 system_pods.go:61] "etcd-embed-certs-566991" [23603883-2c42-48ff-95f5-d58f04bab630] Running
	I1204 21:21:10.665204   75137 system_pods.go:61] "kube-apiserver-embed-certs-566991" [880279d0-9c57-44b1-b223-cea07fc8552e] Running
	I1204 21:21:10.665208   75137 system_pods.go:61] "kube-controller-manager-embed-certs-566991" [1512be05-cbf1-48ca-a0a5-db1e320040e0] Running
	I1204 21:21:10.665211   75137 system_pods.go:61] "kube-proxy-4fv72" [22b84591-6767-4414-9869-9d89206a03f2] Running
	I1204 21:21:10.665215   75137 system_pods.go:61] "kube-scheduler-embed-certs-566991" [1eca2a77-0f2a-4d94-992e-22acf8f54649] Running
	I1204 21:21:10.665220   75137 system_pods.go:61] "metrics-server-6867b74b74-9vlcd" [1acb08f3-e403-458d-b3e2-e32c07da6afb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:21:10.665225   75137 system_pods.go:61] "storage-provisioner" [f8acdb07-16e7-457f-81b8-85416b849890] Running
	I1204 21:21:10.665234   75137 system_pods.go:74] duration metric: took 3.776519738s to wait for pod list to return data ...
	I1204 21:21:10.665240   75137 default_sa.go:34] waiting for default service account to be created ...
	I1204 21:21:10.667483   75137 default_sa.go:45] found service account: "default"
	I1204 21:21:10.667501   75137 default_sa.go:55] duration metric: took 2.252763ms for default service account to be created ...
	I1204 21:21:10.667508   75137 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 21:21:10.671331   75137 system_pods.go:86] 8 kube-system pods found
	I1204 21:21:10.671351   75137 system_pods.go:89] "coredns-7c65d6cfc9-ct5xn" [be113b96-b21f-4fd5-8cd9-11b149a0a838] Running
	I1204 21:21:10.671356   75137 system_pods.go:89] "etcd-embed-certs-566991" [23603883-2c42-48ff-95f5-d58f04bab630] Running
	I1204 21:21:10.671360   75137 system_pods.go:89] "kube-apiserver-embed-certs-566991" [880279d0-9c57-44b1-b223-cea07fc8552e] Running
	I1204 21:21:10.671363   75137 system_pods.go:89] "kube-controller-manager-embed-certs-566991" [1512be05-cbf1-48ca-a0a5-db1e320040e0] Running
	I1204 21:21:10.671366   75137 system_pods.go:89] "kube-proxy-4fv72" [22b84591-6767-4414-9869-9d89206a03f2] Running
	I1204 21:21:10.671386   75137 system_pods.go:89] "kube-scheduler-embed-certs-566991" [1eca2a77-0f2a-4d94-992e-22acf8f54649] Running
	I1204 21:21:10.671396   75137 system_pods.go:89] "metrics-server-6867b74b74-9vlcd" [1acb08f3-e403-458d-b3e2-e32c07da6afb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:21:10.671402   75137 system_pods.go:89] "storage-provisioner" [f8acdb07-16e7-457f-81b8-85416b849890] Running
	I1204 21:21:10.671414   75137 system_pods.go:126] duration metric: took 3.900254ms to wait for k8s-apps to be running ...
	I1204 21:21:10.671426   75137 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 21:21:10.671467   75137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:21:10.687086   75137 system_svc.go:56] duration metric: took 15.655514ms WaitForService to wait for kubelet
	I1204 21:21:10.687105   75137 kubeadm.go:582] duration metric: took 4m28.018694904s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 21:21:10.687123   75137 node_conditions.go:102] verifying NodePressure condition ...
	I1204 21:21:10.689250   75137 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 21:21:10.689267   75137 node_conditions.go:123] node cpu capacity is 2
	I1204 21:21:10.689277   75137 node_conditions.go:105] duration metric: took 2.149506ms to run NodePressure ...
	I1204 21:21:10.689287   75137 start.go:241] waiting for startup goroutines ...
	I1204 21:21:10.689296   75137 start.go:246] waiting for cluster config update ...
	I1204 21:21:10.689306   75137 start.go:255] writing updated cluster config ...
	I1204 21:21:10.689547   75137 ssh_runner.go:195] Run: rm -f paused
	I1204 21:21:10.738387   75137 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1204 21:21:10.740254   75137 out.go:177] * Done! kubectl is now configured to use "embed-certs-566991" cluster and "default" namespace by default
	I1204 21:21:09.608599   75464 out.go:235]   - Booting up control plane ...
	I1204 21:21:09.608729   75464 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 21:21:09.613477   75464 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 21:21:09.614444   75464 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 21:21:09.623091   75464 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 21:21:09.626249   75464 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1204 21:21:08.765095   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:10.765470   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:09.125585   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:11.624603   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:13.264238   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:15.265563   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:13.624873   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:16.123483   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:17.764078   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:19.765682   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:18.626401   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:21.125606   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:22.264711   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:24.265632   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:26.764992   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:23.623351   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:25.623547   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:27.624579   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:28.765133   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:31.264203   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:30.123937   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:32.623876   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:33.264732   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:35.765165   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:35.123685   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:37.123863   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:38.264907   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:40.265233   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:39.124651   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:40.117461   75746 pod_ready.go:82] duration metric: took 4m0.000125257s for pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace to be "Ready" ...
	E1204 21:21:40.117486   75746 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace to be "Ready" (will not retry!)
	I1204 21:21:40.117508   75746 pod_ready.go:39] duration metric: took 4m13.544219225s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:21:40.117564   75746 kubeadm.go:597] duration metric: took 4m22.244889794s to restartPrimaryControlPlane
	W1204 21:21:40.117617   75746 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1204 21:21:40.117646   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
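Note (illustrative): the pod_ready timeout above is a poll-until-deadline wait (4m0s here) around a readiness condition; once the deadline passes, minikube gives up and falls back to a cluster reset. A generic sketch of that loop follows, with a placeholder condition; the real check reads the Pod's Ready condition through the Kubernetes API.

// podwait.go - generic poll-until-timeout loop of the kind behind the
// pod_ready wait above. The condition is a stand-in for a real readiness check.
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor polls cond every interval until it returns true or timeout elapses.
func waitFor(interval, timeout time.Duration, cond func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for {
		ok, err := cond()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	start := time.Now()
	err := waitFor(2*time.Second, 10*time.Second, func() (bool, error) {
		// Placeholder: a real implementation would fetch the pod and return
		// (true, nil) once its "Ready" condition is met.
		return false, nil
	})
	fmt.Printf("result after %s: %v\n", time.Since(start).Round(time.Millisecond), err)
}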
	I1204 21:21:42.764614   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:44.765642   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:49.627118   75464 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1204 21:21:49.627744   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:21:49.627940   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:21:47.264873   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:49.765483   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:54.628283   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:21:54.628526   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:21:52.264073   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:54.264333   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:56.267410   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:58.764653   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:00.765653   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:04.628774   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:22:04.629010   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:22:06.288530   75746 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.170858751s)
	I1204 21:22:06.288613   75746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:22:06.309458   75746 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:22:06.322805   75746 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:22:06.336482   75746 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:22:06.336508   75746 kubeadm.go:157] found existing configuration files:
	
	I1204 21:22:06.336558   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1204 21:22:06.348599   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:22:06.348656   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:22:06.362232   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1204 21:22:06.379259   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:22:06.379348   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:22:06.411281   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1204 21:22:06.422033   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:22:06.422108   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:22:06.432505   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1204 21:22:06.441734   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:22:06.441789   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:22:06.451237   75746 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 21:22:06.498732   75746 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1204 21:22:06.498852   75746 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 21:22:06.614368   75746 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 21:22:06.614469   75746 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 21:22:06.614599   75746 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1204 21:22:06.623454   75746 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 21:22:03.264992   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:05.765395   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:06.625133   75746 out.go:235]   - Generating certificates and keys ...
	I1204 21:22:06.625245   75746 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 21:22:06.625364   75746 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 21:22:06.625491   75746 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1204 21:22:06.625594   75746 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1204 21:22:06.625712   75746 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1204 21:22:06.625792   75746 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1204 21:22:06.625889   75746 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1204 21:22:06.625984   75746 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1204 21:22:06.626100   75746 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1204 21:22:06.626210   75746 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1204 21:22:06.626277   75746 kubeadm.go:310] [certs] Using the existing "sa" key
	I1204 21:22:06.626348   75746 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 21:22:06.726450   75746 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 21:22:06.873790   75746 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1204 21:22:07.175994   75746 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 21:22:07.250702   75746 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 21:22:07.320319   75746 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 21:22:07.320901   75746 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 21:22:07.323434   75746 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 21:22:07.325316   75746 out.go:235]   - Booting up control plane ...
	I1204 21:22:07.325446   75746 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 21:22:07.325543   75746 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 21:22:07.326549   75746 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 21:22:07.347127   75746 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 21:22:07.353453   75746 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 21:22:07.353587   75746 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 21:22:07.488768   75746 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1204 21:22:07.488952   75746 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1204 21:22:07.765784   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:10.265661   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:11.758507   75012 pod_ready.go:82] duration metric: took 4m0.000236813s for pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace to be "Ready" ...
	E1204 21:22:11.758550   75012 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace to be "Ready" (will not retry!)
	I1204 21:22:11.758567   75012 pod_ready.go:39] duration metric: took 4m14.511728433s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:22:11.758593   75012 kubeadm.go:597] duration metric: took 4m21.138454983s to restartPrimaryControlPlane
	W1204 21:22:11.758643   75012 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1204 21:22:11.758668   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1204 21:22:07.993325   75746 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 504.943417ms
	I1204 21:22:07.993405   75746 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1204 21:22:12.997741   75746 kubeadm.go:310] [api-check] The API server is healthy after 5.001906934s
	I1204 21:22:13.012187   75746 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1204 21:22:13.029586   75746 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1204 21:22:13.062375   75746 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1204 21:22:13.062633   75746 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-439360 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1204 21:22:13.077941   75746 kubeadm.go:310] [bootstrap-token] Using token: 5mut2g.pz4sir8q7093cs2b
	I1204 21:22:13.079394   75746 out.go:235]   - Configuring RBAC rules ...
	I1204 21:22:13.079556   75746 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1204 21:22:13.088458   75746 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1204 21:22:13.095952   75746 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1204 21:22:13.103530   75746 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1204 21:22:13.106875   75746 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1204 21:22:13.110658   75746 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1204 21:22:13.404565   75746 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1204 21:22:13.831997   75746 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1204 21:22:14.404650   75746 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1204 21:22:14.404678   75746 kubeadm.go:310] 
	I1204 21:22:14.404764   75746 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1204 21:22:14.404789   75746 kubeadm.go:310] 
	I1204 21:22:14.404894   75746 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1204 21:22:14.404903   75746 kubeadm.go:310] 
	I1204 21:22:14.404930   75746 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1204 21:22:14.404981   75746 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1204 21:22:14.405060   75746 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1204 21:22:14.405088   75746 kubeadm.go:310] 
	I1204 21:22:14.405203   75746 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1204 21:22:14.405216   75746 kubeadm.go:310] 
	I1204 21:22:14.405286   75746 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1204 21:22:14.405296   75746 kubeadm.go:310] 
	I1204 21:22:14.405370   75746 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1204 21:22:14.405487   75746 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1204 21:22:14.405604   75746 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1204 21:22:14.405621   75746 kubeadm.go:310] 
	I1204 21:22:14.405701   75746 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1204 21:22:14.405772   75746 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1204 21:22:14.405781   75746 kubeadm.go:310] 
	I1204 21:22:14.405853   75746 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 5mut2g.pz4sir8q7093cs2b \
	I1204 21:22:14.406000   75746 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 \
	I1204 21:22:14.406034   75746 kubeadm.go:310] 	--control-plane 
	I1204 21:22:14.406043   75746 kubeadm.go:310] 
	I1204 21:22:14.406112   75746 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1204 21:22:14.406119   75746 kubeadm.go:310] 
	I1204 21:22:14.406241   75746 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 5mut2g.pz4sir8q7093cs2b \
	I1204 21:22:14.406397   75746 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 
	I1204 21:22:14.407013   75746 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
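Note (illustrative): the --discovery-token-ca-cert-hash value printed in the join command above is the SHA-256 of the cluster CA certificate's Subject Public Key Info. A minimal sketch of recomputing it from the CA certificate follows; the certificate path is the default minikube location and is an assumption here.

// cacerthash.go - sketch: recompute a kubeadm discovery-token-ca-cert-hash
// (sha256 of the CA certificate's DER-encoded SubjectPublicKeyInfo).
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func caCertHash(path string) (string, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return "", fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	return fmt.Sprintf("sha256:%x", sum[:]), nil
}

func main() {
	// Assumed default certificate location for a minikube node; adjust as needed.
	h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(h)
}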
	I1204 21:22:14.407049   75746 cni.go:84] Creating CNI manager for ""
	I1204 21:22:14.407060   75746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:22:14.408949   75746 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 21:22:14.410361   75746 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 21:22:14.420749   75746 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1204 21:22:14.439214   75746 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 21:22:14.439295   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:14.439322   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-439360 minikube.k8s.io/updated_at=2024_12_04T21_22_14_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59 minikube.k8s.io/name=default-k8s-diff-port-439360 minikube.k8s.io/primary=true
	I1204 21:22:14.459582   75746 ops.go:34] apiserver oom_adj: -16
	I1204 21:22:14.637938   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:15.138980   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:15.638942   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:16.138381   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:16.638528   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:17.138320   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:17.637995   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:18.138540   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:18.638754   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:19.138113   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:19.246385   75746 kubeadm.go:1113] duration metric: took 4.807160948s to wait for elevateKubeSystemPrivileges
	I1204 21:22:19.246430   75746 kubeadm.go:394] duration metric: took 5m1.419721853s to StartCluster
	I1204 21:22:19.246455   75746 settings.go:142] acquiring lock: {Name:mk51df5708ef0b8fe125ead566b8d3e857234e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:22:19.246556   75746 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:22:19.249082   75746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/kubeconfig: {Name:mk338cb7deb77a607d0c199d94a556bdfd19bef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:22:19.249393   75746 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.171 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 21:22:19.249684   75746 config.go:182] Loaded profile config "default-k8s-diff-port-439360": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:22:19.249745   75746 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 21:22:19.249861   75746 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-439360"
	I1204 21:22:19.249884   75746 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-439360"
	W1204 21:22:19.249896   75746 addons.go:243] addon storage-provisioner should already be in state true
	I1204 21:22:19.249928   75746 host.go:66] Checking if "default-k8s-diff-port-439360" exists ...
	I1204 21:22:19.250440   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:19.250479   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:19.250557   75746 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-439360"
	I1204 21:22:19.250580   75746 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-439360"
	I1204 21:22:19.250737   75746 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-439360"
	I1204 21:22:19.250757   75746 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-439360"
	W1204 21:22:19.250765   75746 addons.go:243] addon metrics-server should already be in state true
	I1204 21:22:19.250798   75746 host.go:66] Checking if "default-k8s-diff-port-439360" exists ...
	I1204 21:22:19.251048   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:19.251091   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:19.251249   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:19.251294   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:19.251622   75746 out.go:177] * Verifying Kubernetes components...
	I1204 21:22:19.252993   75746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:22:19.269179   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44783
	I1204 21:22:19.269441   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35391
	I1204 21:22:19.269740   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:19.269833   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:19.270300   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:22:19.270324   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:19.270400   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:22:19.270418   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:19.270418   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34247
	I1204 21:22:19.270725   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:19.270832   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:19.270866   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:19.270904   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetState
	I1204 21:22:19.271326   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:22:19.271337   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:19.271415   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:19.271463   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:19.271686   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:19.272330   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:19.272388   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:19.274803   75746 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-439360"
	W1204 21:22:19.274824   75746 addons.go:243] addon default-storageclass should already be in state true
	I1204 21:22:19.274853   75746 host.go:66] Checking if "default-k8s-diff-port-439360" exists ...
	I1204 21:22:19.275234   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:19.275267   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:19.291309   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40009
	I1204 21:22:19.291961   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:19.291985   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41279
	I1204 21:22:19.292400   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:22:19.292420   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:19.292783   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:19.292833   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:19.293039   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetState
	I1204 21:22:19.293113   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36479
	I1204 21:22:19.293349   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:22:19.293362   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:19.293726   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:19.294210   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:19.294239   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:19.294431   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:19.294890   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:22:19.294908   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:19.295400   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:19.295584   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetState
	I1204 21:22:19.295720   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:22:19.297304   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:22:19.297592   75746 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:22:19.298747   75746 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1204 21:22:19.299871   75746 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:22:19.299895   75746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 21:22:19.299916   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:22:19.301582   75746 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1204 21:22:19.301598   75746 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1204 21:22:19.301612   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:22:19.303499   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:22:19.305018   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:22:19.305367   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:22:19.305393   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:22:19.305566   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:22:19.305775   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:22:19.305848   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:22:19.305869   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:22:19.305912   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:22:19.306121   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:22:19.306313   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:22:19.306389   75746 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa Username:docker}
	I1204 21:22:19.306691   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:22:19.306872   75746 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa Username:docker}
	I1204 21:22:19.314163   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42045
	I1204 21:22:19.314569   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:19.315106   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:22:19.315134   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:19.315690   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:19.315993   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetState
	I1204 21:22:19.317928   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:22:19.318171   75746 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 21:22:19.318182   75746 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 21:22:19.318195   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:22:19.321203   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:22:19.321582   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:22:19.321599   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:22:19.321855   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:22:19.322059   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:22:19.322226   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:22:19.322367   75746 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa Username:docker}
	I1204 21:22:19.522886   75746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:22:19.577656   75746 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-439360" to be "Ready" ...
	I1204 21:22:19.586712   75746 node_ready.go:49] node "default-k8s-diff-port-439360" has status "Ready":"True"
	I1204 21:22:19.586737   75746 node_ready.go:38] duration metric: took 9.034653ms for node "default-k8s-diff-port-439360" to be "Ready" ...
	I1204 21:22:19.586745   75746 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:22:19.595683   75746 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4jmcl" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:19.650177   75746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:22:19.708333   75746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 21:22:19.721106   75746 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1204 21:22:19.721151   75746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1204 21:22:19.793058   75746 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1204 21:22:19.793105   75746 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1204 21:22:19.926884   75746 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 21:22:19.926911   75746 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1204 21:22:20.028322   75746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 21:22:20.668142   75746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.017919983s)
	I1204 21:22:20.668197   75746 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:20.668200   75746 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:20.668223   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Close
	I1204 21:22:20.668211   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Close
	I1204 21:22:20.668613   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Closing plugin on server side
	I1204 21:22:20.668627   75746 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:20.668640   75746 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:20.668660   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Closing plugin on server side
	I1204 21:22:20.668687   75746 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:20.668701   75746 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:20.668710   75746 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:20.668729   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Close
	I1204 21:22:20.668663   75746 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:20.668789   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Close
	I1204 21:22:20.668936   75746 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:20.668981   75746 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:20.670242   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Closing plugin on server side
	I1204 21:22:20.670255   75746 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:20.670276   75746 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:20.713659   75746 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:20.713680   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Close
	I1204 21:22:20.714056   75746 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:20.714107   75746 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:20.714076   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Closing plugin on server side
	I1204 21:22:21.064703   75746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.03633998s)
	I1204 21:22:21.064768   75746 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:21.064783   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Close
	I1204 21:22:21.065188   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Closing plugin on server side
	I1204 21:22:21.065197   75746 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:21.065212   75746 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:21.065220   75746 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:21.065233   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Close
	I1204 21:22:21.065472   75746 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:21.065490   75746 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:21.065502   75746 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-439360"
	I1204 21:22:21.067198   75746 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1204 21:22:21.068410   75746 addons.go:510] duration metric: took 1.818663539s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1204 21:22:21.602398   75746 pod_ready.go:93] pod "coredns-7c65d6cfc9-4jmcl" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:21.602428   75746 pod_ready.go:82] duration metric: took 2.006718822s for pod "coredns-7c65d6cfc9-4jmcl" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:21.602442   75746 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tzhgh" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:24.629623   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:22:24.629860   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:22:23.610993   75746 pod_ready.go:103] pod "coredns-7c65d6cfc9-tzhgh" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:24.117785   75746 pod_ready.go:93] pod "coredns-7c65d6cfc9-tzhgh" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:24.117813   75746 pod_ready.go:82] duration metric: took 2.51536279s for pod "coredns-7c65d6cfc9-tzhgh" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:24.117824   75746 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:24.124800   75746 pod_ready.go:93] pod "etcd-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:24.124823   75746 pod_ready.go:82] duration metric: took 6.990353ms for pod "etcd-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:24.124832   75746 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:24.131040   75746 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:24.131061   75746 pod_ready.go:82] duration metric: took 6.222286ms for pod "kube-apiserver-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:24.131070   75746 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:26.137404   75746 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:26.637414   75746 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:26.637440   75746 pod_ready.go:82] duration metric: took 2.506362827s for pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:26.637452   75746 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hclwt" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:26.641759   75746 pod_ready.go:93] pod "kube-proxy-hclwt" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:26.641781   75746 pod_ready.go:82] duration metric: took 4.323262ms for pod "kube-proxy-hclwt" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:26.641793   75746 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:28.148731   75746 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:28.148753   75746 pod_ready.go:82] duration metric: took 1.50695195s for pod "kube-scheduler-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:28.148761   75746 pod_ready.go:39] duration metric: took 8.562005978s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:22:28.148776   75746 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:22:28.148825   75746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:22:28.165983   75746 api_server.go:72] duration metric: took 8.916515972s to wait for apiserver process to appear ...
	I1204 21:22:28.166013   75746 api_server.go:88] waiting for apiserver healthz status ...
	I1204 21:22:28.166034   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:22:28.170244   75746 api_server.go:279] https://192.168.50.171:8444/healthz returned 200:
	ok
	I1204 21:22:28.171215   75746 api_server.go:141] control plane version: v1.31.2
	I1204 21:22:28.171245   75746 api_server.go:131] duration metric: took 5.223023ms to wait for apiserver health ...
	I1204 21:22:28.171257   75746 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 21:22:28.177524   75746 system_pods.go:59] 9 kube-system pods found
	I1204 21:22:28.177548   75746 system_pods.go:61] "coredns-7c65d6cfc9-4jmcl" [e8d193d2-0374-43a5-addd-96cdee963cc9] Running
	I1204 21:22:28.177553   75746 system_pods.go:61] "coredns-7c65d6cfc9-tzhgh" [aafae17b-5a47-4a70-bc80-94cbbca8fe38] Running
	I1204 21:22:28.177557   75746 system_pods.go:61] "etcd-default-k8s-diff-port-439360" [e4293118-8718-4722-b6b6-722896a605e9] Running
	I1204 21:22:28.177560   75746 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-439360" [71be94bb-bd89-4f40-85eb-0a672f29d959] Running
	I1204 21:22:28.177563   75746 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-439360" [85946631-ff2a-4203-800d-00a23a3c3408] Running
	I1204 21:22:28.177567   75746 system_pods.go:61] "kube-proxy-hclwt" [eef6c093-2186-437b-9a13-c8bafbcb4f78] Running
	I1204 21:22:28.177570   75746 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-439360" [0ed74c15-2c48-4a62-8bbf-0f2a272bb119] Running
	I1204 21:22:28.177577   75746 system_pods.go:61] "metrics-server-6867b74b74-v88hj" [9b6c696c-e110-4d53-98c9-41069407b45b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:22:28.177582   75746 system_pods.go:61] "storage-provisioner" [aac88490-a422-4889-bff4-b180638846cf] Running
	I1204 21:22:28.177592   75746 system_pods.go:74] duration metric: took 6.322477ms to wait for pod list to return data ...
	I1204 21:22:28.177605   75746 default_sa.go:34] waiting for default service account to be created ...
	I1204 21:22:28.180243   75746 default_sa.go:45] found service account: "default"
	I1204 21:22:28.180262   75746 default_sa.go:55] duration metric: took 2.648929ms for default service account to be created ...
	I1204 21:22:28.180270   75746 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 21:22:28.309199   75746 system_pods.go:86] 9 kube-system pods found
	I1204 21:22:28.309229   75746 system_pods.go:89] "coredns-7c65d6cfc9-4jmcl" [e8d193d2-0374-43a5-addd-96cdee963cc9] Running
	I1204 21:22:28.309237   75746 system_pods.go:89] "coredns-7c65d6cfc9-tzhgh" [aafae17b-5a47-4a70-bc80-94cbbca8fe38] Running
	I1204 21:22:28.309244   75746 system_pods.go:89] "etcd-default-k8s-diff-port-439360" [e4293118-8718-4722-b6b6-722896a605e9] Running
	I1204 21:22:28.309251   75746 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-439360" [71be94bb-bd89-4f40-85eb-0a672f29d959] Running
	I1204 21:22:28.309257   75746 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-439360" [85946631-ff2a-4203-800d-00a23a3c3408] Running
	I1204 21:22:28.309263   75746 system_pods.go:89] "kube-proxy-hclwt" [eef6c093-2186-437b-9a13-c8bafbcb4f78] Running
	I1204 21:22:28.309269   75746 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-439360" [0ed74c15-2c48-4a62-8bbf-0f2a272bb119] Running
	I1204 21:22:28.309283   75746 system_pods.go:89] "metrics-server-6867b74b74-v88hj" [9b6c696c-e110-4d53-98c9-41069407b45b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:22:28.309295   75746 system_pods.go:89] "storage-provisioner" [aac88490-a422-4889-bff4-b180638846cf] Running
	I1204 21:22:28.309307   75746 system_pods.go:126] duration metric: took 129.030872ms to wait for k8s-apps to be running ...
	I1204 21:22:28.309320   75746 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 21:22:28.309379   75746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:22:28.324307   75746 system_svc.go:56] duration metric: took 14.979432ms WaitForService to wait for kubelet
	I1204 21:22:28.324336   75746 kubeadm.go:582] duration metric: took 9.074873675s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 21:22:28.324353   75746 node_conditions.go:102] verifying NodePressure condition ...
	I1204 21:22:28.507218   75746 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 21:22:28.507245   75746 node_conditions.go:123] node cpu capacity is 2
	I1204 21:22:28.507256   75746 node_conditions.go:105] duration metric: took 182.898538ms to run NodePressure ...
	I1204 21:22:28.507268   75746 start.go:241] waiting for startup goroutines ...
	I1204 21:22:28.507277   75746 start.go:246] waiting for cluster config update ...
	I1204 21:22:28.507291   75746 start.go:255] writing updated cluster config ...
	I1204 21:22:28.507595   75746 ssh_runner.go:195] Run: rm -f paused
	I1204 21:22:28.556033   75746 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1204 21:22:28.557819   75746 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-439360" cluster and "default" namespace by default
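At this point the default-k8s-diff-port-439360 profile has finished starting: the node reported Ready, the storage-provisioner, default-storageclass and metrics-server addons were applied, and kubectl was pointed at the new context. A minimal spot check of that same state from the workstation (not part of the test harness; it assumes only the context name reported in the line above and the usual kube-system namespace) would be:

  kubectl --context default-k8s-diff-port-439360 get nodes
  kubectl --context default-k8s-diff-port-439360 -n kube-system get pods

The metrics-server pod was still Pending in the last pod listing the harness took at 21:22:28, so the second command is the quickest way to see whether it eventually reached Running.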
	I1204 21:22:37.891653   75012 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.132950428s)
	I1204 21:22:37.891741   75012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:22:37.906656   75012 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:22:37.915649   75012 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:22:37.925588   75012 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:22:37.925609   75012 kubeadm.go:157] found existing configuration files:
	
	I1204 21:22:37.925655   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:22:37.934524   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:22:37.934575   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:22:37.943390   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:22:37.951745   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:22:37.951797   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:22:37.960501   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:22:37.969208   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:22:37.969254   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:22:37.978350   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:22:37.986861   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:22:37.986930   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:22:37.995584   75012 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 21:22:38.047149   75012 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1204 21:22:38.047224   75012 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 21:22:38.155964   75012 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 21:22:38.156086   75012 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 21:22:38.156215   75012 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1204 21:22:38.164743   75012 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 21:22:38.166662   75012 out.go:235]   - Generating certificates and keys ...
	I1204 21:22:38.166755   75012 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 21:22:38.166837   75012 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 21:22:38.166935   75012 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1204 21:22:38.167045   75012 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1204 21:22:38.167154   75012 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1204 21:22:38.167230   75012 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1204 21:22:38.167325   75012 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1204 21:22:38.167446   75012 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1204 21:22:38.169398   75012 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1204 21:22:38.169495   75012 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1204 21:22:38.169530   75012 kubeadm.go:310] [certs] Using the existing "sa" key
	I1204 21:22:38.169602   75012 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 21:22:38.350215   75012 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 21:22:38.469586   75012 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1204 21:22:38.636991   75012 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 21:22:38.883785   75012 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 21:22:39.014632   75012 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 21:22:39.015041   75012 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 21:22:39.017806   75012 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 21:22:39.019631   75012 out.go:235]   - Booting up control plane ...
	I1204 21:22:39.019760   75012 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 21:22:39.019831   75012 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 21:22:39.019895   75012 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 21:22:39.037352   75012 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 21:22:39.044419   75012 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 21:22:39.044489   75012 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 21:22:39.166636   75012 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1204 21:22:39.166782   75012 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1204 21:22:39.667748   75012 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.068181ms
	I1204 21:22:39.667876   75012 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1204 21:22:44.669497   75012 kubeadm.go:310] [api-check] The API server is healthy after 5.001931003s
	I1204 21:22:44.682282   75012 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1204 21:22:44.700056   75012 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1204 21:22:44.745563   75012 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1204 21:22:44.745769   75012 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-534766 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1204 21:22:44.761584   75012 kubeadm.go:310] [bootstrap-token] Using token: 5m2kn8.vv0jgg4evfqo8hls
	I1204 21:22:44.762802   75012 out.go:235]   - Configuring RBAC rules ...
	I1204 21:22:44.762937   75012 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1204 21:22:44.770305   75012 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1204 21:22:44.787448   75012 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1204 21:22:44.799071   75012 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1204 21:22:44.809995   75012 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1204 21:22:44.818871   75012 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1204 21:22:45.078465   75012 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1204 21:22:45.505737   75012 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1204 21:22:46.080197   75012 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1204 21:22:46.082632   75012 kubeadm.go:310] 
	I1204 21:22:46.082728   75012 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1204 21:22:46.082738   75012 kubeadm.go:310] 
	I1204 21:22:46.082852   75012 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1204 21:22:46.082877   75012 kubeadm.go:310] 
	I1204 21:22:46.082913   75012 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1204 21:22:46.083002   75012 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1204 21:22:46.083084   75012 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1204 21:22:46.083094   75012 kubeadm.go:310] 
	I1204 21:22:46.083188   75012 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1204 21:22:46.083198   75012 kubeadm.go:310] 
	I1204 21:22:46.083270   75012 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1204 21:22:46.083280   75012 kubeadm.go:310] 
	I1204 21:22:46.083365   75012 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1204 21:22:46.083505   75012 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1204 21:22:46.083603   75012 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1204 21:22:46.083612   75012 kubeadm.go:310] 
	I1204 21:22:46.083722   75012 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1204 21:22:46.083831   75012 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1204 21:22:46.083844   75012 kubeadm.go:310] 
	I1204 21:22:46.083955   75012 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5m2kn8.vv0jgg4evfqo8hls \
	I1204 21:22:46.084090   75012 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 \
	I1204 21:22:46.084132   75012 kubeadm.go:310] 	--control-plane 
	I1204 21:22:46.084143   75012 kubeadm.go:310] 
	I1204 21:22:46.084271   75012 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1204 21:22:46.084285   75012 kubeadm.go:310] 
	I1204 21:22:46.084381   75012 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5m2kn8.vv0jgg4evfqo8hls \
	I1204 21:22:46.084540   75012 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 
	I1204 21:22:46.085547   75012 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
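The kubeadm init transcript above ends with the standard post-init instructions for no-preload-534766, with the API server reported healthy after roughly 5s. A minimal manual verification from inside the VM, reusing the same in-guest kubectl binary path and kubeconfig that the ssh_runner commands further down use (a sketch, not something the harness runs), would be:

  sudo /var/lib/minikube/binaries/v1.31.2/kubectl get nodes --kubeconfig=/var/lib/minikube/kubeconfig
  sudo /var/lib/minikube/binaries/v1.31.2/kubectl -n kube-system get pods --kubeconfig=/var/lib/minikube/kubeconfig

In the run itself minikube performs the equivalent checks programmatically, as the node_ready and pod_ready lines that follow show.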
	I1204 21:22:46.085585   75012 cni.go:84] Creating CNI manager for ""
	I1204 21:22:46.085601   75012 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:22:46.087147   75012 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 21:22:46.088445   75012 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 21:22:46.099655   75012 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1204 21:22:46.118054   75012 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 21:22:46.118167   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:46.118199   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-534766 minikube.k8s.io/updated_at=2024_12_04T21_22_46_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59 minikube.k8s.io/name=no-preload-534766 minikube.k8s.io/primary=true
	I1204 21:22:46.314262   75012 ops.go:34] apiserver oom_adj: -16
	I1204 21:22:46.314459   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:46.814509   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:47.315367   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:47.814575   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:48.314571   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:48.815342   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:49.315465   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:49.814618   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:49.924235   75012 kubeadm.go:1113] duration metric: took 3.806131818s to wait for elevateKubeSystemPrivileges
	I1204 21:22:49.924281   75012 kubeadm.go:394] duration metric: took 4m59.352297592s to StartCluster
	I1204 21:22:49.924304   75012 settings.go:142] acquiring lock: {Name:mk51df5708ef0b8fe125ead566b8d3e857234e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:22:49.924410   75012 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:22:49.926022   75012 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/kubeconfig: {Name:mk338cb7deb77a607d0c199d94a556bdfd19bef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:22:49.926265   75012 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.174 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 21:22:49.926337   75012 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 21:22:49.926474   75012 addons.go:69] Setting storage-provisioner=true in profile "no-preload-534766"
	I1204 21:22:49.926483   75012 config.go:182] Loaded profile config "no-preload-534766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:22:49.926496   75012 addons.go:234] Setting addon storage-provisioner=true in "no-preload-534766"
	W1204 21:22:49.926508   75012 addons.go:243] addon storage-provisioner should already be in state true
	I1204 21:22:49.926505   75012 addons.go:69] Setting default-storageclass=true in profile "no-preload-534766"
	I1204 21:22:49.926531   75012 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-534766"
	I1204 21:22:49.926546   75012 host.go:66] Checking if "no-preload-534766" exists ...
	I1204 21:22:49.926541   75012 addons.go:69] Setting metrics-server=true in profile "no-preload-534766"
	I1204 21:22:49.926576   75012 addons.go:234] Setting addon metrics-server=true in "no-preload-534766"
	W1204 21:22:49.926590   75012 addons.go:243] addon metrics-server should already be in state true
	I1204 21:22:49.926625   75012 host.go:66] Checking if "no-preload-534766" exists ...
	I1204 21:22:49.926930   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:49.926954   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:49.926970   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:49.926955   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:49.926987   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:49.927051   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:49.927780   75012 out.go:177] * Verifying Kubernetes components...
	I1204 21:22:49.929162   75012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:22:49.942741   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46577
	I1204 21:22:49.943289   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:49.943868   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:22:49.943895   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:49.944251   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:49.944864   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:49.944913   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:49.946622   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34645
	I1204 21:22:49.946621   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40019
	I1204 21:22:49.947114   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:49.947241   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:49.947744   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:22:49.947765   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:49.947882   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:22:49.947906   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:49.948103   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:49.948432   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:49.948645   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetState
	I1204 21:22:49.948791   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:49.948837   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:49.952327   75012 addons.go:234] Setting addon default-storageclass=true in "no-preload-534766"
	W1204 21:22:49.952346   75012 addons.go:243] addon default-storageclass should already be in state true
	I1204 21:22:49.952369   75012 host.go:66] Checking if "no-preload-534766" exists ...
	I1204 21:22:49.952601   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:49.952630   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:49.961451   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46229
	I1204 21:22:49.961850   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:49.962443   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:22:49.962464   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:49.962850   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:49.963027   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetState
	I1204 21:22:49.964897   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:22:49.968079   75012 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1204 21:22:49.968412   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34167
	I1204 21:22:49.968752   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34915
	I1204 21:22:49.968941   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:49.969158   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:49.969388   75012 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1204 21:22:49.969407   75012 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1204 21:22:49.969427   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:22:49.969542   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:22:49.969565   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:49.969628   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:22:49.969642   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:49.969957   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:49.970113   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetState
	I1204 21:22:49.970170   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:49.970694   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:49.970730   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:49.972032   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:22:49.973317   75012 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:22:49.973481   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:22:49.973907   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:22:49.973928   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:22:49.974221   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:22:49.974387   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:22:49.974545   75012 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:22:49.974560   75012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 21:22:49.974577   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:22:49.974673   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:22:49.974849   75012 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:22:49.977139   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:22:49.977453   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:22:49.977472   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:22:49.977620   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:22:49.977765   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:22:49.977906   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:22:49.978085   75012 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:22:50.003630   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33713
	I1204 21:22:50.004065   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:50.004600   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:22:50.004624   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:50.004954   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:50.005133   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetState
	I1204 21:22:50.006743   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:22:50.006952   75012 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 21:22:50.006969   75012 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 21:22:50.006986   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:22:50.009741   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:22:50.010114   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:22:50.010169   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:22:50.010347   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:22:50.010522   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:22:50.010699   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:22:50.010868   75012 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:22:50.114285   75012 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:22:50.136173   75012 node_ready.go:35] waiting up to 6m0s for node "no-preload-534766" to be "Ready" ...
	I1204 21:22:50.146304   75012 node_ready.go:49] node "no-preload-534766" has status "Ready":"True"
	I1204 21:22:50.146333   75012 node_ready.go:38] duration metric: took 10.115051ms for node "no-preload-534766" to be "Ready" ...
	I1204 21:22:50.146344   75012 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:22:50.156660   75012 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:50.205793   75012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:22:50.222880   75012 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1204 21:22:50.222904   75012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1204 21:22:50.259999   75012 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1204 21:22:50.260022   75012 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1204 21:22:50.271653   75012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 21:22:50.295271   75012 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 21:22:50.295301   75012 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1204 21:22:50.371390   75012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 21:22:50.923825   75012 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:50.923850   75012 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:22:50.923889   75012 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:50.923916   75012 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:22:50.924309   75012 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:50.924319   75012 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:50.924327   75012 main.go:141] libmachine: (no-preload-534766) DBG | Closing plugin on server side
	I1204 21:22:50.924328   75012 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:50.924335   75012 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:50.924347   75012 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:50.924354   75012 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:22:50.924357   75012 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:50.924367   75012 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:22:50.924574   75012 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:50.924590   75012 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:50.926209   75012 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:50.926224   75012 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:50.926254   75012 main.go:141] libmachine: (no-preload-534766) DBG | Closing plugin on server side
	I1204 21:22:50.943266   75012 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:50.943283   75012 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:22:50.943613   75012 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:50.943626   75012 main.go:141] libmachine: (no-preload-534766) DBG | Closing plugin on server side
	I1204 21:22:50.943633   75012 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:51.434449   75012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.063018778s)
	I1204 21:22:51.434501   75012 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:51.434516   75012 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:22:51.434935   75012 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:51.434961   75012 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:51.434973   75012 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:51.434982   75012 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:22:51.434989   75012 main.go:141] libmachine: (no-preload-534766) DBG | Closing plugin on server side
	I1204 21:22:51.435279   75012 main.go:141] libmachine: (no-preload-534766) DBG | Closing plugin on server side
	I1204 21:22:51.435314   75012 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:51.435327   75012 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:51.435338   75012 addons.go:475] Verifying addon metrics-server=true in "no-preload-534766"
	I1204 21:22:51.437110   75012 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1204 21:22:51.438430   75012 addons.go:510] duration metric: took 1.51209932s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
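A rough Go sketch of the addon step recorded above: the metrics-server manifests are staged under /etc/kubernetes/addons on the node and applied with the cluster's bundled kubectl against the node-local kubeconfig. This is only an illustrative approximation; minikube drives the same command over SSH (ssh_runner.go), and the exec wrapper below is not its actual code.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the command shown in the log: sudo KUBECONFIG=... kubectl apply -f <addon manifests>.
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.31.2/kubectl", "apply",
		"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-service.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}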
	I1204 21:22:52.163208   75012 pod_ready.go:103] pod "etcd-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:54.166268   75012 pod_ready.go:103] pod "etcd-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:55.663847   75012 pod_ready.go:93] pod "etcd-no-preload-534766" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:55.663873   75012 pod_ready.go:82] duration metric: took 5.507184169s for pod "etcd-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:55.663883   75012 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:57.669991   75012 pod_ready.go:103] pod "kube-apiserver-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:58.669891   75012 pod_ready.go:93] pod "kube-apiserver-no-preload-534766" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:58.669913   75012 pod_ready.go:82] duration metric: took 3.006024495s for pod "kube-apiserver-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:58.669923   75012 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:58.674408   75012 pod_ready.go:93] pod "kube-controller-manager-no-preload-534766" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:58.674431   75012 pod_ready.go:82] duration metric: took 4.502433ms for pod "kube-controller-manager-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:58.674441   75012 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:58.678736   75012 pod_ready.go:93] pod "kube-scheduler-no-preload-534766" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:58.678761   75012 pod_ready.go:82] duration metric: took 4.313122ms for pod "kube-scheduler-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:58.678771   75012 pod_ready.go:39] duration metric: took 8.532413995s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:22:58.678791   75012 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:22:58.678847   75012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:22:58.695623   75012 api_server.go:72] duration metric: took 8.769328765s to wait for apiserver process to appear ...
	I1204 21:22:58.695654   75012 api_server.go:88] waiting for apiserver healthz status ...
	I1204 21:22:58.695675   75012 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I1204 21:22:58.699892   75012 api_server.go:279] https://192.168.61.174:8443/healthz returned 200:
	ok
	I1204 21:22:58.700759   75012 api_server.go:141] control plane version: v1.31.2
	I1204 21:22:58.700776   75012 api_server.go:131] duration metric: took 5.115741ms to wait for apiserver health ...
	I1204 21:22:58.700783   75012 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 21:22:58.705822   75012 system_pods.go:59] 9 kube-system pods found
	I1204 21:22:58.705845   75012 system_pods.go:61] "coredns-7c65d6cfc9-9llkt" [adc8b2dd-be84-4314-ae3c-cfe94cc78489] Running
	I1204 21:22:58.705850   75012 system_pods.go:61] "coredns-7c65d6cfc9-zq88f" [b4b818bf-71d4-4522-8d3f-15c878eb7e37] Running
	I1204 21:22:58.705854   75012 system_pods.go:61] "etcd-no-preload-534766" [dfebd8ce-bf78-4219-a860-7e0275651a27] Running
	I1204 21:22:58.705858   75012 system_pods.go:61] "kube-apiserver-no-preload-534766" [6d8632fe-4a7d-48f0-9de5-bbc8efa027cd] Running
	I1204 21:22:58.705862   75012 system_pods.go:61] "kube-controller-manager-no-preload-534766" [1fcb311c-17ee-40ab-8126-3f9aeb565c23] Running
	I1204 21:22:58.705865   75012 system_pods.go:61] "kube-proxy-z2n69" [ea030ab5-1808-4037-b153-e751d66f3882] Running
	I1204 21:22:58.705870   75012 system_pods.go:61] "kube-scheduler-no-preload-534766" [ee51023a-795d-49f9-ae03-535038decf43] Running
	I1204 21:22:58.705876   75012 system_pods.go:61] "metrics-server-6867b74b74-24lj8" [1e4467c4-301a-4820-ab89-e1f0ba78f62d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:22:58.705883   75012 system_pods.go:61] "storage-provisioner" [38fa420a-4372-41b4-9853-64796baa65d9] Running
	I1204 21:22:58.705888   75012 system_pods.go:74] duration metric: took 5.100414ms to wait for pod list to return data ...
	I1204 21:22:58.705897   75012 default_sa.go:34] waiting for default service account to be created ...
	I1204 21:22:58.708729   75012 default_sa.go:45] found service account: "default"
	I1204 21:22:58.708746   75012 default_sa.go:55] duration metric: took 2.844325ms for default service account to be created ...
	I1204 21:22:58.708753   75012 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 21:22:58.713584   75012 system_pods.go:86] 9 kube-system pods found
	I1204 21:22:58.713605   75012 system_pods.go:89] "coredns-7c65d6cfc9-9llkt" [adc8b2dd-be84-4314-ae3c-cfe94cc78489] Running
	I1204 21:22:58.713610   75012 system_pods.go:89] "coredns-7c65d6cfc9-zq88f" [b4b818bf-71d4-4522-8d3f-15c878eb7e37] Running
	I1204 21:22:58.713614   75012 system_pods.go:89] "etcd-no-preload-534766" [dfebd8ce-bf78-4219-a860-7e0275651a27] Running
	I1204 21:22:58.713617   75012 system_pods.go:89] "kube-apiserver-no-preload-534766" [6d8632fe-4a7d-48f0-9de5-bbc8efa027cd] Running
	I1204 21:22:58.713623   75012 system_pods.go:89] "kube-controller-manager-no-preload-534766" [1fcb311c-17ee-40ab-8126-3f9aeb565c23] Running
	I1204 21:22:58.713627   75012 system_pods.go:89] "kube-proxy-z2n69" [ea030ab5-1808-4037-b153-e751d66f3882] Running
	I1204 21:22:58.713630   75012 system_pods.go:89] "kube-scheduler-no-preload-534766" [ee51023a-795d-49f9-ae03-535038decf43] Running
	I1204 21:22:58.713636   75012 system_pods.go:89] "metrics-server-6867b74b74-24lj8" [1e4467c4-301a-4820-ab89-e1f0ba78f62d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:22:58.713640   75012 system_pods.go:89] "storage-provisioner" [38fa420a-4372-41b4-9853-64796baa65d9] Running
	I1204 21:22:58.713649   75012 system_pods.go:126] duration metric: took 4.892413ms to wait for k8s-apps to be running ...
	I1204 21:22:58.713655   75012 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 21:22:58.713694   75012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:22:58.727642   75012 system_svc.go:56] duration metric: took 13.980011ms WaitForService to wait for kubelet
	I1204 21:22:58.727667   75012 kubeadm.go:582] duration metric: took 8.80137456s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 21:22:58.727683   75012 node_conditions.go:102] verifying NodePressure condition ...
	I1204 21:22:58.730401   75012 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 21:22:58.730424   75012 node_conditions.go:123] node cpu capacity is 2
	I1204 21:22:58.730437   75012 node_conditions.go:105] duration metric: took 2.748662ms to run NodePressure ...
	I1204 21:22:58.730450   75012 start.go:241] waiting for startup goroutines ...
	I1204 21:22:58.730460   75012 start.go:246] waiting for cluster config update ...
	I1204 21:22:58.730472   75012 start.go:255] writing updated cluster config ...
	I1204 21:22:58.730773   75012 ssh_runner.go:195] Run: rm -f paused
	I1204 21:22:58.776977   75012 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1204 21:22:58.778544   75012 out.go:177] * Done! kubectl is now configured to use "no-preload-534766" cluster and "default" namespace by default
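The start sequence above gates success on the node and each control-plane pod reporting Ready, then on a direct GET to the apiserver's /healthz endpoint returning 200 before the cluster is declared usable. A minimal sketch of that final health poll, assuming the endpoint from this log and skipping TLS verification for brevity (minikube's own check uses the cluster CA and kubeconfig credentials, so the transport here is an assumption of the sketch):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Poll https://<apiserver>/healthz until it answers 200, as api_server.go does in the log above.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.61.174:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("apiserver never became healthy")
}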
	I1204 21:23:04.631416   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:23:04.631710   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:23:04.631725   75464 kubeadm.go:310] 
	I1204 21:23:04.631799   75464 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1204 21:23:04.631878   75464 kubeadm.go:310] 		timed out waiting for the condition
	I1204 21:23:04.631890   75464 kubeadm.go:310] 
	I1204 21:23:04.631961   75464 kubeadm.go:310] 	This error is likely caused by:
	I1204 21:23:04.632036   75464 kubeadm.go:310] 		- The kubelet is not running
	I1204 21:23:04.632198   75464 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1204 21:23:04.632215   75464 kubeadm.go:310] 
	I1204 21:23:04.632383   75464 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1204 21:23:04.632461   75464 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1204 21:23:04.632516   75464 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1204 21:23:04.632528   75464 kubeadm.go:310] 
	I1204 21:23:04.632675   75464 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1204 21:23:04.632796   75464 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1204 21:23:04.632815   75464 kubeadm.go:310] 
	I1204 21:23:04.632974   75464 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1204 21:23:04.633074   75464 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1204 21:23:04.633176   75464 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1204 21:23:04.633304   75464 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1204 21:23:04.633322   75464 kubeadm.go:310] 
	I1204 21:23:04.634981   75464 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 21:23:04.635061   75464 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1204 21:23:04.635118   75464 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1204 21:23:04.635222   75464 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1204 21:23:04.635272   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1204 21:23:05.103010   75464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:23:05.116784   75464 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:23:05.126269   75464 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:23:05.126290   75464 kubeadm.go:157] found existing configuration files:
	
	I1204 21:23:05.126331   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:23:05.134867   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:23:05.134919   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:23:05.143682   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:23:05.151701   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:23:05.151766   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:23:05.160033   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:23:05.168125   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:23:05.168175   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:23:05.176976   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:23:05.185549   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:23:05.185592   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
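The stale-config pass above keeps a kubeconfig under /etc/kubernetes only if it already references the expected control-plane endpoint, and removes it otherwise so the retried kubeadm init can regenerate it. A small illustrative sketch of that check, assuming the same endpoint and file list shown in the log; it is not minikube's implementation:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
	for _, f := range files {
		path := "/etc/kubernetes/" + f
		// grep exits non-zero when the endpoint is absent or the file is missing,
		// which is exactly the "may not be in ... - will remove" case in the log.
		if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
			fmt.Printf("%s does not reference %s, removing\n", path, endpoint)
			_ = exec.Command("sudo", "rm", "-f", path).Run()
		}
	}
}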
	I1204 21:23:05.194156   75464 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 21:23:05.394966   75464 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 21:25:01.433781   75464 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1204 21:25:01.433941   75464 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1204 21:25:01.434011   75464 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1204 21:25:01.434069   75464 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 21:25:01.434170   75464 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 21:25:01.434315   75464 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 21:25:01.434431   75464 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1204 21:25:01.434514   75464 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 21:25:01.436334   75464 out.go:235]   - Generating certificates and keys ...
	I1204 21:25:01.436408   75464 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 21:25:01.436482   75464 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 21:25:01.436550   75464 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1204 21:25:01.436644   75464 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1204 21:25:01.436745   75464 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1204 21:25:01.436819   75464 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1204 21:25:01.436885   75464 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1204 21:25:01.436942   75464 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1204 21:25:01.437004   75464 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1204 21:25:01.437068   75464 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1204 21:25:01.437101   75464 kubeadm.go:310] [certs] Using the existing "sa" key
	I1204 21:25:01.437150   75464 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 21:25:01.437193   75464 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 21:25:01.437239   75464 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 21:25:01.437309   75464 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 21:25:01.437370   75464 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 21:25:01.437458   75464 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 21:25:01.437568   75464 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 21:25:01.437636   75464 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 21:25:01.437701   75464 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 21:25:01.439149   75464 out.go:235]   - Booting up control plane ...
	I1204 21:25:01.439251   75464 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 21:25:01.439347   75464 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 21:25:01.439457   75464 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 21:25:01.439531   75464 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 21:25:01.439672   75464 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1204 21:25:01.439736   75464 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1204 21:25:01.439798   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:25:01.439966   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:25:01.440044   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:25:01.440205   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:25:01.440259   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:25:01.440487   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:25:01.440578   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:25:01.440768   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:25:01.440835   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:25:01.440991   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:25:01.441006   75464 kubeadm.go:310] 
	I1204 21:25:01.441043   75464 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1204 21:25:01.441078   75464 kubeadm.go:310] 		timed out waiting for the condition
	I1204 21:25:01.441084   75464 kubeadm.go:310] 
	I1204 21:25:01.441114   75464 kubeadm.go:310] 	This error is likely caused by:
	I1204 21:25:01.441143   75464 kubeadm.go:310] 		- The kubelet is not running
	I1204 21:25:01.441233   75464 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1204 21:25:01.441242   75464 kubeadm.go:310] 
	I1204 21:25:01.441335   75464 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1204 21:25:01.441369   75464 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1204 21:25:01.441403   75464 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1204 21:25:01.441410   75464 kubeadm.go:310] 
	I1204 21:25:01.441503   75464 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1204 21:25:01.441602   75464 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1204 21:25:01.441610   75464 kubeadm.go:310] 
	I1204 21:25:01.441705   75464 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1204 21:25:01.441779   75464 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1204 21:25:01.441857   75464 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1204 21:25:01.441934   75464 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1204 21:25:01.441961   75464 kubeadm.go:310] 
	I1204 21:25:01.442011   75464 kubeadm.go:394] duration metric: took 8m2.105750462s to StartCluster
	I1204 21:25:01.442050   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:25:01.442119   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:25:01.484552   75464 cri.go:89] found id: ""
	I1204 21:25:01.484582   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.484606   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:25:01.484614   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:25:01.484681   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:25:01.517972   75464 cri.go:89] found id: ""
	I1204 21:25:01.517999   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.518007   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:25:01.518013   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:25:01.518078   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:25:01.555068   75464 cri.go:89] found id: ""
	I1204 21:25:01.555096   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.555104   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:25:01.555110   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:25:01.555163   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:25:01.595425   75464 cri.go:89] found id: ""
	I1204 21:25:01.595456   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.595478   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:25:01.595486   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:25:01.595553   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:25:01.634608   75464 cri.go:89] found id: ""
	I1204 21:25:01.634638   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.634648   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:25:01.634656   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:25:01.634721   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:25:01.668685   75464 cri.go:89] found id: ""
	I1204 21:25:01.668724   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.668737   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:25:01.668746   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:25:01.668810   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:25:01.701497   75464 cri.go:89] found id: ""
	I1204 21:25:01.701531   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.701543   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:25:01.701550   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:25:01.701612   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:25:01.735347   75464 cri.go:89] found id: ""
	I1204 21:25:01.735401   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.735413   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:25:01.735429   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:25:01.735448   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:25:01.785951   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:25:01.785994   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:25:01.800795   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:25:01.800822   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:25:01.878636   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:25:01.878663   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:25:01.878675   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:25:01.982526   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:25:01.982563   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1204 21:25:02.037006   75464 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1204 21:25:02.037075   75464 out.go:270] * 
	W1204 21:25:02.037160   75464 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1204 21:25:02.037181   75464 out.go:270] * 
	W1204 21:25:02.038380   75464 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 21:25:02.041871   75464 out.go:201] 
	W1204 21:25:02.042973   75464 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1204 21:25:02.043035   75464 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1204 21:25:02.043065   75464 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1204 21:25:02.044498   75464 out.go:201] 
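When the init attempt fails, the log above records a diagnostic sweep: for each expected control-plane component, crictl is queried for matching containers (all empty here), and kubelet, dmesg, describe-nodes, CRI-O, and container-status output is gathered. A compact sketch of that container sweep, reusing the crictl invocation shown in the log; the helper itself is an illustration, not the code in cri.go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager"}
	for _, name := range components {
		// Same command pattern as in the log: sudo crictl ps -a --quiet --name=<component>.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("%s: no container was found\n", name)
		} else {
			fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
		}
	}
}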
	
	
	==> CRI-O <==
	Dec 04 21:25:03 old-k8s-version-082859 crio[624]: time="2024-12-04 21:25:03.814253279Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347503814227026,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cea19f2d-f6bf-47f7-9217-71a100c1430f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:25:03 old-k8s-version-082859 crio[624]: time="2024-12-04 21:25:03.814738429Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ad480aa6-0c64-4f13-8b6d-58b9935518bf name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:25:03 old-k8s-version-082859 crio[624]: time="2024-12-04 21:25:03.814800978Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ad480aa6-0c64-4f13-8b6d-58b9935518bf name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:25:03 old-k8s-version-082859 crio[624]: time="2024-12-04 21:25:03.814833687Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ad480aa6-0c64-4f13-8b6d-58b9935518bf name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:25:03 old-k8s-version-082859 crio[624]: time="2024-12-04 21:25:03.841896206Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0db6d5dc-0b56-4626-a35b-b6c54c4242f9 name=/runtime.v1.RuntimeService/Version
	Dec 04 21:25:03 old-k8s-version-082859 crio[624]: time="2024-12-04 21:25:03.841966788Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0db6d5dc-0b56-4626-a35b-b6c54c4242f9 name=/runtime.v1.RuntimeService/Version
	Dec 04 21:25:03 old-k8s-version-082859 crio[624]: time="2024-12-04 21:25:03.843567870Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=08d33983-b693-4dca-8eb0-e9387e0124b8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:25:03 old-k8s-version-082859 crio[624]: time="2024-12-04 21:25:03.843936153Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347503843919281,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=08d33983-b693-4dca-8eb0-e9387e0124b8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:25:03 old-k8s-version-082859 crio[624]: time="2024-12-04 21:25:03.844475251Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b94de25e-0959-4d55-a976-25e7551a6bac name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:25:03 old-k8s-version-082859 crio[624]: time="2024-12-04 21:25:03.844536240Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b94de25e-0959-4d55-a976-25e7551a6bac name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:25:03 old-k8s-version-082859 crio[624]: time="2024-12-04 21:25:03.844572710Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b94de25e-0959-4d55-a976-25e7551a6bac name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:25:03 old-k8s-version-082859 crio[624]: time="2024-12-04 21:25:03.871243745Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1003a047-8d04-4b07-87cc-283b8387e650 name=/runtime.v1.RuntimeService/Version
	Dec 04 21:25:03 old-k8s-version-082859 crio[624]: time="2024-12-04 21:25:03.871312027Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1003a047-8d04-4b07-87cc-283b8387e650 name=/runtime.v1.RuntimeService/Version
	Dec 04 21:25:03 old-k8s-version-082859 crio[624]: time="2024-12-04 21:25:03.872389014Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=055808d2-7a50-45f4-849e-f4c032520cfc name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:25:03 old-k8s-version-082859 crio[624]: time="2024-12-04 21:25:03.872717094Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347503872699797,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=055808d2-7a50-45f4-849e-f4c032520cfc name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:25:03 old-k8s-version-082859 crio[624]: time="2024-12-04 21:25:03.873297244Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bdeef3ef-3754-4a2e-9c77-0a8ce8232c98 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:25:03 old-k8s-version-082859 crio[624]: time="2024-12-04 21:25:03.873356016Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bdeef3ef-3754-4a2e-9c77-0a8ce8232c98 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:25:03 old-k8s-version-082859 crio[624]: time="2024-12-04 21:25:03.873395424Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=bdeef3ef-3754-4a2e-9c77-0a8ce8232c98 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:25:03 old-k8s-version-082859 crio[624]: time="2024-12-04 21:25:03.909417996Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d283a9b2-2470-4b90-8b70-79b608edcf3c name=/runtime.v1.RuntimeService/Version
	Dec 04 21:25:03 old-k8s-version-082859 crio[624]: time="2024-12-04 21:25:03.909513770Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d283a9b2-2470-4b90-8b70-79b608edcf3c name=/runtime.v1.RuntimeService/Version
	Dec 04 21:25:03 old-k8s-version-082859 crio[624]: time="2024-12-04 21:25:03.910625913Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e2a3fce8-87d1-4164-997e-95285ce42dcd name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:25:03 old-k8s-version-082859 crio[624]: time="2024-12-04 21:25:03.910962660Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347503910946880,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e2a3fce8-87d1-4164-997e-95285ce42dcd name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:25:03 old-k8s-version-082859 crio[624]: time="2024-12-04 21:25:03.911622524Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ad9b6378-ea81-4b36-8234-ea00096fc214 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:25:03 old-k8s-version-082859 crio[624]: time="2024-12-04 21:25:03.911671907Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ad9b6378-ea81-4b36-8234-ea00096fc214 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:25:03 old-k8s-version-082859 crio[624]: time="2024-12-04 21:25:03.911702060Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ad9b6378-ea81-4b36-8234-ea00096fc214 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 4 21:16] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.063766] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039535] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.986133] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.929597] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.577556] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +11.172483] systemd-fstab-generator[551]: Ignoring "noauto" option for root device
	[  +0.056938] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054201] systemd-fstab-generator[563]: Ignoring "noauto" option for root device
	[  +0.210243] systemd-fstab-generator[577]: Ignoring "noauto" option for root device
	[  +0.123977] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.239654] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +6.083108] systemd-fstab-generator[875]: Ignoring "noauto" option for root device
	[  +0.059229] kauditd_printk_skb: 130 callbacks suppressed
	[Dec 4 21:17] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[  +9.469298] kauditd_printk_skb: 46 callbacks suppressed
	[Dec 4 21:21] systemd-fstab-generator[5120]: Ignoring "noauto" option for root device
	[Dec 4 21:23] systemd-fstab-generator[5401]: Ignoring "noauto" option for root device
	[  +0.064984] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 21:25:04 up 8 min,  0 users,  load average: 0.08, 0.13, 0.09
	Linux old-k8s-version-082859 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Dec 04 21:25:01 old-k8s-version-082859 kubelet[5581]: net.cgoLookupIPCNAME(0x48ab5d6, 0x3, 0xc00045f560, 0x1f, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	Dec 04 21:25:01 old-k8s-version-082859 kubelet[5581]:         /usr/local/go/src/net/cgo_unix.go:161 +0x16b
	Dec 04 21:25:01 old-k8s-version-082859 kubelet[5581]: net.cgoIPLookup(0xc0002990e0, 0x48ab5d6, 0x3, 0xc00045f560, 0x1f)
	Dec 04 21:25:01 old-k8s-version-082859 kubelet[5581]:         /usr/local/go/src/net/cgo_unix.go:218 +0x67
	Dec 04 21:25:01 old-k8s-version-082859 kubelet[5581]: created by net.cgoLookupIP
	Dec 04 21:25:01 old-k8s-version-082859 kubelet[5581]:         /usr/local/go/src/net/cgo_unix.go:228 +0xc7
	Dec 04 21:25:01 old-k8s-version-082859 kubelet[5581]: goroutine 122 [runnable]:
	Dec 04 21:25:01 old-k8s-version-082859 kubelet[5581]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000896820, 0x1, 0x0, 0x0, 0x0, 0x0)
	Dec 04 21:25:01 old-k8s-version-082859 kubelet[5581]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Dec 04 21:25:01 old-k8s-version-082859 kubelet[5581]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc0003344e0, 0x0, 0x0)
	Dec 04 21:25:01 old-k8s-version-082859 kubelet[5581]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Dec 04 21:25:01 old-k8s-version-082859 kubelet[5581]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc00092c540)
	Dec 04 21:25:01 old-k8s-version-082859 kubelet[5581]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Dec 04 21:25:01 old-k8s-version-082859 kubelet[5581]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Dec 04 21:25:01 old-k8s-version-082859 kubelet[5581]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Dec 04 21:25:01 old-k8s-version-082859 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Dec 04 21:25:01 old-k8s-version-082859 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 04 21:25:01 old-k8s-version-082859 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Dec 04 21:25:01 old-k8s-version-082859 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Dec 04 21:25:01 old-k8s-version-082859 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 04 21:25:02 old-k8s-version-082859 kubelet[5638]: I1204 21:25:02.093385    5638 server.go:416] Version: v1.20.0
	Dec 04 21:25:02 old-k8s-version-082859 kubelet[5638]: I1204 21:25:02.093817    5638 server.go:837] Client rotation is on, will bootstrap in background
	Dec 04 21:25:02 old-k8s-version-082859 kubelet[5638]: I1204 21:25:02.096462    5638 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Dec 04 21:25:02 old-k8s-version-082859 kubelet[5638]: W1204 21:25:02.098420    5638 manager.go:159] Cannot detect current cgroup on cgroup v2
	Dec 04 21:25:02 old-k8s-version-082859 kubelet[5638]: I1204 21:25:02.098468    5638 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-082859 -n old-k8s-version-082859
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-082859 -n old-k8s-version-082859: exit status 2 (231.364735ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-082859" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (764.25s)
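Triage note: the post-mortem above determines the apiserver state with `out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-082859`, which exited with status 2 and printed "Stopped", so the harness skipped the kubectl post-mortem. Below is a minimal Go sketch of running that same status probe by hand outside the harness; the binary path, profile name, and helper name are illustrative and not taken from the test suite.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// apiServerState runs `minikube status --format={{.APIServer}}` for the given
// profile and returns the reported state (e.g. "Running" or "Stopped").
// Hypothetical triage helper; not part of the test suite.
func apiServerState(minikubeBin, profile string) (string, error) {
	out, err := exec.Command(minikubeBin, "status",
		"--format={{.APIServer}}", "-p", profile).Output()
	// Output() still returns the captured stdout when the command exits
	// non-zero, which matches the report showing "Stopped" alongside exit status 2.
	return strings.TrimSpace(string(out)), err
}

func main() {
	state, _ := apiServerState("out/minikube-linux-amd64", "old-k8s-version-082859")
	fmt.Println("apiserver:", state)
}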

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-439360 -n default-k8s-diff-port-439360
E1204 21:12:53.182928   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/calico-272234/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-439360 -n default-k8s-diff-port-439360: exit status 3 (3.167338248s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1204 21:12:53.395678   75617 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.171:22: connect: no route to host
	E1204 21:12:53.395706   75617 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.171:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-439360 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1204 21:12:59.085118   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-439360 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152076775s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.171:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-439360 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-439360 -n default-k8s-diff-port-439360
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-439360 -n default-k8s-diff-port-439360: exit status 3 (3.063408238s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1204 21:13:02.611732   75697 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.171:22: connect: no route to host
	E1204 21:13:02.611776   75697 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.171:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-439360" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)
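Triage note: the assertion that fails here is the post-stop host check. `status --format={{.Host}}` reported "Error" because the status probe could not SSH to 192.168.50.171:22 (no route to host), while the test expects "Stopped". A rough sketch of that kind of post-stop wait is below, assuming a hypothetical helper name and timeout; the real assertion lives in start_stop_delete_test.go.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForHostStopped polls `minikube status --format={{.Host}}` until the
// profile reports "Stopped" or the timeout elapses. Illustrative only.
func waitForHostStopped(minikubeBin, profile string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, _ := exec.Command(minikubeBin, "status",
			"--format={{.Host}}", "-p", profile).Output()
		// "Error" here typically means the probe could not reach the machine
		// at all (e.g. the SSH dial failed), as in the log above.
		if strings.TrimSpace(string(out)) == "Stopped" {
			return nil
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("profile %q did not reach Host=Stopped within %v", profile, timeout)
}

func main() {
	if err := waitForHostStopped("out/minikube-linux-amd64", "default-k8s-diff-port-439360", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}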

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1204 21:21:31.244354   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/calico-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:21:47.225244   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/bridge-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:22:26.275869   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/functional-763517/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-566991 -n embed-certs-566991
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-12-04 21:30:11.276904681 +0000 UTC m=+5860.376633105
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-566991 -n embed-certs-566991
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-566991 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-566991 logs -n 25: (1.958998434s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-272234 sudo                                  | bridge-272234                | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo                                  | bridge-272234                | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo find                             | bridge-272234                | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo crio                             | bridge-272234                | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-272234                                       | bridge-272234                | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	| start   | -p embed-certs-566991                                  | embed-certs-566991           | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p pause-998149                                        | pause-998149                 | jenkins | v1.34.0 | 04 Dec 24 21:08 UTC | 04 Dec 24 21:08 UTC |
	| delete  | -p                                                     | disable-driver-mounts-455559 | jenkins | v1.34.0 | 04 Dec 24 21:08 UTC | 04 Dec 24 21:08 UTC |
	|         | disable-driver-mounts-455559                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-439360 | jenkins | v1.34.0 | 04 Dec 24 21:08 UTC | 04 Dec 24 21:10 UTC |
	|         | default-k8s-diff-port-439360                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-534766             | no-preload-534766            | jenkins | v1.34.0 | 04 Dec 24 21:08 UTC | 04 Dec 24 21:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-534766                                   | no-preload-534766            | jenkins | v1.34.0 | 04 Dec 24 21:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-566991            | embed-certs-566991           | jenkins | v1.34.0 | 04 Dec 24 21:09 UTC | 04 Dec 24 21:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-566991                                  | embed-certs-566991           | jenkins | v1.34.0 | 04 Dec 24 21:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-439360  | default-k8s-diff-port-439360 | jenkins | v1.34.0 | 04 Dec 24 21:10 UTC | 04 Dec 24 21:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-439360 | jenkins | v1.34.0 | 04 Dec 24 21:10 UTC |                     |
	|         | default-k8s-diff-port-439360                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-082859        | old-k8s-version-082859       | jenkins | v1.34.0 | 04 Dec 24 21:10 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-534766                  | no-preload-534766            | jenkins | v1.34.0 | 04 Dec 24 21:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-534766                                   | no-preload-534766            | jenkins | v1.34.0 | 04 Dec 24 21:11 UTC | 04 Dec 24 21:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-566991                 | embed-certs-566991           | jenkins | v1.34.0 | 04 Dec 24 21:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-566991                                  | embed-certs-566991           | jenkins | v1.34.0 | 04 Dec 24 21:11 UTC | 04 Dec 24 21:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-082859                              | old-k8s-version-082859       | jenkins | v1.34.0 | 04 Dec 24 21:12 UTC | 04 Dec 24 21:12 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-082859             | old-k8s-version-082859       | jenkins | v1.34.0 | 04 Dec 24 21:12 UTC | 04 Dec 24 21:12 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-082859                              | old-k8s-version-082859       | jenkins | v1.34.0 | 04 Dec 24 21:12 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-439360       | default-k8s-diff-port-439360 | jenkins | v1.34.0 | 04 Dec 24 21:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-439360 | jenkins | v1.34.0 | 04 Dec 24 21:13 UTC | 04 Dec 24 21:22 UTC |
	|         | default-k8s-diff-port-439360                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 21:13:02
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 21:13:02.655619   75746 out.go:345] Setting OutFile to fd 1 ...
	I1204 21:13:02.655710   75746 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 21:13:02.655718   75746 out.go:358] Setting ErrFile to fd 2...
	I1204 21:13:02.655723   75746 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 21:13:02.655904   75746 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19985-10581/.minikube/bin
	I1204 21:13:02.656414   75746 out.go:352] Setting JSON to false
	I1204 21:13:02.657264   75746 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6933,"bootTime":1733339850,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 21:13:02.657344   75746 start.go:139] virtualization: kvm guest
	I1204 21:13:02.659898   75746 out.go:177] * [default-k8s-diff-port-439360] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1204 21:13:02.661012   75746 notify.go:220] Checking for updates...
	I1204 21:13:02.661028   75746 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 21:13:02.662162   75746 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 21:13:02.663271   75746 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:13:02.664514   75746 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 21:13:02.665529   75746 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1204 21:13:02.666701   75746 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 21:13:02.668263   75746 config.go:182] Loaded profile config "default-k8s-diff-port-439360": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:13:02.668646   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:13:02.668709   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:13:02.683257   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37479
	I1204 21:13:02.683722   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:13:02.684324   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:13:02.684360   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:13:02.684680   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:13:02.684851   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:13:02.685048   75746 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 21:13:02.685299   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:13:02.685328   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:13:02.699267   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40025
	I1204 21:13:02.699662   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:13:02.700044   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:13:02.700063   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:13:02.700339   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:13:02.700502   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:13:02.730706   75746 out.go:177] * Using the kvm2 driver based on existing profile
	I1204 21:13:02.731942   75746 start.go:297] selected driver: kvm2
	I1204 21:13:02.731957   75746 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-439360 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-439360 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.171 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:13:02.732071   75746 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 21:13:02.732753   75746 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 21:13:02.732853   75746 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19985-10581/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1204 21:13:02.748280   75746 install.go:137] /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1204 21:13:02.748697   75746 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 21:13:02.748732   75746 cni.go:84] Creating CNI manager for ""
	I1204 21:13:02.748788   75746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:13:02.748838   75746 start.go:340] cluster config:
	{Name:default-k8s-diff-port-439360 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-439360 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.171 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:13:02.748971   75746 iso.go:125] acquiring lock: {Name:mk5fb0f3f6da76e6cd812291a551e1592ef2c232 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 21:13:02.751358   75746 out.go:177] * Starting "default-k8s-diff-port-439360" primary control-plane node in "default-k8s-diff-port-439360" cluster
	I1204 21:13:03.539616   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:02.752513   75746 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 21:13:02.752549   75746 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1204 21:13:02.752560   75746 cache.go:56] Caching tarball of preloaded images
	I1204 21:13:02.752626   75746 preload.go:172] Found /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 21:13:02.752637   75746 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 21:13:02.752726   75746 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/config.json ...
	I1204 21:13:02.752901   75746 start.go:360] acquireMachinesLock for default-k8s-diff-port-439360: {Name:mkf124e8b45170ae95981b24944344de6899c5b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 21:13:09.623601   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:12.691589   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:18.771784   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:21.843699   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:27.923631   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:30.995665   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:37.075628   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:40.147824   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:46.227603   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:49.299635   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:55.379675   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:58.451727   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:04.531657   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:07.603570   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:13.683599   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:16.755604   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:22.835628   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:25.907600   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:31.987633   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:35.059714   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:41.139700   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:44.211695   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:50.291687   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:53.363678   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:59.443630   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:02.515651   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:08.595690   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:11.667672   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:17.747590   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:20.819699   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:26.899677   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:29.971649   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:36.051731   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:39.123728   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:45.203625   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:48.275712   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:54.355623   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:57.427671   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:16:03.507649   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:16:06.579624   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:16:09.584575   75137 start.go:364] duration metric: took 4m27.4731498s to acquireMachinesLock for "embed-certs-566991"
	I1204 21:16:09.584639   75137 start.go:96] Skipping create...Using existing machine configuration
	I1204 21:16:09.584651   75137 fix.go:54] fixHost starting: 
	I1204 21:16:09.584970   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:09.585018   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:09.600429   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33355
	I1204 21:16:09.600893   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:09.601299   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:09.601322   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:09.601748   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:09.601944   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:09.602098   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetState
	I1204 21:16:09.603776   75137 fix.go:112] recreateIfNeeded on embed-certs-566991: state=Stopped err=<nil>
	I1204 21:16:09.603821   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	W1204 21:16:09.603991   75137 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 21:16:09.605822   75137 out.go:177] * Restarting existing kvm2 VM for "embed-certs-566991" ...
	I1204 21:16:09.606942   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Start
	I1204 21:16:09.607117   75137 main.go:141] libmachine: (embed-certs-566991) Ensuring networks are active...
	I1204 21:16:09.607926   75137 main.go:141] libmachine: (embed-certs-566991) Ensuring network default is active
	I1204 21:16:09.608276   75137 main.go:141] libmachine: (embed-certs-566991) Ensuring network mk-embed-certs-566991 is active
	I1204 21:16:09.608593   75137 main.go:141] libmachine: (embed-certs-566991) Getting domain xml...
	I1204 21:16:09.609171   75137 main.go:141] libmachine: (embed-certs-566991) Creating domain...
	I1204 21:16:10.794377   75137 main.go:141] libmachine: (embed-certs-566991) Waiting to get IP...
	I1204 21:16:10.795237   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:10.795646   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:10.795708   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:10.795615   76397 retry.go:31] will retry after 263.432891ms: waiting for machine to come up
	I1204 21:16:11.061505   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:11.062003   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:11.062025   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:11.061954   76397 retry.go:31] will retry after 341.684416ms: waiting for machine to come up
	I1204 21:16:11.405560   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:11.405994   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:11.406017   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:11.405951   76397 retry.go:31] will retry after 341.63707ms: waiting for machine to come up
	I1204 21:16:11.749439   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:11.749826   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:11.749850   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:11.749778   76397 retry.go:31] will retry after 490.222458ms: waiting for machine to come up
	I1204 21:16:09.581932   75012 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 21:16:09.581966   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetMachineName
	I1204 21:16:09.582325   75012 buildroot.go:166] provisioning hostname "no-preload-534766"
	I1204 21:16:09.582349   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetMachineName
	I1204 21:16:09.582554   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:16:09.584435   75012 machine.go:96] duration metric: took 4m37.423343939s to provisionDockerMachine
	I1204 21:16:09.584470   75012 fix.go:56] duration metric: took 4m37.445106567s for fixHost
	I1204 21:16:09.584480   75012 start.go:83] releasing machines lock for "no-preload-534766", held for 4m37.445131562s
	W1204 21:16:09.584500   75012 start.go:714] error starting host: provision: host is not running
	W1204 21:16:09.584581   75012 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1204 21:16:09.584594   75012 start.go:729] Will try again in 5 seconds ...
	I1204 21:16:12.241487   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:12.241955   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:12.241989   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:12.241914   76397 retry.go:31] will retry after 627.236105ms: waiting for machine to come up
	I1204 21:16:12.870753   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:12.871242   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:12.871274   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:12.871189   76397 retry.go:31] will retry after 948.655869ms: waiting for machine to come up
	I1204 21:16:13.821128   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:13.821501   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:13.821531   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:13.821464   76397 retry.go:31] will retry after 864.328477ms: waiting for machine to come up
	I1204 21:16:14.686831   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:14.687290   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:14.687327   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:14.687226   76397 retry.go:31] will retry after 1.040036387s: waiting for machine to come up
	I1204 21:16:15.729503   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:15.729908   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:15.729938   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:15.729856   76397 retry.go:31] will retry after 1.509456429s: waiting for machine to come up
	I1204 21:16:14.587018   75012 start.go:360] acquireMachinesLock for no-preload-534766: {Name:mkf124e8b45170ae95981b24944344de6899c5b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 21:16:17.240459   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:17.240912   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:17.240936   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:17.240859   76397 retry.go:31] will retry after 2.13583357s: waiting for machine to come up
	I1204 21:16:19.379267   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:19.379766   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:19.379792   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:19.379718   76397 retry.go:31] will retry after 2.09795045s: waiting for machine to come up
	I1204 21:16:21.478897   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:21.479356   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:21.479410   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:21.479302   76397 retry.go:31] will retry after 2.903986335s: waiting for machine to come up
	I1204 21:16:24.386386   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:24.386732   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:24.386760   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:24.386707   76397 retry.go:31] will retry after 2.772485684s: waiting for machine to come up
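The repeated "will retry after …: waiting for machine to come up" lines above are minikube polling libvirt for the domain's DHCP-assigned IP with a growing, jittered delay. A minimal Go sketch of that poll-with-backoff pattern (helper names, timings and the lookup stub are illustrative, not minikube's actual retry.go/libmachine API):

package main

import (
    "errors"
    "fmt"
    "math/rand"
    "time"
)

// lookupIP stands in for querying the libvirt DHCP leases; it fails until
// the guest has obtained an address.
func lookupIP(attempt int) (string, error) {
    if attempt < 5 {
        return "", errors.New("unable to find current IP address of domain")
    }
    return "192.168.39.82", nil
}

func waitForIP(timeout time.Duration) (string, error) {
    deadline := time.Now().Add(timeout)
    backoff := 250 * time.Millisecond
    for attempt := 0; time.Now().Before(deadline); attempt++ {
        ip, err := lookupIP(attempt)
        if err == nil {
            return ip, nil
        }
        // Jitter the delay, then grow it, mirroring the increasing
        // "will retry after 263ms … 2.9s" intervals in the log.
        delay := backoff + time.Duration(rand.Int63n(int64(backoff)))
        fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
        time.Sleep(delay)
        if backoff < 3*time.Second {
            backoff = backoff * 3 / 2
        }
    }
    return "", errors.New("timed out waiting for machine to come up")
}

func main() {
    ip, err := waitForIP(30 * time.Second)
    fmt.Println(ip, err)
}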
	I1204 21:16:28.395920   75464 start.go:364] duration metric: took 4m6.982305139s to acquireMachinesLock for "old-k8s-version-082859"
	I1204 21:16:28.395992   75464 start.go:96] Skipping create...Using existing machine configuration
	I1204 21:16:28.396003   75464 fix.go:54] fixHost starting: 
	I1204 21:16:28.396456   75464 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:28.396521   75464 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:28.413833   75464 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32779
	I1204 21:16:28.414263   75464 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:28.414753   75464 main.go:141] libmachine: Using API Version  1
	I1204 21:16:28.414777   75464 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:28.415165   75464 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:28.415427   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:28.415603   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetState
	I1204 21:16:28.417090   75464 fix.go:112] recreateIfNeeded on old-k8s-version-082859: state=Stopped err=<nil>
	I1204 21:16:28.417125   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	W1204 21:16:28.417326   75464 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 21:16:28.419402   75464 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-082859" ...
	I1204 21:16:27.162685   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.163095   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has current primary IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.163114   75137 main.go:141] libmachine: (embed-certs-566991) Found IP for machine: 192.168.39.82
	I1204 21:16:27.163126   75137 main.go:141] libmachine: (embed-certs-566991) Reserving static IP address...
	I1204 21:16:27.163613   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "embed-certs-566991", mac: "52:54:00:98:21:6f", ip: "192.168.39.82"} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.163640   75137 main.go:141] libmachine: (embed-certs-566991) Reserved static IP address: 192.168.39.82
	I1204 21:16:27.163652   75137 main.go:141] libmachine: (embed-certs-566991) DBG | skip adding static IP to network mk-embed-certs-566991 - found existing host DHCP lease matching {name: "embed-certs-566991", mac: "52:54:00:98:21:6f", ip: "192.168.39.82"}
	I1204 21:16:27.163663   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Getting to WaitForSSH function...
	I1204 21:16:27.163670   75137 main.go:141] libmachine: (embed-certs-566991) Waiting for SSH to be available...
	I1204 21:16:27.165700   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.166004   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.166040   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.166149   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Using SSH client type: external
	I1204 21:16:27.166173   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa (-rw-------)
	I1204 21:16:27.166209   75137 main.go:141] libmachine: (embed-certs-566991) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.82 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 21:16:27.166223   75137 main.go:141] libmachine: (embed-certs-566991) DBG | About to run SSH command:
	I1204 21:16:27.166232   75137 main.go:141] libmachine: (embed-certs-566991) DBG | exit 0
	I1204 21:16:27.287234   75137 main.go:141] libmachine: (embed-certs-566991) DBG | SSH cmd err, output: <nil>: 
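The WaitForSSH step above shells out to the external ssh client with the machine's id_rsa key and runs `exit 0` until the command succeeds. A minimal sketch of the same probe done in-process with golang.org/x/crypto/ssh (the user, address and key path are taken from the log, shortened; the helper itself is hypothetical, not minikube's sshutil):

package main

import (
    "fmt"
    "os"

    "golang.org/x/crypto/ssh"
)

func probeSSH(addr, keyPath string) error {
    pemBytes, err := os.ReadFile(keyPath)
    if err != nil {
        return err
    }
    signer, err := ssh.ParsePrivateKey(pemBytes)
    if err != nil {
        return err
    }
    cfg := &ssh.ClientConfig{
        User:            "docker",
        Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
        HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
    }
    client, err := ssh.Dial("tcp", addr, cfg)
    if err != nil {
        return err
    }
    defer client.Close()
    session, err := client.NewSession()
    if err != nil {
        return err
    }
    defer session.Close()
    return session.Run("exit 0") // same no-op command the log uses as a liveness probe
}

func main() {
    err := probeSSH("192.168.39.82:22", "/home/jenkins/.minikube/machines/embed-certs-566991/id_rsa")
    fmt.Println("ssh probe:", err)
}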
	I1204 21:16:27.287599   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetConfigRaw
	I1204 21:16:27.288265   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetIP
	I1204 21:16:27.290959   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.291282   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.291308   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.291606   75137 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/config.json ...
	I1204 21:16:27.291794   75137 machine.go:93] provisionDockerMachine start ...
	I1204 21:16:27.291812   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:27.292046   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:27.294179   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.294494   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.294520   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.294637   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:27.294811   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.294971   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.295101   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:27.295267   75137 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:27.295461   75137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1204 21:16:27.295472   75137 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 21:16:27.395404   75137 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 21:16:27.395434   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetMachineName
	I1204 21:16:27.395738   75137 buildroot.go:166] provisioning hostname "embed-certs-566991"
	I1204 21:16:27.395764   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetMachineName
	I1204 21:16:27.395940   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:27.398637   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.398982   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.399008   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.399159   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:27.399332   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.399565   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.399702   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:27.399913   75137 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:27.400087   75137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1204 21:16:27.400099   75137 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-566991 && echo "embed-certs-566991" | sudo tee /etc/hostname
	I1204 21:16:27.513921   75137 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-566991
	
	I1204 21:16:27.513960   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:27.516595   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.516932   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.516955   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.517112   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:27.517313   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.517440   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.517554   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:27.517671   75137 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:27.517883   75137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1204 21:16:27.517900   75137 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-566991' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-566991/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-566991' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 21:16:27.627795   75137 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 21:16:27.627832   75137 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 21:16:27.627852   75137 buildroot.go:174] setting up certificates
	I1204 21:16:27.627861   75137 provision.go:84] configureAuth start
	I1204 21:16:27.627870   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetMachineName
	I1204 21:16:27.628196   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetIP
	I1204 21:16:27.630873   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.631211   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.631236   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.631447   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:27.633608   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.633935   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.633954   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.634104   75137 provision.go:143] copyHostCerts
	I1204 21:16:27.634160   75137 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 21:16:27.634171   75137 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 21:16:27.634238   75137 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 21:16:27.634328   75137 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 21:16:27.634337   75137 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 21:16:27.634359   75137 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 21:16:27.634416   75137 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 21:16:27.634427   75137 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 21:16:27.634457   75137 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 21:16:27.634525   75137 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.embed-certs-566991 san=[127.0.0.1 192.168.39.82 embed-certs-566991 localhost minikube]
	I1204 21:16:27.824445   75137 provision.go:177] copyRemoteCerts
	I1204 21:16:27.824535   75137 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 21:16:27.824576   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:27.827387   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.827703   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.827738   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.827937   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:27.828104   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.828282   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:27.828386   75137 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:16:27.908710   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 21:16:27.930611   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1204 21:16:27.951287   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 21:16:27.971650   75137 provision.go:87] duration metric: took 343.766934ms to configureAuth
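configureAuth above regenerates the machine's server certificate with SANs covering 127.0.0.1, 192.168.39.82, embed-certs-566991, localhost and minikube, then copies it to /etc/docker on the guest. A minimal crypto/x509 sketch of issuing such a certificate (self-signed here to stay self-contained; the real flow signs with the CA key pair under .minikube/certs):

package main

import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "math/big"
    "net"
    "os"
    "time"
)

func main() {
    key, err := rsa.GenerateKey(rand.Reader, 2048)
    if err != nil {
        panic(err)
    }
    tmpl := &x509.Certificate{
        SerialNumber: big.NewInt(1),
        Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-566991"}},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile dump
        KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        // SANs matching the log: the IPs and DNS names the server must answer for.
        IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.82")},
        DNSNames:    []string{"embed-certs-566991", "localhost", "minikube"},
    }
    der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    if err != nil {
        panic(err)
    }
    pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}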
	I1204 21:16:27.971684   75137 buildroot.go:189] setting minikube options for container-runtime
	I1204 21:16:27.971861   75137 config.go:182] Loaded profile config "embed-certs-566991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:16:27.971984   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:27.974579   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.974924   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.974964   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.975127   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:27.975316   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.975486   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.975617   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:27.975771   75137 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:27.975962   75137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1204 21:16:27.975985   75137 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 21:16:28.177596   75137 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 21:16:28.177627   75137 machine.go:96] duration metric: took 885.820166ms to provisionDockerMachine
	I1204 21:16:28.177643   75137 start.go:293] postStartSetup for "embed-certs-566991" (driver="kvm2")
	I1204 21:16:28.177657   75137 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 21:16:28.177681   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:28.177998   75137 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 21:16:28.178026   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:28.180461   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.180777   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:28.180809   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.180936   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:28.181122   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:28.181292   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:28.181430   75137 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:16:28.260618   75137 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 21:16:28.264349   75137 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 21:16:28.264371   75137 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 21:16:28.264448   75137 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 21:16:28.264543   75137 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 21:16:28.264657   75137 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 21:16:28.272916   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:16:28.294517   75137 start.go:296] duration metric: took 116.858398ms for postStartSetup
	I1204 21:16:28.294564   75137 fix.go:56] duration metric: took 18.709913535s for fixHost
	I1204 21:16:28.294589   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:28.297320   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.297628   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:28.297661   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.297869   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:28.298067   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:28.298219   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:28.298346   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:28.298544   75137 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:28.298705   75137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1204 21:16:28.298714   75137 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 21:16:28.395722   75137 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733346988.368807705
	
	I1204 21:16:28.395745   75137 fix.go:216] guest clock: 1733346988.368807705
	I1204 21:16:28.395755   75137 fix.go:229] Guest: 2024-12-04 21:16:28.368807705 +0000 UTC Remote: 2024-12-04 21:16:28.294570064 +0000 UTC m=+286.315482748 (delta=74.237641ms)
	I1204 21:16:28.395781   75137 fix.go:200] guest clock delta is within tolerance: 74.237641ms
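The guest-clock check above runs `date +%s.%N` in the VM and compares it with the host clock, resyncing only when the drift exceeds a tolerance. A minimal sketch of that comparison (the 1s tolerance is an assumption for illustration, not minikube's fix.go value):

package main

import (
    "fmt"
    "math"
    "strconv"
    "time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns host - guest.
func clockDelta(guestOutput string, host time.Time) (time.Duration, error) {
    secs, err := strconv.ParseFloat(guestOutput, 64)
    if err != nil {
        return 0, err
    }
    guest := time.Unix(0, int64(secs*float64(time.Second)))
    return host.Sub(guest), nil
}

func main() {
    delta, err := clockDelta("1733346988.368807705", time.Now())
    if err != nil {
        panic(err)
    }
    tolerance := 1 * time.Second // assumed threshold for the sketch
    if math.Abs(float64(delta)) > float64(tolerance) {
        fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
    } else {
        fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
    }
}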
	I1204 21:16:28.395788   75137 start.go:83] releasing machines lock for "embed-certs-566991", held for 18.811169167s
	I1204 21:16:28.395828   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:28.396146   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetIP
	I1204 21:16:28.398895   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.399273   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:28.399315   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.399472   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:28.399971   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:28.400138   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:28.400232   75137 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 21:16:28.400282   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:28.400303   75137 ssh_runner.go:195] Run: cat /version.json
	I1204 21:16:28.400325   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:28.402965   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.402990   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.403405   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:28.403434   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.403460   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:28.403475   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.403571   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:28.403643   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:28.403782   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:28.403872   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:28.403938   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:28.404022   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:28.404173   75137 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:16:28.404187   75137 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:16:28.498689   75137 ssh_runner.go:195] Run: systemctl --version
	I1204 21:16:28.503855   75137 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 21:16:28.639322   75137 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 21:16:28.645881   75137 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 21:16:28.645979   75137 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 21:16:28.662196   75137 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 21:16:28.662224   75137 start.go:495] detecting cgroup driver to use...
	I1204 21:16:28.662299   75137 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 21:16:28.679458   75137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 21:16:28.693004   75137 docker.go:217] disabling cri-docker service (if available) ...
	I1204 21:16:28.693078   75137 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 21:16:28.706303   75137 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 21:16:28.719763   75137 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 21:16:28.831131   75137 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 21:16:28.980878   75137 docker.go:233] disabling docker service ...
	I1204 21:16:28.980952   75137 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 21:16:28.995057   75137 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 21:16:29.007885   75137 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 21:16:29.140636   75137 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 21:16:29.281876   75137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 21:16:29.297602   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 21:16:29.314375   75137 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 21:16:29.314444   75137 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:29.324326   75137 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 21:16:29.324381   75137 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:29.333895   75137 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:29.343269   75137 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:29.352608   75137 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 21:16:29.363227   75137 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:29.372736   75137 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:29.389585   75137 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:29.399137   75137 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 21:16:29.407800   75137 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 21:16:29.407859   75137 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 21:16:29.421492   75137 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 21:16:29.431191   75137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:16:29.531043   75137 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 21:16:29.634995   75137 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 21:16:29.635092   75137 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 21:16:29.640185   75137 start.go:563] Will wait 60s for crictl version
	I1204 21:16:29.640249   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:16:29.644117   75137 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 21:16:29.683424   75137 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 21:16:29.683505   75137 ssh_runner.go:195] Run: crio --version
	I1204 21:16:29.709015   75137 ssh_runner.go:195] Run: crio --version
	I1204 21:16:29.737931   75137 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 21:16:28.420626   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .Start
	I1204 21:16:28.420792   75464 main.go:141] libmachine: (old-k8s-version-082859) Ensuring networks are active...
	I1204 21:16:28.421532   75464 main.go:141] libmachine: (old-k8s-version-082859) Ensuring network default is active
	I1204 21:16:28.421902   75464 main.go:141] libmachine: (old-k8s-version-082859) Ensuring network mk-old-k8s-version-082859 is active
	I1204 21:16:28.422289   75464 main.go:141] libmachine: (old-k8s-version-082859) Getting domain xml...
	I1204 21:16:28.422943   75464 main.go:141] libmachine: (old-k8s-version-082859) Creating domain...
	I1204 21:16:29.678419   75464 main.go:141] libmachine: (old-k8s-version-082859) Waiting to get IP...
	I1204 21:16:29.679445   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:29.679839   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:29.679884   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:29.679807   76539 retry.go:31] will retry after 289.179197ms: waiting for machine to come up
	I1204 21:16:29.971185   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:29.971736   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:29.971767   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:29.971681   76539 retry.go:31] will retry after 303.202104ms: waiting for machine to come up
	I1204 21:16:30.277151   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:30.277652   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:30.277681   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:30.277613   76539 retry.go:31] will retry after 410.628355ms: waiting for machine to come up
	I1204 21:16:30.690254   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:30.690792   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:30.690822   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:30.690750   76539 retry.go:31] will retry after 505.05844ms: waiting for machine to come up
	I1204 21:16:31.197454   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:31.197914   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:31.197943   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:31.197868   76539 retry.go:31] will retry after 592.512014ms: waiting for machine to come up
	I1204 21:16:29.739276   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetIP
	I1204 21:16:29.742209   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:29.742581   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:29.742611   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:29.742817   75137 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1204 21:16:29.746557   75137 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:16:29.757975   75137 kubeadm.go:883] updating cluster {Name:embed-certs-566991 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-566991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.82 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I1204 21:16:29.758110   75137 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 21:16:29.758153   75137 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:16:29.790957   75137 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1204 21:16:29.791029   75137 ssh_runner.go:195] Run: which lz4
	I1204 21:16:29.794873   75137 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1204 21:16:29.798613   75137 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1204 21:16:29.798642   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1204 21:16:31.060492   75137 crio.go:462] duration metric: took 1.265651412s to copy over tarball
	I1204 21:16:31.060599   75137 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
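The preload step above lists images with `sudo crictl images --output json`, concludes the kube images are not preloaded, and therefore copies preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 over and extracts it with `tar -I lz4 -C /var`. A minimal sketch of that presence check (the struct fields follow crictl's JSON output; the helper itself is hypothetical, not minikube's crio.go):

package main

import (
    "encoding/json"
    "fmt"
    "os/exec"
    "strings"
)

type crictlImages struct {
    Images []struct {
        RepoTags []string `json:"repoTags"`
    } `json:"images"`
}

// hasImage returns true when any image tag reported by crictl contains want.
func hasImage(want string) (bool, error) {
    out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    if err != nil {
        return false, err
    }
    var list crictlImages
    if err := json.Unmarshal(out, &list); err != nil {
        return false, err
    }
    for _, img := range list.Images {
        for _, tag := range img.RepoTags {
            if strings.Contains(tag, want) {
                return true, nil
            }
        }
    }
    return false, nil
}

func main() {
    ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.2")
    fmt.Println(ok, err) // false → copy preloaded.tar.lz4 and extract it under /var
}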
	I1204 21:16:31.791677   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:31.792193   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:31.792218   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:31.792126   76539 retry.go:31] will retry after 898.531247ms: waiting for machine to come up
	I1204 21:16:32.692886   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:32.693288   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:32.693309   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:32.693246   76539 retry.go:31] will retry after 832.069841ms: waiting for machine to come up
	I1204 21:16:33.526732   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:33.527291   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:33.527324   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:33.527254   76539 retry.go:31] will retry after 962.847408ms: waiting for machine to come up
	I1204 21:16:34.491553   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:34.492032   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:34.492062   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:34.491983   76539 retry.go:31] will retry after 1.207785601s: waiting for machine to come up
	I1204 21:16:35.701559   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:35.702070   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:35.702096   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:35.702031   76539 retry.go:31] will retry after 1.685825115s: waiting for machine to come up
	I1204 21:16:33.200389   75137 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.139761453s)
	I1204 21:16:33.200414   75137 crio.go:469] duration metric: took 2.139886465s to extract the tarball
	I1204 21:16:33.200421   75137 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1204 21:16:33.235706   75137 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:16:33.275780   75137 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 21:16:33.275803   75137 cache_images.go:84] Images are preloaded, skipping loading
	I1204 21:16:33.275811   75137 kubeadm.go:934] updating node { 192.168.39.82 8443 v1.31.2 crio true true} ...
	I1204 21:16:33.275916   75137 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-566991 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.82
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-566991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 21:16:33.276001   75137 ssh_runner.go:195] Run: crio config
	I1204 21:16:33.330445   75137 cni.go:84] Creating CNI manager for ""
	I1204 21:16:33.330470   75137 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:16:33.330479   75137 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 21:16:33.330502   75137 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.82 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-566991 NodeName:embed-certs-566991 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.82"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.82 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 21:16:33.330663   75137 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.82
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-566991"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.82"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.82"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1204 21:16:33.330730   75137 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 21:16:33.340505   75137 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 21:16:33.340586   75137 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 21:16:33.349589   75137 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1204 21:16:33.365156   75137 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 21:16:33.380757   75137 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1204 21:16:33.396851   75137 ssh_runner.go:195] Run: grep 192.168.39.82	control-plane.minikube.internal$ /etc/hosts
	I1204 21:16:33.400473   75137 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.82	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:16:33.411670   75137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:16:33.543788   75137 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:16:33.564105   75137 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991 for IP: 192.168.39.82
	I1204 21:16:33.564138   75137 certs.go:194] generating shared ca certs ...
	I1204 21:16:33.564158   75137 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:16:33.564343   75137 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 21:16:33.564425   75137 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 21:16:33.564443   75137 certs.go:256] generating profile certs ...
	I1204 21:16:33.564570   75137 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/client.key
	I1204 21:16:33.564668   75137 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/apiserver.key.ba71006c
	I1204 21:16:33.564724   75137 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/proxy-client.key
	I1204 21:16:33.564892   75137 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 21:16:33.564945   75137 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 21:16:33.564972   75137 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 21:16:33.565019   75137 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 21:16:33.565052   75137 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 21:16:33.565087   75137 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 21:16:33.565145   75137 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:16:33.566045   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 21:16:33.608433   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 21:16:33.635211   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 21:16:33.672472   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 21:16:33.701021   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1204 21:16:33.731665   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 21:16:33.756414   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 21:16:33.778799   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 21:16:33.801308   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 21:16:33.822986   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 21:16:33.844820   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 21:16:33.866558   75137 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 21:16:33.881830   75137 ssh_runner.go:195] Run: openssl version
	I1204 21:16:33.887334   75137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 21:16:33.897261   75137 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:16:33.901411   75137 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:16:33.901479   75137 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:16:33.906997   75137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 21:16:33.916799   75137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 21:16:33.926687   75137 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 21:16:33.930807   75137 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 21:16:33.930859   75137 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 21:16:33.943622   75137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 21:16:33.958682   75137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 21:16:33.972391   75137 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 21:16:33.977777   75137 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 21:16:33.977822   75137 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 21:16:33.984628   75137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 21:16:33.994531   75137 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 21:16:33.998695   75137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 21:16:34.004299   75137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 21:16:34.009688   75137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 21:16:34.015197   75137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 21:16:34.020625   75137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 21:16:34.025987   75137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
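The "openssl x509 ... -checkend 86400" runs above simply ask whether each control-plane certificate remains valid for at least the next 24 hours before the restart proceeds. A minimal Go sketch of the same check, assuming an illustrative certificate path rather than one read from this log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certExpiresWithin reports whether the PEM-encoded certificate at path
// expires within the given window (the analogue of openssl's -checkend).
func certExpiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when "now + window" is past NotAfter, i.e. the cert expires soon.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Hypothetical path chosen for illustration; the log checks several
	// certificates under /var/lib/minikube/certs.
	expiring, err := certExpiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}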
	I1204 21:16:34.031435   75137 kubeadm.go:392] StartCluster: {Name:embed-certs-566991 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-566991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.82 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:16:34.031517   75137 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 21:16:34.031567   75137 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:16:34.067450   75137 cri.go:89] found id: ""
	I1204 21:16:34.067550   75137 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 21:16:34.077454   75137 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1204 21:16:34.077486   75137 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1204 21:16:34.077536   75137 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1204 21:16:34.086795   75137 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1204 21:16:34.087776   75137 kubeconfig.go:125] found "embed-certs-566991" server: "https://192.168.39.82:8443"
	I1204 21:16:34.089769   75137 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1204 21:16:34.098751   75137 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.82
	I1204 21:16:34.098784   75137 kubeadm.go:1160] stopping kube-system containers ...
	I1204 21:16:34.098798   75137 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1204 21:16:34.098853   75137 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:16:34.138445   75137 cri.go:89] found id: ""
	I1204 21:16:34.138523   75137 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1204 21:16:34.155890   75137 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:16:34.165568   75137 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:16:34.165596   75137 kubeadm.go:157] found existing configuration files:
	
	I1204 21:16:34.165647   75137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:16:34.174688   75137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:16:34.174758   75137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:16:34.183835   75137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:16:34.192637   75137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:16:34.192690   75137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:16:34.201663   75137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:16:34.210254   75137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:16:34.210297   75137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:16:34.219235   75137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:16:34.227890   75137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:16:34.227972   75137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:16:34.236954   75137 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:16:34.246061   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:16:34.352189   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:16:35.133652   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:16:35.320296   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:16:35.384361   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:16:35.458221   75137 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:16:35.458352   75137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:16:35.959480   75137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:16:36.459120   75137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:16:36.959170   75137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:16:37.458423   75137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:16:37.488815   75137 api_server.go:72] duration metric: took 2.030596307s to wait for apiserver process to appear ...
	I1204 21:16:37.488850   75137 api_server.go:88] waiting for apiserver healthz status ...
	I1204 21:16:37.488875   75137 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I1204 21:16:37.489349   75137 api_server.go:269] stopped: https://192.168.39.82:8443/healthz: Get "https://192.168.39.82:8443/healthz": dial tcp 192.168.39.82:8443: connect: connection refused
	I1204 21:16:37.990012   75137 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I1204 21:16:39.696011   75137 api_server.go:279] https://192.168.39.82:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1204 21:16:39.696060   75137 api_server.go:103] status: https://192.168.39.82:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1204 21:16:39.696077   75137 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I1204 21:16:39.705288   75137 api_server.go:279] https://192.168.39.82:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1204 21:16:39.705322   75137 api_server.go:103] status: https://192.168.39.82:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1204 21:16:39.989707   75137 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I1204 21:16:39.993934   75137 api_server.go:279] https://192.168.39.82:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:16:39.993959   75137 api_server.go:103] status: https://192.168.39.82:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:16:40.489545   75137 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I1204 21:16:40.494002   75137 api_server.go:279] https://192.168.39.82:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:16:40.494033   75137 api_server.go:103] status: https://192.168.39.82:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:16:40.989641   75137 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I1204 21:16:40.998171   75137 api_server.go:279] https://192.168.39.82:8443/healthz returned 200:
	ok
	I1204 21:16:41.006208   75137 api_server.go:141] control plane version: v1.31.2
	I1204 21:16:41.006238   75137 api_server.go:131] duration metric: took 3.517379108s to wait for apiserver health ...
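The healthz probe loop above keeps requesting https://192.168.39.82:8443/healthz until the restarted apiserver finally answers 200, tolerating the 403 and 500 responses while post-start hooks (rbac/bootstrap-roles, system priority classes) finish. A rough Go sketch of that polling pattern, with the interval and TLS handling chosen purely for illustration (minikube's api_server.go trusts the cluster CA rather than skipping verification):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// 200 OK or the timeout elapses, logging non-200 bodies along the way.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Verification is skipped only for this sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz reported "ok"
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.39.82:8443/healthz", 3*time.Minute); err != nil {
		fmt.Println(err)
	}
}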
	I1204 21:16:41.006250   75137 cni.go:84] Creating CNI manager for ""
	I1204 21:16:41.006259   75137 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:16:41.008031   75137 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 21:16:37.390104   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:37.390474   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:37.390499   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:37.390433   76539 retry.go:31] will retry after 1.755395869s: waiting for machine to come up
	I1204 21:16:39.148189   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:39.148723   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:39.148754   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:39.148694   76539 retry.go:31] will retry after 2.645343215s: waiting for machine to come up
	I1204 21:16:41.009338   75137 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 21:16:41.026475   75137 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1204 21:16:41.051888   75137 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 21:16:41.064813   75137 system_pods.go:59] 8 kube-system pods found
	I1204 21:16:41.064859   75137 system_pods.go:61] "coredns-7c65d6cfc9-ct5xn" [be113b96-b21f-4fd5-8cd9-11b149a0a838] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1204 21:16:41.064870   75137 system_pods.go:61] "etcd-embed-certs-566991" [23603883-2c42-48ff-95f5-d58f04bab630] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1204 21:16:41.064880   75137 system_pods.go:61] "kube-apiserver-embed-certs-566991" [880279d0-9c57-44b1-b223-cea07fc8552e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1204 21:16:41.064887   75137 system_pods.go:61] "kube-controller-manager-embed-certs-566991" [1512be05-cbf1-48ca-a0a5-db1e320040e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1204 21:16:41.064893   75137 system_pods.go:61] "kube-proxy-4fv72" [22b84591-6767-4414-9869-9d89206a03f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1204 21:16:41.064898   75137 system_pods.go:61] "kube-scheduler-embed-certs-566991" [1eca2a77-0f2a-4d94-992e-22acf8f54649] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1204 21:16:41.064910   75137 system_pods.go:61] "metrics-server-6867b74b74-9vlcd" [1acb08f3-e403-458d-b3e2-e32c07da6afb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:16:41.064922   75137 system_pods.go:61] "storage-provisioner" [f8acdb07-16e7-457f-81b8-85416b849890] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1204 21:16:41.064930   75137 system_pods.go:74] duration metric: took 13.019489ms to wait for pod list to return data ...
	I1204 21:16:41.064944   75137 node_conditions.go:102] verifying NodePressure condition ...
	I1204 21:16:41.068574   75137 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 21:16:41.068607   75137 node_conditions.go:123] node cpu capacity is 2
	I1204 21:16:41.068623   75137 node_conditions.go:105] duration metric: took 3.673752ms to run NodePressure ...
	I1204 21:16:41.068644   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:16:41.356054   75137 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1204 21:16:41.359997   75137 kubeadm.go:739] kubelet initialised
	I1204 21:16:41.360018   75137 kubeadm.go:740] duration metric: took 3.942716ms waiting for restarted kubelet to initialise ...
	I1204 21:16:41.360026   75137 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:16:41.365945   75137 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:41.370858   75137 pod_ready.go:98] node "embed-certs-566991" hosting pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.370886   75137 pod_ready.go:82] duration metric: took 4.912525ms for pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace to be "Ready" ...
	E1204 21:16:41.370904   75137 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-566991" hosting pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.370913   75137 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:41.376666   75137 pod_ready.go:98] node "embed-certs-566991" hosting pod "etcd-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.376689   75137 pod_ready.go:82] duration metric: took 5.763328ms for pod "etcd-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	E1204 21:16:41.376698   75137 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-566991" hosting pod "etcd-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.376705   75137 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:41.381261   75137 pod_ready.go:98] node "embed-certs-566991" hosting pod "kube-apiserver-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.381285   75137 pod_ready.go:82] duration metric: took 4.57138ms for pod "kube-apiserver-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	E1204 21:16:41.381296   75137 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-566991" hosting pod "kube-apiserver-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.381305   75137 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:41.455155   75137 pod_ready.go:98] node "embed-certs-566991" hosting pod "kube-controller-manager-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.455195   75137 pod_ready.go:82] duration metric: took 73.873767ms for pod "kube-controller-manager-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	E1204 21:16:41.455208   75137 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-566991" hosting pod "kube-controller-manager-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.455217   75137 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4fv72" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:41.854723   75137 pod_ready.go:98] node "embed-certs-566991" hosting pod "kube-proxy-4fv72" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.854759   75137 pod_ready.go:82] duration metric: took 399.531662ms for pod "kube-proxy-4fv72" in "kube-system" namespace to be "Ready" ...
	E1204 21:16:41.854773   75137 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-566991" hosting pod "kube-proxy-4fv72" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.854782   75137 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:42.255217   75137 pod_ready.go:98] node "embed-certs-566991" hosting pod "kube-scheduler-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:42.255242   75137 pod_ready.go:82] duration metric: took 400.451937ms for pod "kube-scheduler-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	E1204 21:16:42.255254   75137 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-566991" hosting pod "kube-scheduler-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:42.255263   75137 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:42.655193   75137 pod_ready.go:98] node "embed-certs-566991" hosting pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:42.655222   75137 pod_ready.go:82] duration metric: took 399.948182ms for pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace to be "Ready" ...
	E1204 21:16:42.655234   75137 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-566991" hosting pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:42.655244   75137 pod_ready.go:39] duration metric: took 1.295209634s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:16:42.655263   75137 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 21:16:42.666489   75137 ops.go:34] apiserver oom_adj: -16
	I1204 21:16:42.666504   75137 kubeadm.go:597] duration metric: took 8.589012522s to restartPrimaryControlPlane
	I1204 21:16:42.666512   75137 kubeadm.go:394] duration metric: took 8.635083145s to StartCluster
	I1204 21:16:42.666526   75137 settings.go:142] acquiring lock: {Name:mk51df5708ef0b8fe125ead566b8d3e857234e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:16:42.666587   75137 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:16:42.668175   75137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/kubeconfig: {Name:mk338cb7deb77a607d0c199d94a556bdfd19bef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:16:42.668388   75137 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.82 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 21:16:42.668451   75137 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 21:16:42.668548   75137 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-566991"
	I1204 21:16:42.668569   75137 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-566991"
	W1204 21:16:42.668576   75137 addons.go:243] addon storage-provisioner should already be in state true
	I1204 21:16:42.668605   75137 host.go:66] Checking if "embed-certs-566991" exists ...
	I1204 21:16:42.668611   75137 addons.go:69] Setting default-storageclass=true in profile "embed-certs-566991"
	I1204 21:16:42.668628   75137 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-566991"
	I1204 21:16:42.668661   75137 config.go:182] Loaded profile config "embed-certs-566991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:16:42.668675   75137 addons.go:69] Setting metrics-server=true in profile "embed-certs-566991"
	I1204 21:16:42.668719   75137 addons.go:234] Setting addon metrics-server=true in "embed-certs-566991"
	W1204 21:16:42.668738   75137 addons.go:243] addon metrics-server should already be in state true
	I1204 21:16:42.668796   75137 host.go:66] Checking if "embed-certs-566991" exists ...
	I1204 21:16:42.669037   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:42.669094   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:42.669037   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:42.669158   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:42.669169   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:42.669210   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:42.671592   75137 out.go:177] * Verifying Kubernetes components...
	I1204 21:16:42.673134   75137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:16:42.684920   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43467
	I1204 21:16:42.684939   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35079
	I1204 21:16:42.685084   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46109
	I1204 21:16:42.685298   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:42.685386   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:42.685791   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:42.685810   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:42.685905   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:42.685926   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:42.686119   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:42.686297   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:42.686401   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetState
	I1204 21:16:42.686833   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:42.686880   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:42.687004   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:42.687527   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:42.687545   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:42.687890   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:42.688475   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:42.688522   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:42.689348   75137 addons.go:234] Setting addon default-storageclass=true in "embed-certs-566991"
	W1204 21:16:42.689365   75137 addons.go:243] addon default-storageclass should already be in state true
	I1204 21:16:42.689385   75137 host.go:66] Checking if "embed-certs-566991" exists ...
	I1204 21:16:42.689647   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:42.689682   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:42.702175   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33089
	I1204 21:16:42.702672   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:42.703170   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:42.703188   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:42.703226   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38195
	I1204 21:16:42.703537   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:42.703674   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:42.703716   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetState
	I1204 21:16:42.704271   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:42.704295   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:42.704612   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:42.705178   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:42.705218   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:42.705552   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:42.707473   75137 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1204 21:16:42.707479   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33249
	I1204 21:16:42.707808   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:42.708177   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:42.708192   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:42.708551   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:42.708692   75137 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1204 21:16:42.708703   75137 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1204 21:16:42.708713   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetState
	I1204 21:16:42.708714   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:42.710474   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:42.711964   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:42.712040   75137 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:16:42.712386   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:42.712409   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:42.712558   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:42.712726   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:42.712867   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:42.713010   75137 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:16:42.713257   75137 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:16:42.713268   75137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 21:16:42.713279   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:42.715855   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:42.716296   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:42.716325   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:42.716472   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:42.716632   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:42.716744   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:42.716860   75137 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:16:42.727365   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40443
	I1204 21:16:42.727830   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:42.728302   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:42.728330   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:42.728651   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:42.728838   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetState
	I1204 21:16:42.730408   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:42.730603   75137 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 21:16:42.730617   75137 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 21:16:42.730630   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:42.733179   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:42.733523   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:42.733550   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:42.733695   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:42.733846   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:42.733991   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:42.734105   75137 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:16:42.871601   75137 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:16:42.889651   75137 node_ready.go:35] waiting up to 6m0s for node "embed-certs-566991" to be "Ready" ...
	I1204 21:16:43.016150   75137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:16:43.017983   75137 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1204 21:16:43.018006   75137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1204 21:16:43.048666   75137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 21:16:43.061060   75137 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1204 21:16:43.061089   75137 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1204 21:16:43.105294   75137 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 21:16:43.105320   75137 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1204 21:16:43.175330   75137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 21:16:44.324823   75137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.276121269s)
	I1204 21:16:44.324881   75137 main.go:141] libmachine: Making call to close driver server
	I1204 21:16:44.324889   75137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.308706273s)
	I1204 21:16:44.324893   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:16:44.324908   75137 main.go:141] libmachine: Making call to close driver server
	I1204 21:16:44.324922   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:16:44.325213   75137 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:16:44.325264   75137 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:16:44.325289   75137 main.go:141] libmachine: Making call to close driver server
	I1204 21:16:44.325272   75137 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:16:44.325297   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:16:44.325304   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:16:44.325302   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:16:44.325381   75137 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:16:44.325409   75137 main.go:141] libmachine: Making call to close driver server
	I1204 21:16:44.325417   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:16:44.325539   75137 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:16:44.325552   75137 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:16:44.325574   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:16:44.325751   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:16:44.325792   75137 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:16:44.325813   75137 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:16:44.331866   75137 main.go:141] libmachine: Making call to close driver server
	I1204 21:16:44.331881   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:16:44.332102   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:16:44.332139   75137 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:16:44.332149   75137 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:16:44.398251   75137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.222883924s)
	I1204 21:16:44.398300   75137 main.go:141] libmachine: Making call to close driver server
	I1204 21:16:44.398312   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:16:44.398563   75137 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:16:44.398583   75137 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:16:44.398590   75137 main.go:141] libmachine: Making call to close driver server
	I1204 21:16:44.398597   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:16:44.398606   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:16:44.398855   75137 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:16:44.398878   75137 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:16:44.398888   75137 addons.go:475] Verifying addon metrics-server=true in "embed-certs-566991"
	I1204 21:16:44.398889   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:16:44.400887   75137 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1204 21:16:41.796452   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:41.796909   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:41.796943   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:41.796881   76539 retry.go:31] will retry after 2.938505727s: waiting for machine to come up
	I1204 21:16:44.737247   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:44.737772   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:44.737796   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:44.737726   76539 retry.go:31] will retry after 5.554286056s: waiting for machine to come up
	I1204 21:16:44.402265   75137 addons.go:510] duration metric: took 1.733822331s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1204 21:16:44.894002   75137 node_ready.go:53] node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:50.293115   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.293594   75464 main.go:141] libmachine: (old-k8s-version-082859) Found IP for machine: 192.168.72.180
	I1204 21:16:50.293638   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has current primary IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.293651   75464 main.go:141] libmachine: (old-k8s-version-082859) Reserving static IP address...
	I1204 21:16:50.294066   75464 main.go:141] libmachine: (old-k8s-version-082859) Reserved static IP address: 192.168.72.180
	I1204 21:16:50.294102   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "old-k8s-version-082859", mac: "52:54:00:30:6e:ae", ip: "192.168.72.180"} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.294118   75464 main.go:141] libmachine: (old-k8s-version-082859) Waiting for SSH to be available...
	I1204 21:16:50.294148   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | skip adding static IP to network mk-old-k8s-version-082859 - found existing host DHCP lease matching {name: "old-k8s-version-082859", mac: "52:54:00:30:6e:ae", ip: "192.168.72.180"}
	I1204 21:16:50.294164   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | Getting to WaitForSSH function...
	I1204 21:16:50.296406   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.296738   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.296767   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.296893   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | Using SSH client type: external
	I1204 21:16:50.296917   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa (-rw-------)
	I1204 21:16:50.296949   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.180 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 21:16:50.296966   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | About to run SSH command:
	I1204 21:16:50.296978   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | exit 0
	I1204 21:16:50.419468   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | SSH cmd err, output: <nil>: 
	I1204 21:16:50.419834   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetConfigRaw
	I1204 21:16:50.420486   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetIP
	I1204 21:16:50.422797   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.423098   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.423123   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.423319   75464 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/config.json ...
	I1204 21:16:50.423555   75464 machine.go:93] provisionDockerMachine start ...
	I1204 21:16:50.423579   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:50.423793   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:50.426050   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.426372   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.426402   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.426520   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:50.426706   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.426886   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.427011   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:50.427208   75464 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:50.427439   75464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1204 21:16:50.427453   75464 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 21:16:50.527818   75464 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 21:16:50.527853   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetMachineName
	I1204 21:16:50.528150   75464 buildroot.go:166] provisioning hostname "old-k8s-version-082859"
	I1204 21:16:50.528188   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetMachineName
	I1204 21:16:50.528423   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:50.531470   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.531920   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.531949   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.532195   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:50.532400   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.532575   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.532733   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:50.532911   75464 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:50.533125   75464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1204 21:16:50.533138   75464 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-082859 && echo "old-k8s-version-082859" | sudo tee /etc/hostname
	I1204 21:16:50.653111   75464 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-082859
	
	I1204 21:16:50.653146   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:50.656340   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.656681   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.656715   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.656946   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:50.657161   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.657338   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.657493   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:50.657649   75464 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:50.657859   75464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1204 21:16:50.657879   75464 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-082859' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-082859/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-082859' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 21:16:50.772193   75464 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 21:16:50.772236   75464 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 21:16:50.772265   75464 buildroot.go:174] setting up certificates
	I1204 21:16:50.772282   75464 provision.go:84] configureAuth start
	I1204 21:16:50.772299   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetMachineName
	I1204 21:16:50.772611   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetIP
	I1204 21:16:50.775486   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.775889   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.775917   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.776053   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:50.778293   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.778611   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.778640   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.778859   75464 provision.go:143] copyHostCerts
	I1204 21:16:50.778920   75464 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 21:16:50.778934   75464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 21:16:50.778991   75464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 21:16:50.779093   75464 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 21:16:50.779106   75464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 21:16:50.779134   75464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 21:16:50.779279   75464 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 21:16:50.779291   75464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 21:16:50.779317   75464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 21:16:50.779411   75464 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-082859 san=[127.0.0.1 192.168.72.180 localhost minikube old-k8s-version-082859]
	I1204 21:16:50.991857   75464 provision.go:177] copyRemoteCerts
	I1204 21:16:50.991917   75464 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 21:16:50.991939   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:50.994612   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.994999   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.995028   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.995178   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:50.995427   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.995587   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:50.995731   75464 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa Username:docker}
	I1204 21:16:51.074162   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 21:16:51.097649   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1204 21:16:51.120589   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1204 21:16:51.143303   75464 provision.go:87] duration metric: took 371.008346ms to configureAuth
	I1204 21:16:51.143324   75464 buildroot.go:189] setting minikube options for container-runtime
	I1204 21:16:51.143500   75464 config.go:182] Loaded profile config "old-k8s-version-082859": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1204 21:16:51.143561   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:51.146357   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.146676   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:51.146715   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.146867   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:51.147061   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.147275   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.147480   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:51.147672   75464 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:51.147851   75464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1204 21:16:51.147872   75464 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 21:16:51.587574   75746 start.go:364] duration metric: took 3m48.834641003s to acquireMachinesLock for "default-k8s-diff-port-439360"
	I1204 21:16:51.587653   75746 start.go:96] Skipping create...Using existing machine configuration
	I1204 21:16:51.587665   75746 fix.go:54] fixHost starting: 
	I1204 21:16:51.588066   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:51.588117   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:51.604628   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41655
	I1204 21:16:51.605057   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:51.605553   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:16:51.605580   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:51.605940   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:51.606149   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:16:51.606327   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetState
	I1204 21:16:51.608008   75746 fix.go:112] recreateIfNeeded on default-k8s-diff-port-439360: state=Stopped err=<nil>
	I1204 21:16:51.608043   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	W1204 21:16:51.608211   75746 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 21:16:51.609867   75746 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-439360" ...
	I1204 21:16:47.393499   75137 node_ready.go:53] node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:49.893470   75137 node_ready.go:53] node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:50.393615   75137 node_ready.go:49] node "embed-certs-566991" has status "Ready":"True"
	I1204 21:16:50.393638   75137 node_ready.go:38] duration metric: took 7.503954553s for node "embed-certs-566991" to be "Ready" ...
	I1204 21:16:50.393648   75137 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:16:50.398881   75137 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:51.611005   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Start
	I1204 21:16:51.611185   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Ensuring networks are active...
	I1204 21:16:51.612110   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Ensuring network default is active
	I1204 21:16:51.612529   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Ensuring network mk-default-k8s-diff-port-439360 is active
	I1204 21:16:51.612978   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Getting domain xml...
	I1204 21:16:51.613795   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Creating domain...
	I1204 21:16:51.367959   75464 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 21:16:51.367992   75464 machine.go:96] duration metric: took 944.422035ms to provisionDockerMachine
	I1204 21:16:51.368004   75464 start.go:293] postStartSetup for "old-k8s-version-082859" (driver="kvm2")
	I1204 21:16:51.368014   75464 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 21:16:51.368030   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:51.368382   75464 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 21:16:51.368431   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:51.371253   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.371631   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:51.371667   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.371831   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:51.372033   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.372201   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:51.372338   75464 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa Username:docker}
	I1204 21:16:51.449712   75464 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 21:16:51.453668   75464 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 21:16:51.453694   75464 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 21:16:51.453771   75464 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 21:16:51.453867   75464 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 21:16:51.453995   75464 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 21:16:51.463766   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:16:51.486114   75464 start.go:296] duration metric: took 118.097017ms for postStartSetup
	I1204 21:16:51.486162   75464 fix.go:56] duration metric: took 23.090160362s for fixHost
	I1204 21:16:51.486190   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:51.488901   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.489286   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:51.489317   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.489450   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:51.489662   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.489835   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.489975   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:51.490137   75464 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:51.490373   75464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1204 21:16:51.490386   75464 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 21:16:51.587355   75464 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733347011.543416414
	
	I1204 21:16:51.587402   75464 fix.go:216] guest clock: 1733347011.543416414
	I1204 21:16:51.587413   75464 fix.go:229] Guest: 2024-12-04 21:16:51.543416414 +0000 UTC Remote: 2024-12-04 21:16:51.486170924 +0000 UTC m=+270.217910239 (delta=57.24549ms)
	I1204 21:16:51.587442   75464 fix.go:200] guest clock delta is within tolerance: 57.24549ms
	I1204 21:16:51.587450   75464 start.go:83] releasing machines lock for "old-k8s-version-082859", held for 23.191479372s
	I1204 21:16:51.587484   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:51.587753   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetIP
	I1204 21:16:51.590521   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.590901   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:51.590933   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.591076   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:51.591556   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:51.591757   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:51.591857   75464 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 21:16:51.591897   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:51.592007   75464 ssh_runner.go:195] Run: cat /version.json
	I1204 21:16:51.592024   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:51.594840   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.595093   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.595267   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:51.595303   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.595349   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:51.595425   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.595529   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:51.595614   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:51.595714   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.595851   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:51.595872   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.596038   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:51.596091   75464 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa Username:docker}
	I1204 21:16:51.596192   75464 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa Username:docker}
	I1204 21:16:51.695215   75464 ssh_runner.go:195] Run: systemctl --version
	I1204 21:16:51.700624   75464 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 21:16:51.849457   75464 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 21:16:51.856420   75464 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 21:16:51.856506   75464 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 21:16:51.876202   75464 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 21:16:51.876230   75464 start.go:495] detecting cgroup driver to use...
	I1204 21:16:51.876311   75464 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 21:16:51.894549   75464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 21:16:51.911154   75464 docker.go:217] disabling cri-docker service (if available) ...
	I1204 21:16:51.911218   75464 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 21:16:51.924220   75464 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 21:16:51.936675   75464 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 21:16:52.058517   75464 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 21:16:52.224124   75464 docker.go:233] disabling docker service ...
	I1204 21:16:52.224202   75464 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 21:16:52.239294   75464 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 21:16:52.253779   75464 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 21:16:52.384577   75464 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 21:16:52.515024   75464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 21:16:52.529456   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 21:16:52.551978   75464 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1204 21:16:52.552043   75464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:52.563083   75464 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 21:16:52.563165   75464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:52.573409   75464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:52.583614   75464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:52.594313   75464 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 21:16:52.604389   75464 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 21:16:52.613326   75464 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 21:16:52.613402   75464 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 21:16:52.627764   75464 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 21:16:52.637330   75464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:16:52.755111   75464 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 21:16:52.844027   75464 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 21:16:52.844093   75464 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 21:16:52.848602   75464 start.go:563] Will wait 60s for crictl version
	I1204 21:16:52.848676   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:52.852127   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 21:16:52.892934   75464 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 21:16:52.893076   75464 ssh_runner.go:195] Run: crio --version
	I1204 21:16:52.925376   75464 ssh_runner.go:195] Run: crio --version
	I1204 21:16:52.954480   75464 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1204 21:16:52.955897   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetIP
	I1204 21:16:52.958964   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:52.959353   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:52.959404   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:52.959641   75464 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1204 21:16:52.963601   75464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:16:52.975417   75464 kubeadm.go:883] updating cluster {Name:old-k8s-version-082859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-082859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 21:16:52.975578   75464 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1204 21:16:52.975644   75464 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:16:53.022050   75464 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1204 21:16:53.022128   75464 ssh_runner.go:195] Run: which lz4
	I1204 21:16:53.025986   75464 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1204 21:16:53.029928   75464 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1204 21:16:53.029962   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1204 21:16:54.579699   75464 crio.go:462] duration metric: took 1.553735037s to copy over tarball
	I1204 21:16:54.579783   75464 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1204 21:16:52.406305   75137 pod_ready.go:103] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"False"
	I1204 21:16:54.905969   75137 pod_ready.go:103] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"False"
	I1204 21:16:56.907170   75137 pod_ready.go:103] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"False"
	I1204 21:16:52.907033   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting to get IP...
	I1204 21:16:52.908195   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:52.908629   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:52.908717   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:52.908619   76731 retry.go:31] will retry after 296.289488ms: waiting for machine to come up
	I1204 21:16:53.207388   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:53.207971   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:53.208003   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:53.207935   76731 retry.go:31] will retry after 336.470328ms: waiting for machine to come up
	I1204 21:16:53.546821   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:53.547399   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:53.547439   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:53.547320   76731 retry.go:31] will retry after 368.42782ms: waiting for machine to come up
	I1204 21:16:53.917796   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:53.918528   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:53.918556   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:53.918431   76731 retry.go:31] will retry after 436.479409ms: waiting for machine to come up
	I1204 21:16:54.357126   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:54.357698   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:54.357732   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:54.357643   76731 retry.go:31] will retry after 752.80332ms: waiting for machine to come up
	I1204 21:16:55.112409   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:55.112880   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:55.112907   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:55.112827   76731 retry.go:31] will retry after 649.088241ms: waiting for machine to come up
	I1204 21:16:55.763391   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:55.763912   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:55.763956   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:55.763859   76731 retry.go:31] will retry after 1.037502744s: waiting for machine to come up
	I1204 21:16:56.803681   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:56.804080   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:56.804114   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:56.804035   76731 retry.go:31] will retry after 1.021780396s: waiting for machine to come up
	I1204 21:16:57.410381   75464 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.830568445s)
	I1204 21:16:57.410444   75464 crio.go:469] duration metric: took 2.830692434s to extract the tarball
	I1204 21:16:57.410455   75464 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1204 21:16:57.452008   75464 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:16:57.484771   75464 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1204 21:16:57.484800   75464 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1204 21:16:57.484880   75464 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:16:57.484917   75464 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:57.484929   75464 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:57.484945   75464 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:57.484995   75464 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1204 21:16:57.484922   75464 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:57.485007   75464 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:57.485039   75464 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1204 21:16:57.486618   75464 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1204 21:16:57.486824   75464 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:57.486847   75464 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:57.486892   75464 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:16:57.486905   75464 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:57.486828   75464 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:57.486944   75464 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:57.486829   75464 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1204 21:16:57.655649   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:57.656853   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:57.667236   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:57.689357   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:57.698439   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1204 21:16:57.726269   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1204 21:16:57.727235   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:57.747271   75464 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1204 21:16:57.747329   75464 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:57.747332   75464 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1204 21:16:57.747364   75464 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:57.747500   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.747402   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.757217   75464 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1204 21:16:57.757260   75464 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:57.757319   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.800711   75464 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1204 21:16:57.800752   75464 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:57.800803   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.814692   75464 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1204 21:16:57.814738   75464 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1204 21:16:57.814789   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.829660   75464 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1204 21:16:57.829698   75464 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:57.829706   75464 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1204 21:16:57.829738   75464 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1204 21:16:57.829752   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.829764   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:57.829773   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.829821   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:57.829877   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:57.829909   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:57.829955   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1204 21:16:57.929510   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:57.929559   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:57.929579   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:57.929618   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1204 21:16:57.940211   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:57.940309   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:57.940359   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1204 21:16:58.051710   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1204 21:16:58.067494   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:58.067504   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:58.067573   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:58.083777   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1204 21:16:58.083833   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:58.083891   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:58.165786   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1204 21:16:58.229739   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1204 21:16:58.229803   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1204 21:16:58.229904   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:58.229951   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1204 21:16:58.230001   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1204 21:16:58.230045   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1204 21:16:58.261333   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1204 21:16:58.271293   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1204 21:16:58.405498   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:16:58.549255   75464 cache_images.go:92] duration metric: took 1.064434163s to LoadCachedImages
	W1204 21:16:58.549354   75464 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I1204 21:16:58.549372   75464 kubeadm.go:934] updating node { 192.168.72.180 8443 v1.20.0 crio true true} ...
	I1204 21:16:58.549512   75464 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-082859 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-082859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
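The [Unit]/[Service] drop-in above only takes effect once systemd reloads its unit files, which the log does a few lines further down (daemon-reload followed by starting kubelet). A rough manual equivalent, assuming the drop-in has been saved locally as 10-kubeadm.conf (the same target path the log copies to), would be:

    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    sudo cp 10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    sudo systemctl daemon-reload
    sudo systemctl restart kubelet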
	I1204 21:16:58.549591   75464 ssh_runner.go:195] Run: crio config
	I1204 21:16:58.610182   75464 cni.go:84] Creating CNI manager for ""
	I1204 21:16:58.610209   75464 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:16:58.610221   75464 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 21:16:58.610246   75464 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.180 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-082859 NodeName:old-k8s-version-082859 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1204 21:16:58.610432   75464 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.180
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-082859"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.180
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.180"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
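The kubeadm config printed above is staged as /var/tmp/minikube/kubeadm.yaml.new and later copied to /var/tmp/minikube/kubeadm.yaml before the init phases run. A quick sanity check of such a file, assuming the bundled v1.20.0 kubeadm binary accepts --dry-run, is:

    sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run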
	
	I1204 21:16:58.610512   75464 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1204 21:16:58.620337   75464 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 21:16:58.620421   75464 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 21:16:58.629244   75464 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1204 21:16:58.654214   75464 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 21:16:58.671268   75464 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1204 21:16:58.688068   75464 ssh_runner.go:195] Run: grep 192.168.72.180	control-plane.minikube.internal$ /etc/hosts
	I1204 21:16:58.691513   75464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.180	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:16:58.703609   75464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:16:58.831984   75464 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:16:58.850324   75464 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859 for IP: 192.168.72.180
	I1204 21:16:58.850354   75464 certs.go:194] generating shared ca certs ...
	I1204 21:16:58.850382   75464 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:16:58.850592   75464 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 21:16:58.850658   75464 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 21:16:58.850677   75464 certs.go:256] generating profile certs ...
	I1204 21:16:58.850811   75464 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/client.key
	I1204 21:16:58.850892   75464 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/apiserver.key.8d7b2cb2
	I1204 21:16:58.850958   75464 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/proxy-client.key
	I1204 21:16:58.851169   75464 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 21:16:58.851232   75464 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 21:16:58.851249   75464 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 21:16:58.851294   75464 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 21:16:58.851343   75464 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 21:16:58.851420   75464 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 21:16:58.851508   75464 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:16:58.852607   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 21:16:58.880792   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 21:16:58.913556   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 21:16:58.943549   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 21:16:58.981463   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1204 21:16:59.012983   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 21:16:59.042980   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 21:16:59.077664   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 21:16:59.105764   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 21:16:59.129236   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 21:16:59.153845   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 21:16:59.177201   75464 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 21:16:59.193861   75464 ssh_runner.go:195] Run: openssl version
	I1204 21:16:59.199898   75464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 21:16:59.211323   75464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:16:59.215867   75464 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:16:59.215922   75464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:16:59.221792   75464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 21:16:59.232621   75464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 21:16:59.243171   75464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 21:16:59.247786   75464 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 21:16:59.247847   75464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 21:16:59.253293   75464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 21:16:59.264011   75464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 21:16:59.274696   75464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 21:16:59.279083   75464 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 21:16:59.279142   75464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 21:16:59.284885   75464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 21:16:59.295857   75464 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 21:16:59.300285   75464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 21:16:59.306222   75464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 21:16:59.312113   75464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 21:16:59.318289   75464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 21:16:59.323933   75464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 21:16:59.329593   75464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
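Each -checkend 86400 call above asks openssl whether the certificate will still be valid 24 hours (86400 seconds) from now; exit status 0 means it will not expire within that window. A minimal standalone check in the same spirit, using the apiserver.crt path copied earlier in this log:

    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
      echo "apiserver.crt is valid for at least another 24h"
    else
      echo "apiserver.crt expires within 24h (or is already expired)"
    fi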
	I1204 21:16:59.336271   75464 kubeadm.go:392] StartCluster: {Name:old-k8s-version-082859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-082859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:16:59.336388   75464 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 21:16:59.336445   75464 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:16:59.377102   75464 cri.go:89] found id: ""
	I1204 21:16:59.377186   75464 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 21:16:59.387322   75464 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1204 21:16:59.387348   75464 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1204 21:16:59.387426   75464 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1204 21:16:59.397012   75464 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1204 21:16:59.398490   75464 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-082859" does not appear in /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:16:59.399594   75464 kubeconfig.go:62] /home/jenkins/minikube-integration/19985-10581/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-082859" cluster setting kubeconfig missing "old-k8s-version-082859" context setting]
	I1204 21:16:59.401105   75464 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/kubeconfig: {Name:mk338cb7deb77a607d0c199d94a556bdfd19bef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:16:59.519931   75464 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1204 21:16:59.529805   75464 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.180
	I1204 21:16:59.529848   75464 kubeadm.go:1160] stopping kube-system containers ...
	I1204 21:16:59.529862   75464 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1204 21:16:59.529917   75464 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:16:59.564385   75464 cri.go:89] found id: ""
	I1204 21:16:59.564455   75464 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1204 21:16:59.580273   75464 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:16:59.590510   75464 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:16:59.590536   75464 kubeadm.go:157] found existing configuration files:
	
	I1204 21:16:59.590591   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:16:59.599597   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:16:59.599665   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:16:59.609075   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:16:59.618209   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:16:59.618281   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:16:59.627558   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:16:59.636062   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:16:59.636117   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:16:59.645337   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:16:59.653985   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:16:59.654027   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:16:59.662796   75464 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:16:59.671564   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:16:59.805252   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:00.525460   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:00.762769   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:00.873276   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:00.988761   75464 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:17:00.988887   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:16:58.405630   75137 pod_ready.go:93] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"True"
	I1204 21:16:58.405654   75137 pod_ready.go:82] duration metric: took 8.006745651s for pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:58.405669   75137 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:58.411605   75137 pod_ready.go:93] pod "etcd-embed-certs-566991" in "kube-system" namespace has status "Ready":"True"
	I1204 21:16:58.411634   75137 pod_ready.go:82] duration metric: took 5.952577ms for pod "etcd-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:58.411646   75137 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:58.421660   75137 pod_ready.go:93] pod "kube-apiserver-embed-certs-566991" in "kube-system" namespace has status "Ready":"True"
	I1204 21:16:58.421691   75137 pod_ready.go:82] duration metric: took 10.035417ms for pod "kube-apiserver-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:58.421708   75137 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:59.044823   75137 pod_ready.go:93] pod "kube-controller-manager-embed-certs-566991" in "kube-system" namespace has status "Ready":"True"
	I1204 21:16:59.044853   75137 pod_ready.go:82] duration metric: took 623.135154ms for pod "kube-controller-manager-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:59.044867   75137 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4fv72" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:59.051742   75137 pod_ready.go:93] pod "kube-proxy-4fv72" in "kube-system" namespace has status "Ready":"True"
	I1204 21:16:59.051768   75137 pod_ready.go:82] duration metric: took 6.892711ms for pod "kube-proxy-4fv72" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:59.051782   75137 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:59.058398   75137 pod_ready.go:93] pod "kube-scheduler-embed-certs-566991" in "kube-system" namespace has status "Ready":"True"
	I1204 21:16:59.058429   75137 pod_ready.go:82] duration metric: took 6.638291ms for pod "kube-scheduler-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:59.058444   75137 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:01.066575   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:16:57.826965   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:57.827542   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:57.827566   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:57.827491   76731 retry.go:31] will retry after 1.453756282s: waiting for machine to come up
	I1204 21:16:59.282497   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:59.283001   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:59.283025   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:59.282950   76731 retry.go:31] will retry after 1.921010852s: waiting for machine to come up
	I1204 21:17:01.205877   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:01.206359   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:17:01.206398   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:17:01.206301   76731 retry.go:31] will retry after 2.279555962s: waiting for machine to come up
	I1204 21:17:01.489204   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:01.989039   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:02.489053   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:02.988923   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:03.489839   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:03.989130   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:04.489603   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:04.989625   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:05.489951   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:05.989787   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:03.066938   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:05.565106   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:03.488557   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:03.488993   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:17:03.489064   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:17:03.488956   76731 retry.go:31] will retry after 2.80928606s: waiting for machine to come up
	I1204 21:17:06.300625   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:06.301069   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:17:06.301096   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:17:06.301025   76731 retry.go:31] will retry after 4.272897585s: waiting for machine to come up
	I1204 21:17:06.489826   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:06.989767   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:07.489954   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:07.989772   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:08.488905   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:08.989834   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:09.489780   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:09.989021   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:10.489348   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:10.989123   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:08.065690   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:10.566216   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:12.055921   75012 start.go:364] duration metric: took 57.468802465s to acquireMachinesLock for "no-preload-534766"
	I1204 21:17:12.055984   75012 start.go:96] Skipping create...Using existing machine configuration
	I1204 21:17:12.055996   75012 fix.go:54] fixHost starting: 
	I1204 21:17:12.056471   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:17:12.056520   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:17:12.074414   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46455
	I1204 21:17:12.074839   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:17:12.075295   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:17:12.075318   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:17:12.075670   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:17:12.075864   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:17:12.076055   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetState
	I1204 21:17:12.077496   75012 fix.go:112] recreateIfNeeded on no-preload-534766: state=Stopped err=<nil>
	I1204 21:17:12.077518   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	W1204 21:17:12.077683   75012 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 21:17:12.079503   75012 out.go:177] * Restarting existing kvm2 VM for "no-preload-534766" ...
	I1204 21:17:10.578907   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.579430   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Found IP for machine: 192.168.50.171
	I1204 21:17:10.579465   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Reserving static IP address...
	I1204 21:17:10.579482   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has current primary IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.579876   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-439360", mac: "52:54:00:ec:46:31", ip: "192.168.50.171"} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:10.579899   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | skip adding static IP to network mk-default-k8s-diff-port-439360 - found existing host DHCP lease matching {name: "default-k8s-diff-port-439360", mac: "52:54:00:ec:46:31", ip: "192.168.50.171"}
	I1204 21:17:10.579913   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Reserved static IP address: 192.168.50.171
	I1204 21:17:10.579923   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for SSH to be available...
	I1204 21:17:10.579933   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Getting to WaitForSSH function...
	I1204 21:17:10.582141   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.582536   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:10.582564   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.582763   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Using SSH client type: external
	I1204 21:17:10.582808   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa (-rw-------)
	I1204 21:17:10.582840   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.171 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 21:17:10.582851   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | About to run SSH command:
	I1204 21:17:10.582859   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | exit 0
	I1204 21:17:10.707352   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | SSH cmd err, output: <nil>: 
	I1204 21:17:10.707801   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetConfigRaw
	I1204 21:17:10.708495   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetIP
	I1204 21:17:10.710799   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.711127   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:10.711159   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.711348   75746 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/config.json ...
	I1204 21:17:10.711562   75746 machine.go:93] provisionDockerMachine start ...
	I1204 21:17:10.711579   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:17:10.711817   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:10.713971   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.714317   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:10.714344   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.714495   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:10.714683   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:10.714811   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:10.714964   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:10.715109   75746 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:10.715298   75746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.171 22 <nil> <nil>}
	I1204 21:17:10.715311   75746 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 21:17:10.823410   75746 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 21:17:10.823443   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetMachineName
	I1204 21:17:10.823718   75746 buildroot.go:166] provisioning hostname "default-k8s-diff-port-439360"
	I1204 21:17:10.823741   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetMachineName
	I1204 21:17:10.823955   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:10.826607   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.826953   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:10.826977   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.827140   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:10.827331   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:10.827533   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:10.827676   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:10.827852   75746 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:10.828068   75746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.171 22 <nil> <nil>}
	I1204 21:17:10.828084   75746 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-439360 && echo "default-k8s-diff-port-439360" | sudo tee /etc/hostname
	I1204 21:17:10.948599   75746 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-439360
	
	I1204 21:17:10.948633   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:10.951336   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.951719   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:10.951765   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.951905   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:10.952108   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:10.952276   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:10.952423   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:10.952570   75746 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:10.952753   75746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.171 22 <nil> <nil>}
	I1204 21:17:10.952777   75746 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-439360' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-439360/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-439360' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 21:17:11.072543   75746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 21:17:11.072580   75746 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 21:17:11.072611   75746 buildroot.go:174] setting up certificates
	I1204 21:17:11.072620   75746 provision.go:84] configureAuth start
	I1204 21:17:11.072629   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetMachineName
	I1204 21:17:11.072933   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetIP
	I1204 21:17:11.075443   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.075822   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:11.075868   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.075965   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:11.077957   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.078286   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:11.078319   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.078449   75746 provision.go:143] copyHostCerts
	I1204 21:17:11.078506   75746 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 21:17:11.078517   75746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 21:17:11.078571   75746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 21:17:11.078671   75746 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 21:17:11.078681   75746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 21:17:11.078702   75746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 21:17:11.078752   75746 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 21:17:11.078759   75746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 21:17:11.078776   75746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 21:17:11.078819   75746 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-439360 san=[127.0.0.1 192.168.50.171 default-k8s-diff-port-439360 localhost minikube]
	I1204 21:17:11.404256   75746 provision.go:177] copyRemoteCerts
	I1204 21:17:11.404320   75746 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 21:17:11.404348   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:11.406963   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.407316   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:11.407343   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.407542   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:11.407706   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:11.407881   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:11.407991   75746 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa Username:docker}
	I1204 21:17:11.493691   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 21:17:11.519867   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1204 21:17:11.542295   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1204 21:17:11.564775   75746 provision.go:87] duration metric: took 492.141737ms to configureAuth
	I1204 21:17:11.564801   75746 buildroot.go:189] setting minikube options for container-runtime
	I1204 21:17:11.564975   75746 config.go:182] Loaded profile config "default-k8s-diff-port-439360": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:17:11.565063   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:11.567990   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.568364   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:11.568394   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.568556   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:11.568780   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:11.568951   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:11.569102   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:11.569277   75746 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:11.569476   75746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.171 22 <nil> <nil>}
	I1204 21:17:11.569494   75746 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 21:17:11.809413   75746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 21:17:11.809462   75746 machine.go:96] duration metric: took 1.097886094s to provisionDockerMachine
	I1204 21:17:11.809482   75746 start.go:293] postStartSetup for "default-k8s-diff-port-439360" (driver="kvm2")
	I1204 21:17:11.809493   75746 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 21:17:11.809510   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:17:11.809913   75746 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 21:17:11.809954   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:11.812724   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.813137   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:11.813183   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.813276   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:11.813481   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:11.813659   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:11.813807   75746 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa Username:docker}
	I1204 21:17:11.901984   75746 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 21:17:11.906206   75746 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 21:17:11.906243   75746 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 21:17:11.906323   75746 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 21:17:11.906421   75746 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 21:17:11.906550   75746 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 21:17:11.915692   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:17:11.938378   75746 start.go:296] duration metric: took 128.880842ms for postStartSetup
	I1204 21:17:11.938425   75746 fix.go:56] duration metric: took 20.350760099s for fixHost
	I1204 21:17:11.938449   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:11.941283   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.941662   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:11.941683   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.941814   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:11.942015   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:11.942207   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:11.942314   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:11.942446   75746 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:11.942630   75746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.171 22 <nil> <nil>}
	I1204 21:17:11.942643   75746 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 21:17:12.055721   75746 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733347032.018698016
	
	I1204 21:17:12.055741   75746 fix.go:216] guest clock: 1733347032.018698016
	I1204 21:17:12.055761   75746 fix.go:229] Guest: 2024-12-04 21:17:12.018698016 +0000 UTC Remote: 2024-12-04 21:17:11.938429419 +0000 UTC m=+249.319395751 (delta=80.268597ms)
	I1204 21:17:12.055787   75746 fix.go:200] guest clock delta is within tolerance: 80.268597ms
	I1204 21:17:12.055794   75746 start.go:83] releasing machines lock for "default-k8s-diff-port-439360", held for 20.468177017s
	I1204 21:17:12.055827   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:17:12.056125   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetIP
	I1204 21:17:12.058787   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:12.059284   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:12.059312   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:12.059488   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:17:12.060013   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:17:12.060202   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:17:12.060290   75746 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 21:17:12.060342   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:12.060462   75746 ssh_runner.go:195] Run: cat /version.json
	I1204 21:17:12.060489   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:12.063286   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:12.063423   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:12.063682   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:12.063746   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:12.063837   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:12.063938   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:12.064005   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:12.064065   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:12.064231   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:12.064305   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:12.064403   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:12.064563   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:12.064588   75746 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa Username:docker}
	I1204 21:17:12.064695   75746 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa Username:docker}
	I1204 21:17:12.144087   75746 ssh_runner.go:195] Run: systemctl --version
	I1204 21:17:12.168976   75746 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 21:17:12.317913   75746 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 21:17:12.324234   75746 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 21:17:12.324327   75746 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 21:17:12.344571   75746 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 21:17:12.344601   75746 start.go:495] detecting cgroup driver to use...
	I1204 21:17:12.344674   75746 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 21:17:12.361232   75746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 21:17:12.375069   75746 docker.go:217] disabling cri-docker service (if available) ...
	I1204 21:17:12.375139   75746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 21:17:12.388561   75746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 21:17:12.404338   75746 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 21:17:12.527885   75746 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 21:17:12.716924   75746 docker.go:233] disabling docker service ...
	I1204 21:17:12.717011   75746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 21:17:12.735556   75746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 21:17:12.751951   75746 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 21:17:12.872456   75746 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 21:17:12.997321   75746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 21:17:13.012576   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 21:17:13.032524   75746 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 21:17:13.032590   75746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:13.042551   75746 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 21:17:13.042612   75746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:13.052819   75746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:13.063234   75746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:13.074023   75746 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 21:17:13.084457   75746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:13.094614   75746 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:13.112649   75746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:13.122898   75746 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 21:17:13.132312   75746 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 21:17:13.132357   75746 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 21:17:13.145174   75746 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 21:17:13.154748   75746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:17:13.280272   75746 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 21:17:13.375481   75746 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 21:17:13.375579   75746 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 21:17:13.380388   75746 start.go:563] Will wait 60s for crictl version
	I1204 21:17:13.380450   75746 ssh_runner.go:195] Run: which crictl
	I1204 21:17:13.384263   75746 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 21:17:13.426552   75746 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 21:17:13.426644   75746 ssh_runner.go:195] Run: crio --version
	I1204 21:17:13.464906   75746 ssh_runner.go:195] Run: crio --version
	I1204 21:17:13.493254   75746 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
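Note: the CRI-O reconfiguration above can be reproduced by hand. The following is a condensed sketch of the same edits and restart the log performs; the drop-in path and values are taken directly from the log lines, not independently verified:

    # pause image and cgroup driver, as set by the sed steps logged above
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl daemon-reload && sudo systemctl restart crio
    # confirm the runtime answers over the CRI socket
    sudo /usr/bin/crictl version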
	I1204 21:17:11.488961   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:11.989692   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:12.489695   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:12.989533   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:13.489139   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:13.989580   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:14.488981   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:14.989089   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:15.489662   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:15.989301   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:13.069008   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:15.565897   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:12.080766   75012 main.go:141] libmachine: (no-preload-534766) Calling .Start
	I1204 21:17:12.080951   75012 main.go:141] libmachine: (no-preload-534766) Ensuring networks are active...
	I1204 21:17:12.081751   75012 main.go:141] libmachine: (no-preload-534766) Ensuring network default is active
	I1204 21:17:12.082112   75012 main.go:141] libmachine: (no-preload-534766) Ensuring network mk-no-preload-534766 is active
	I1204 21:17:12.082532   75012 main.go:141] libmachine: (no-preload-534766) Getting domain xml...
	I1204 21:17:12.083134   75012 main.go:141] libmachine: (no-preload-534766) Creating domain...
	I1204 21:17:13.416717   75012 main.go:141] libmachine: (no-preload-534766) Waiting to get IP...
	I1204 21:17:13.417831   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:13.418295   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:13.418381   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:13.418275   76934 retry.go:31] will retry after 213.310094ms: waiting for machine to come up
	I1204 21:17:13.632755   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:13.633250   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:13.633283   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:13.633181   76934 retry.go:31] will retry after 325.003683ms: waiting for machine to come up
	I1204 21:17:13.959863   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:13.960467   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:13.960503   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:13.960377   76934 retry.go:31] will retry after 392.851447ms: waiting for machine to come up
	I1204 21:17:14.355246   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:14.355720   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:14.355748   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:14.355681   76934 retry.go:31] will retry after 378.518603ms: waiting for machine to come up
	I1204 21:17:14.736283   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:14.737039   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:14.737105   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:14.737017   76934 retry.go:31] will retry after 536.132786ms: waiting for machine to come up
	I1204 21:17:15.274405   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:15.274929   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:15.274962   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:15.274891   76934 retry.go:31] will retry after 606.890197ms: waiting for machine to come up
	I1204 21:17:15.884088   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:15.884700   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:15.884745   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:15.884632   76934 retry.go:31] will retry after 1.088992333s: waiting for machine to come up
	I1204 21:17:16.975049   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:16.975514   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:16.975545   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:16.975458   76934 retry.go:31] will retry after 925.830658ms: waiting for machine to come up
	I1204 21:17:13.494527   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetIP
	I1204 21:17:13.498111   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:13.498524   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:13.498560   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:13.498792   75746 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1204 21:17:13.503083   75746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:17:13.518900   75746 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-439360 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:default-k8s-diff-port-439360 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.171 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 21:17:13.519043   75746 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 21:17:13.519134   75746 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:17:13.562529   75746 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1204 21:17:13.562643   75746 ssh_runner.go:195] Run: which lz4
	I1204 21:17:13.566970   75746 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1204 21:17:13.571398   75746 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1204 21:17:13.571447   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1204 21:17:14.863136   75746 crio.go:462] duration metric: took 1.296192361s to copy over tarball
	I1204 21:17:14.863225   75746 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1204 21:17:17.017949   75746 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.154693143s)
	I1204 21:17:17.017978   75746 crio.go:469] duration metric: took 2.154810491s to extract the tarball
	I1204 21:17:17.017988   75746 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1204 21:17:17.053935   75746 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:17:17.099773   75746 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 21:17:17.099800   75746 cache_images.go:84] Images are preloaded, skipping loading
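Note: the preload path above amounts to copying the tarball to the guest and unpacking it over /var. A minimal sketch using the exact extraction command from the log, followed by the image listing it re-runs:

    # extract the minikube preload tarball into the container runtime's storage
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    # confirm CRI-O now reports the preloaded images
    sudo crictl images --output json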
	I1204 21:17:17.099809   75746 kubeadm.go:934] updating node { 192.168.50.171 8444 v1.31.2 crio true true} ...
	I1204 21:17:17.099909   75746 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-439360 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.171
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-439360 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 21:17:17.099973   75746 ssh_runner.go:195] Run: crio config
	I1204 21:17:17.145449   75746 cni.go:84] Creating CNI manager for ""
	I1204 21:17:17.145481   75746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:17:17.145493   75746 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 21:17:17.145525   75746 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.171 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-439360 NodeName:default-k8s-diff-port-439360 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.171"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.171 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 21:17:17.145689   75746 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.171
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-439360"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.171"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.171"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1204 21:17:17.145761   75746 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 21:17:17.156960   75746 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 21:17:17.157034   75746 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 21:17:17.169101   75746 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1204 21:17:17.186548   75746 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 21:17:17.203582   75746 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
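Note: a kubeadm config of the shape dumped above can be checked before the restart path applies it. A sketch, assuming the `config validate` subcommand is available in this kubeadm version and using the path the log writes to:

    # sanity-check the generated config without touching the node
    sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
    # alternatively, a dry run of init exercises the same config end to end
    sudo /var/lib/minikube/binaries/v1.31.2/kubeadm init --dry-run --config /var/tmp/minikube/kubeadm.yaml.new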
	I1204 21:17:17.220406   75746 ssh_runner.go:195] Run: grep 192.168.50.171	control-plane.minikube.internal$ /etc/hosts
	I1204 21:17:17.224281   75746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.171	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:17:17.237759   75746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:17:17.368925   75746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:17:17.389017   75746 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360 for IP: 192.168.50.171
	I1204 21:17:17.389042   75746 certs.go:194] generating shared ca certs ...
	I1204 21:17:17.389062   75746 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:17:17.389231   75746 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 21:17:17.389302   75746 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 21:17:17.389314   75746 certs.go:256] generating profile certs ...
	I1204 21:17:17.389411   75746 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/client.key
	I1204 21:17:17.389507   75746 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/apiserver.key.b9e485ac
	I1204 21:17:17.389583   75746 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/proxy-client.key
	I1204 21:17:17.389747   75746 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 21:17:17.389784   75746 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 21:17:17.389793   75746 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 21:17:17.389820   75746 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 21:17:17.389842   75746 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 21:17:17.389862   75746 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 21:17:17.389899   75746 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:17:17.390549   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 21:17:17.427087   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 21:17:17.456331   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 21:17:17.481876   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 21:17:17.511173   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1204 21:17:17.535825   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1204 21:17:17.559475   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 21:17:17.585825   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 21:17:17.611495   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 21:17:17.634425   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 21:17:16.489912   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:16.989712   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:17.489508   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:17.989874   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:18.489589   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:18.989133   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:19.489001   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:19.989088   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:20.489170   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:20.989135   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:17.566756   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:20.064248   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:17.903583   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:17.904083   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:17.904130   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:17.904041   76934 retry.go:31] will retry after 1.281115457s: waiting for machine to come up
	I1204 21:17:19.187069   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:19.187625   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:19.187648   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:19.187594   76934 retry.go:31] will retry after 2.116897616s: waiting for machine to come up
	I1204 21:17:21.307136   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:21.307702   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:21.307738   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:21.307639   76934 retry.go:31] will retry after 1.769079667s: waiting for machine to come up
	I1204 21:17:17.658253   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 21:17:17.680554   75746 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 21:17:17.696563   75746 ssh_runner.go:195] Run: openssl version
	I1204 21:17:17.701997   75746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 21:17:17.711909   75746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 21:17:17.716111   75746 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 21:17:17.716163   75746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 21:17:17.721829   75746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 21:17:17.732808   75746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 21:17:17.742766   75746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:17:17.746881   75746 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:17:17.746939   75746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:17:17.752221   75746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 21:17:17.761915   75746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 21:17:17.771473   75746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 21:17:17.775476   75746 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 21:17:17.775527   75746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 21:17:17.780671   75746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
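Note: the hash-named symlinks created above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's subject-hash naming for trusted CA lookups. A minimal sketch of how one such link is derived, using a path from the log:

    # the link name is the certificate's subject hash plus ".0"
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"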
	I1204 21:17:17.790179   75746 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 21:17:17.794246   75746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 21:17:17.799753   75746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 21:17:17.805228   75746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 21:17:17.810634   75746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 21:17:17.815912   75746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 21:17:17.821125   75746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
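Note: each `-checkend 86400` run above exits non-zero if the certificate expires within the next 24 hours (86400 seconds). The same check can be scripted over the node's cert directory; a sketch, assuming the paths from the log:

    for c in apiserver-etcd-client apiserver-kubelet-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
      if sudo openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400; then
        echo "${c}: valid for at least 24h"
      else
        echo "${c}: expires within 24h (or unreadable)"
      fi
    done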
	I1204 21:17:17.826717   75746 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-439360 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.2 ClusterName:default-k8s-diff-port-439360 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.171 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:17:17.826802   75746 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 21:17:17.826852   75746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:17:17.863070   75746 cri.go:89] found id: ""
	I1204 21:17:17.863157   75746 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 21:17:17.872649   75746 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1204 21:17:17.872668   75746 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1204 21:17:17.872706   75746 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1204 21:17:17.881981   75746 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1204 21:17:17.883029   75746 kubeconfig.go:125] found "default-k8s-diff-port-439360" server: "https://192.168.50.171:8444"
	I1204 21:17:17.885369   75746 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1204 21:17:17.894730   75746 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.171
	I1204 21:17:17.894765   75746 kubeadm.go:1160] stopping kube-system containers ...
	I1204 21:17:17.894780   75746 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1204 21:17:17.894845   75746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:17:17.942493   75746 cri.go:89] found id: ""
	I1204 21:17:17.942588   75746 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1204 21:17:17.959606   75746 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:17:17.968768   75746 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:17:17.968793   75746 kubeadm.go:157] found existing configuration files:
	
	I1204 21:17:17.968850   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1204 21:17:17.977375   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:17:17.977437   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:17:17.986188   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1204 21:17:17.995409   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:17:17.995464   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:17:18.004396   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1204 21:17:18.012964   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:17:18.013033   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:17:18.021927   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1204 21:17:18.030158   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:17:18.030212   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:17:18.038704   75746 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:17:18.047518   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:18.157472   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:18.779212   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:18.992111   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:19.080195   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:19.185206   75746 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:17:19.185296   75746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:19.686192   75746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:20.186010   75746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:20.685422   75746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:21.185548   75746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:21.221082   75746 api_server.go:72] duration metric: took 2.035875276s to wait for apiserver process to appear ...
	I1204 21:17:21.221111   75746 api_server.go:88] waiting for apiserver healthz status ...
	I1204 21:17:21.221130   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:17:21.221582   75746 api_server.go:269] stopped: https://192.168.50.171:8444/healthz: Get "https://192.168.50.171:8444/healthz": dial tcp 192.168.50.171:8444: connect: connection refused
	I1204 21:17:21.722031   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:17:24.428658   75746 api_server.go:279] https://192.168.50.171:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1204 21:17:24.428710   75746 api_server.go:103] status: https://192.168.50.171:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1204 21:17:24.428730   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:17:24.469367   75746 api_server.go:279] https://192.168.50.171:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1204 21:17:24.469398   75746 api_server.go:103] status: https://192.168.50.171:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1204 21:17:24.721854   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:17:24.728276   75746 api_server.go:279] https://192.168.50.171:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:17:24.728306   75746 api_server.go:103] status: https://192.168.50.171:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:17:25.221658   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:17:25.226223   75746 api_server.go:279] https://192.168.50.171:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:17:25.226274   75746 api_server.go:103] status: https://192.168.50.171:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:17:25.722014   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:17:25.727726   75746 api_server.go:279] https://192.168.50.171:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:17:25.727764   75746 api_server.go:103] status: https://192.168.50.171:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:17:26.221331   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:17:26.226659   75746 api_server.go:279] https://192.168.50.171:8444/healthz returned 200:
	ok
	I1204 21:17:26.234549   75746 api_server.go:141] control plane version: v1.31.2
	I1204 21:17:26.234585   75746 api_server.go:131] duration metric: took 5.013466041s to wait for apiserver health ...
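	The healthz wait above follows the usual restart pattern: the probe is retried roughly every 500ms, and both 403 (anonymous user) and 500 (post-start hooks still settling) responses are treated as "not ready yet" until a plain 200/ok comes back. A rough, hand-run equivalent of that probe against this cluster (a sketch only, not the code in api_server.go; the address and port are taken from the log, and --insecure stands in for the certificate handling minikube does internally) would be:

	# poll the apiserver healthz endpoint until it reports ok
	until curl --insecure --silent --max-time 2 https://192.168.50.171:8444/healthz | grep -qx ok; do
	  echo "apiserver not healthy yet, retrying..."
	  sleep 0.5
	done
	echo "apiserver healthz returned ok"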
	I1204 21:17:26.234596   75746 cni.go:84] Creating CNI manager for ""
	I1204 21:17:26.234605   75746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:17:26.236522   75746 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 21:17:21.489414   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:21.989078   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:22.488990   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:22.989053   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:23.489867   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:23.989164   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:24.489512   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:24.989912   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:25.489849   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:25.988925   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:22.066101   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:24.067073   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:26.565954   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:23.077909   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:23.078294   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:23.078332   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:23.078234   76934 retry.go:31] will retry after 2.199950593s: waiting for machine to come up
	I1204 21:17:25.280397   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:25.280766   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:25.280794   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:25.280713   76934 retry.go:31] will retry after 3.443879968s: waiting for machine to come up
	I1204 21:17:26.237773   75746 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 21:17:26.260416   75746 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
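	The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. For orientation only, a bridge CNI conflist of the general shape minikube writes for the bridge + crio combination looks roughly like the following; the field values here are illustrative assumptions, not the bytes that were actually copied:

	# illustrative only - the real file is generated by minikube and copied over ssh
	cat <<'EOF'
	{
	  "cniVersion": "0.4.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF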
	I1204 21:17:26.287032   75746 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 21:17:26.301607   75746 system_pods.go:59] 8 kube-system pods found
	I1204 21:17:26.301658   75746 system_pods.go:61] "coredns-7c65d6cfc9-8bn89" [ff71708b-97a0-44fd-8cc4-26a36e93919a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1204 21:17:26.301671   75746 system_pods.go:61] "etcd-default-k8s-diff-port-439360" [38ae5f77-f57b-4024-a2ba-1e83e08c303b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1204 21:17:26.301682   75746 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-439360" [47616d96-a85b-47d8-a944-1da01cf7bef6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1204 21:17:26.301693   75746 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-439360" [766c13c3-3bcb-4775-80cf-608e9b207a10] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1204 21:17:26.301703   75746 system_pods.go:61] "kube-proxy-tn2xl" [8485df8b-b984-45c1-8efc-3e910028071a] Running
	I1204 21:17:26.301713   75746 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-439360" [654e74eb-878c-4680-8b68-13bb788a781e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1204 21:17:26.301725   75746 system_pods.go:61] "metrics-server-6867b74b74-lbx5p" [ca850081-0045-4637-b4ac-262ad00ba6d2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:17:26.301731   75746 system_pods.go:61] "storage-provisioner" [b2c9285c-35f2-43b4-8468-17ecef9fe8fc] Running
	I1204 21:17:26.301742   75746 system_pods.go:74] duration metric: took 14.680372ms to wait for pod list to return data ...
	I1204 21:17:26.301756   75746 node_conditions.go:102] verifying NodePressure condition ...
	I1204 21:17:26.305647   75746 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 21:17:26.305680   75746 node_conditions.go:123] node cpu capacity is 2
	I1204 21:17:26.305695   75746 node_conditions.go:105] duration metric: took 3.930691ms to run NodePressure ...
	I1204 21:17:26.305716   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:26.563972   75746 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1204 21:17:26.573253   75746 kubeadm.go:739] kubelet initialised
	I1204 21:17:26.573273   75746 kubeadm.go:740] duration metric: took 9.267719ms waiting for restarted kubelet to initialise ...
	I1204 21:17:26.573281   75746 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:17:26.577507   75746 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-8bn89" in "kube-system" namespace to be "Ready" ...
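	This "extra waiting" step polls each system-critical pod matched by the labels and component names listed above for a Ready condition. Checking the same thing by hand from outside the test harness would look roughly like the commands below (the context name is the profile name from the log; this is a sketch of an equivalent check, not the pod_ready.go implementation):

	# wait for the system-critical pods this step tracks, e.g. CoreDNS and kube-proxy
	kubectl --context default-k8s-diff-port-439360 -n kube-system \
	  wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m
	kubectl --context default-k8s-diff-port-439360 -n kube-system \
	  wait --for=condition=Ready pod -l k8s-app=kube-proxy --timeout=4m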
	I1204 21:17:26.489765   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:26.989037   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:27.489507   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:27.989848   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:28.489237   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:28.989067   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:29.488963   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:29.989855   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:30.489905   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:30.989109   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:29.065212   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:31.065889   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:28.726031   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:28.726400   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:28.726452   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:28.726364   76934 retry.go:31] will retry after 3.566067517s: waiting for machine to come up
	I1204 21:17:28.585182   75746 pod_ready.go:103] pod "coredns-7c65d6cfc9-8bn89" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:31.084886   75746 pod_ready.go:103] pod "coredns-7c65d6cfc9-8bn89" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:32.294584   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.295040   75012 main.go:141] libmachine: (no-preload-534766) Found IP for machine: 192.168.61.174
	I1204 21:17:32.295074   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has current primary IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.295086   75012 main.go:141] libmachine: (no-preload-534766) Reserving static IP address...
	I1204 21:17:32.295538   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "no-preload-534766", mac: "52:54:00:85:f1:d6", ip: "192.168.61.174"} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.295572   75012 main.go:141] libmachine: (no-preload-534766) Reserved static IP address: 192.168.61.174
	I1204 21:17:32.295590   75012 main.go:141] libmachine: (no-preload-534766) DBG | skip adding static IP to network mk-no-preload-534766 - found existing host DHCP lease matching {name: "no-preload-534766", mac: "52:54:00:85:f1:d6", ip: "192.168.61.174"}
	I1204 21:17:32.295607   75012 main.go:141] libmachine: (no-preload-534766) DBG | Getting to WaitForSSH function...
	I1204 21:17:32.295621   75012 main.go:141] libmachine: (no-preload-534766) Waiting for SSH to be available...
	I1204 21:17:32.297607   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.298000   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.298039   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.298174   75012 main.go:141] libmachine: (no-preload-534766) DBG | Using SSH client type: external
	I1204 21:17:32.298220   75012 main.go:141] libmachine: (no-preload-534766) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa (-rw-------)
	I1204 21:17:32.298259   75012 main.go:141] libmachine: (no-preload-534766) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 21:17:32.298278   75012 main.go:141] libmachine: (no-preload-534766) DBG | About to run SSH command:
	I1204 21:17:32.298286   75012 main.go:141] libmachine: (no-preload-534766) DBG | exit 0
	I1204 21:17:32.423157   75012 main.go:141] libmachine: (no-preload-534766) DBG | SSH cmd err, output: <nil>: 
	I1204 21:17:32.423564   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetConfigRaw
	I1204 21:17:32.424162   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetIP
	I1204 21:17:32.426685   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.427056   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.427078   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.427325   75012 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/config.json ...
	I1204 21:17:32.427589   75012 machine.go:93] provisionDockerMachine start ...
	I1204 21:17:32.427610   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:17:32.427837   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:32.430261   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.430551   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.430580   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.430724   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:32.430893   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:32.431039   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:32.431148   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:32.431327   75012 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:32.431548   75012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I1204 21:17:32.431564   75012 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 21:17:32.539672   75012 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 21:17:32.539721   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetMachineName
	I1204 21:17:32.539983   75012 buildroot.go:166] provisioning hostname "no-preload-534766"
	I1204 21:17:32.540014   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetMachineName
	I1204 21:17:32.540234   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:32.543046   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.543438   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.543488   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.543664   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:32.543853   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:32.544035   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:32.544158   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:32.544331   75012 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:32.544547   75012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I1204 21:17:32.544567   75012 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-534766 && echo "no-preload-534766" | sudo tee /etc/hostname
	I1204 21:17:32.665569   75012 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-534766
	
	I1204 21:17:32.665609   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:32.668482   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.668881   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.668908   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.669081   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:32.669297   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:32.669479   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:32.669634   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:32.669788   75012 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:32.669945   75012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I1204 21:17:32.669961   75012 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-534766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-534766/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-534766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 21:17:32.789462   75012 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 21:17:32.789510   75012 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 21:17:32.789535   75012 buildroot.go:174] setting up certificates
	I1204 21:17:32.789551   75012 provision.go:84] configureAuth start
	I1204 21:17:32.789568   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetMachineName
	I1204 21:17:32.789878   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetIP
	I1204 21:17:32.792564   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.792886   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.792919   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.793108   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:32.795197   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.795534   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.795569   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.795751   75012 provision.go:143] copyHostCerts
	I1204 21:17:32.795821   75012 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 21:17:32.795835   75012 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 21:17:32.795931   75012 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 21:17:32.796102   75012 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 21:17:32.796118   75012 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 21:17:32.796182   75012 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 21:17:32.796269   75012 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 21:17:32.796278   75012 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 21:17:32.796300   75012 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 21:17:32.796361   75012 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.no-preload-534766 san=[127.0.0.1 192.168.61.174 localhost minikube no-preload-534766]
	I1204 21:17:32.933050   75012 provision.go:177] copyRemoteCerts
	I1204 21:17:32.933117   75012 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 21:17:32.933146   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:32.936027   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.936384   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.936415   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.936604   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:32.936796   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:32.936952   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:32.937127   75012 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:17:33.022226   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 21:17:33.045693   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1204 21:17:33.069396   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 21:17:33.094926   75012 provision.go:87] duration metric: took 305.358907ms to configureAuth
	I1204 21:17:33.094960   75012 buildroot.go:189] setting minikube options for container-runtime
	I1204 21:17:33.095150   75012 config.go:182] Loaded profile config "no-preload-534766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:17:33.095239   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:33.098446   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.098990   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:33.099019   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.099254   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:33.099504   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:33.099655   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:33.099789   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:33.099921   75012 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:33.100074   75012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I1204 21:17:33.100091   75012 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 21:17:33.323107   75012 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 21:17:33.323144   75012 machine.go:96] duration metric: took 895.535234ms to provisionDockerMachine
	I1204 21:17:33.323159   75012 start.go:293] postStartSetup for "no-preload-534766" (driver="kvm2")
	I1204 21:17:33.323169   75012 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 21:17:33.323185   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:17:33.323531   75012 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 21:17:33.323564   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:33.326678   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.327086   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:33.327119   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.327429   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:33.327661   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:33.327827   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:33.327994   75012 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:17:33.411005   75012 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 21:17:33.415701   75012 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 21:17:33.415730   75012 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 21:17:33.415806   75012 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 21:17:33.415879   75012 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 21:17:33.415968   75012 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 21:17:33.425560   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:17:33.450288   75012 start.go:296] duration metric: took 127.116826ms for postStartSetup
	I1204 21:17:33.450330   75012 fix.go:56] duration metric: took 21.394334199s for fixHost
	I1204 21:17:33.450351   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:33.453067   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.453416   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:33.453457   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.453641   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:33.453860   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:33.454049   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:33.454228   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:33.454423   75012 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:33.454621   75012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I1204 21:17:33.454634   75012 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 21:17:33.568277   75012 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733347053.524303417
	
	I1204 21:17:33.568303   75012 fix.go:216] guest clock: 1733347053.524303417
	I1204 21:17:33.568314   75012 fix.go:229] Guest: 2024-12-04 21:17:33.524303417 +0000 UTC Remote: 2024-12-04 21:17:33.450335419 +0000 UTC m=+361.455227272 (delta=73.967998ms)
	I1204 21:17:33.568360   75012 fix.go:200] guest clock delta is within tolerance: 73.967998ms
	I1204 21:17:33.568372   75012 start.go:83] releasing machines lock for "no-preload-534766", held for 21.512415434s
	I1204 21:17:33.568406   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:17:33.568691   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetIP
	I1204 21:17:33.571152   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.571565   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:33.571594   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.571744   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:17:33.572271   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:17:33.572456   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:17:33.572549   75012 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 21:17:33.572593   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:33.572689   75012 ssh_runner.go:195] Run: cat /version.json
	I1204 21:17:33.572717   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:33.575346   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.575691   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.575743   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:33.575773   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.575888   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:33.576065   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:33.576144   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:33.576173   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.576219   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:33.576323   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:33.576391   75012 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:17:33.576501   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:33.576650   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:33.576791   75012 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:17:33.683451   75012 ssh_runner.go:195] Run: systemctl --version
	I1204 21:17:33.689041   75012 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 21:17:33.833862   75012 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 21:17:33.839637   75012 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 21:17:33.839717   75012 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 21:17:33.858207   75012 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 21:17:33.858232   75012 start.go:495] detecting cgroup driver to use...
	I1204 21:17:33.858306   75012 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 21:17:33.876794   75012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 21:17:33.891207   75012 docker.go:217] disabling cri-docker service (if available) ...
	I1204 21:17:33.891280   75012 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 21:17:33.906769   75012 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 21:17:33.926433   75012 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 21:17:34.050681   75012 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 21:17:34.229329   75012 docker.go:233] disabling docker service ...
	I1204 21:17:34.229403   75012 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 21:17:34.243833   75012 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 21:17:34.256619   75012 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 21:17:34.387148   75012 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 21:17:34.522221   75012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 21:17:34.535505   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 21:17:34.553348   75012 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 21:17:34.553423   75012 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:34.564532   75012 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 21:17:34.564595   75012 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:34.574752   75012 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:34.584434   75012 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:34.594161   75012 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 21:17:34.604306   75012 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:34.615504   75012 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:34.633185   75012 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:34.643936   75012 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 21:17:34.653047   75012 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 21:17:34.653122   75012 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 21:17:34.666172   75012 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 21:17:34.675093   75012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:17:34.805178   75012 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 21:17:34.889962   75012 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 21:17:34.890037   75012 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 21:17:34.894648   75012 start.go:563] Will wait 60s for crictl version
	I1204 21:17:34.894699   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:34.898103   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 21:17:34.937886   75012 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 21:17:34.937962   75012 ssh_runner.go:195] Run: crio --version
	I1204 21:17:34.964363   75012 ssh_runner.go:195] Run: crio --version
	I1204 21:17:34.993490   75012 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 21:17:31.489534   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:31.989033   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:32.489372   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:32.989005   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:33.489869   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:33.989236   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:34.489170   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:34.989059   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:35.489909   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:35.989870   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:33.066070   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:35.066291   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:34.994846   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetIP
	I1204 21:17:34.998235   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:34.998720   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:34.998753   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:34.999035   75012 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1204 21:17:35.003082   75012 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:17:35.015163   75012 kubeadm.go:883] updating cluster {Name:no-preload-534766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-534766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.174 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 21:17:35.015286   75012 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 21:17:35.015331   75012 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:17:35.049054   75012 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1204 21:17:35.049081   75012 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1204 21:17:35.049156   75012 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:17:35.049214   75012 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1204 21:17:35.049239   75012 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1204 21:17:35.049291   75012 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:17:35.049172   75012 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:17:35.049217   75012 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:17:35.049159   75012 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:17:35.049220   75012 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:17:35.050579   75012 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:17:35.050648   75012 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1204 21:17:35.050659   75012 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:17:35.050667   75012 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:17:35.050676   75012 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1204 21:17:35.050741   75012 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:17:35.050757   75012 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:17:35.050874   75012 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:17:35.203766   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:17:35.211645   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1204 21:17:35.220184   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:17:35.223055   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:17:35.227332   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:17:35.232234   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1204 21:17:35.242447   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:17:35.298624   75012 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1204 21:17:35.298688   75012 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:17:35.298744   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:35.319397   75012 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1204 21:17:35.319447   75012 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1204 21:17:35.319501   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:35.390893   75012 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1204 21:17:35.390915   75012 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1204 21:17:35.390947   75012 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:17:35.390948   75012 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:17:35.390956   75012 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1204 21:17:35.390979   75012 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:17:35.390999   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:35.391022   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:35.390999   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:35.484125   75012 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1204 21:17:35.484169   75012 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:17:35.484201   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:17:35.484217   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:35.484271   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1204 21:17:35.484305   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:17:35.484330   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:17:35.484396   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:17:35.591277   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:17:35.591397   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:17:35.591450   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:17:35.595733   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1204 21:17:35.595762   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:17:35.595916   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:17:35.723710   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:17:35.723734   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:17:35.723780   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:17:35.723829   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1204 21:17:35.723876   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:17:35.726724   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:17:35.825238   75012 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1204 21:17:35.825353   75012 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1204 21:17:35.852024   75012 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1204 21:17:35.852035   75012 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1204 21:17:35.852146   75012 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1204 21:17:35.852173   75012 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1204 21:17:35.853696   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:17:35.853769   75012 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1204 21:17:35.853821   75012 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1204 21:17:35.853832   75012 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1204 21:17:35.853856   75012 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1204 21:17:35.853865   75012 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1204 21:17:35.853776   75012 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1204 21:17:35.853945   75012 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1204 21:17:35.857231   75012 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1204 21:17:35.858662   75012 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1204 21:17:36.032100   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:17:33.087169   75746 pod_ready.go:93] pod "coredns-7c65d6cfc9-8bn89" in "kube-system" namespace has status "Ready":"True"
	I1204 21:17:33.087197   75746 pod_ready.go:82] duration metric: took 6.509664084s for pod "coredns-7c65d6cfc9-8bn89" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:33.087211   75746 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:33.093283   75746 pod_ready.go:93] pod "etcd-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:17:33.093303   75746 pod_ready.go:82] duration metric: took 6.085079ms for pod "etcd-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:33.093312   75746 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:33.600666   75746 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:17:33.600693   75746 pod_ready.go:82] duration metric: took 507.373672ms for pod "kube-apiserver-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:33.600709   75746 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:35.607575   75746 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:37.608228   75746 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:36.489267   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:36.988973   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:37.489585   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:37.989309   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:38.489371   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:38.989360   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:39.489789   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:39.988900   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:40.489286   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:40.989034   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:37.564796   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:39.566599   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:38.344308   75012 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.490341001s)
	I1204 21:17:38.344349   75012 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1204 21:17:38.344365   75012 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.490487312s)
	I1204 21:17:38.344390   75012 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1204 21:17:38.344412   75012 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1204 21:17:38.344420   75012 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.490542246s)
	I1204 21:17:38.344448   75012 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1204 21:17:38.344455   75012 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1204 21:17:38.344374   75012 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2: (2.490653029s)
	I1204 21:17:38.344496   75012 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1204 21:17:38.344525   75012 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.312392686s)
	I1204 21:17:38.344565   75012 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1204 21:17:38.344602   75012 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:17:38.344638   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:38.344575   75012 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1204 21:17:38.350960   75012 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1204 21:17:40.219155   75012 ssh_runner.go:235] Completed: which crictl: (1.874490212s)
	I1204 21:17:40.219189   75012 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.874713743s)
	I1204 21:17:40.219214   75012 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1204 21:17:40.219246   75012 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1204 21:17:40.219318   75012 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1204 21:17:40.219273   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:17:40.254321   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:17:41.684466   75012 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.465119385s)
	I1204 21:17:41.684505   75012 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1204 21:17:41.684528   75012 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1204 21:17:41.684528   75012 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.430174579s)
	I1204 21:17:41.684583   75012 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1204 21:17:41.684591   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:17:41.722891   75012 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1204 21:17:41.723015   75012 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1204 21:17:39.608290   75746 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:40.107708   75746 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:17:40.107734   75746 pod_ready.go:82] duration metric: took 6.507016831s for pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:40.107748   75746 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-tn2xl" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:40.112808   75746 pod_ready.go:93] pod "kube-proxy-tn2xl" in "kube-system" namespace has status "Ready":"True"
	I1204 21:17:40.112828   75746 pod_ready.go:82] duration metric: took 5.070603ms for pod "kube-proxy-tn2xl" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:40.112839   75746 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:40.117288   75746 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:17:40.117310   75746 pod_ready.go:82] duration metric: took 4.462772ms for pod "kube-scheduler-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:40.117322   75746 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:42.124203   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:41.489491   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:41.989889   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:42.489098   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:42.988954   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:43.489592   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:43.989849   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:44.489924   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:44.989734   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:45.489097   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:45.988947   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:42.065722   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:44.564691   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:46.565747   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:45.306832   75012 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.583796373s)
	I1204 21:17:45.306872   75012 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1204 21:17:45.306945   75012 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.622338759s)
	I1204 21:17:45.306971   75012 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1204 21:17:45.307000   75012 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1204 21:17:45.307064   75012 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1204 21:17:44.624419   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:47.123760   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:46.489924   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:46.989100   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:47.489931   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:47.988925   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:48.489244   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:48.989937   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:49.489048   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:49.989699   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:50.489518   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:50.989032   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:49.065268   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:51.565541   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:47.163771   75012 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.856684542s)
	I1204 21:17:47.163798   75012 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1204 21:17:47.163823   75012 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1204 21:17:47.163885   75012 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1204 21:17:49.222699   75012 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.058784634s)
	I1204 21:17:49.222741   75012 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1204 21:17:49.222773   75012 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1204 21:17:49.222826   75012 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1204 21:17:49.870242   75012 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1204 21:17:49.870292   75012 cache_images.go:123] Successfully loaded all cached images
	I1204 21:17:49.870302   75012 cache_images.go:92] duration metric: took 14.821207564s to LoadCachedImages
	I1204 21:17:49.870320   75012 kubeadm.go:934] updating node { 192.168.61.174 8443 v1.31.2 crio true true} ...
	I1204 21:17:49.870483   75012 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-534766 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-534766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 21:17:49.870571   75012 ssh_runner.go:195] Run: crio config
	I1204 21:17:49.925276   75012 cni.go:84] Creating CNI manager for ""
	I1204 21:17:49.925298   75012 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:17:49.925308   75012 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 21:17:49.925326   75012 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.174 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-534766 NodeName:no-preload-534766 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 21:17:49.925440   75012 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-534766"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.174"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.174"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1204 21:17:49.925505   75012 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 21:17:49.934691   75012 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 21:17:49.934766   75012 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 21:17:49.942998   75012 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1204 21:17:49.958605   75012 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 21:17:49.973770   75012 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1204 21:17:49.989037   75012 ssh_runner.go:195] Run: grep 192.168.61.174	control-plane.minikube.internal$ /etc/hosts
	I1204 21:17:49.992788   75012 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:17:50.004011   75012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:17:50.118056   75012 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:17:50.136689   75012 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766 for IP: 192.168.61.174
	I1204 21:17:50.136717   75012 certs.go:194] generating shared ca certs ...
	I1204 21:17:50.136739   75012 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:17:50.136937   75012 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 21:17:50.136992   75012 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 21:17:50.137007   75012 certs.go:256] generating profile certs ...
	I1204 21:17:50.137129   75012 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/client.key
	I1204 21:17:50.137230   75012 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/apiserver.key.dbe51058
	I1204 21:17:50.137275   75012 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/proxy-client.key
	I1204 21:17:50.137393   75012 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 21:17:50.137422   75012 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 21:17:50.137433   75012 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 21:17:50.137463   75012 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 21:17:50.137484   75012 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 21:17:50.137505   75012 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 21:17:50.137548   75012 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:17:50.138146   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 21:17:50.168457   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 21:17:50.203050   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 21:17:50.227957   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 21:17:50.255463   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1204 21:17:50.283905   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1204 21:17:50.306300   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 21:17:50.328965   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 21:17:50.352366   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 21:17:50.373857   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 21:17:50.396406   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 21:17:50.417969   75012 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 21:17:50.433588   75012 ssh_runner.go:195] Run: openssl version
	I1204 21:17:50.438874   75012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 21:17:50.448896   75012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 21:17:50.453227   75012 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 21:17:50.453301   75012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 21:17:50.458793   75012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 21:17:50.468569   75012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 21:17:50.478055   75012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:17:50.482258   75012 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:17:50.482310   75012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:17:50.487402   75012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 21:17:50.500597   75012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 21:17:50.511367   75012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 21:17:50.516355   75012 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 21:17:50.516415   75012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 21:17:50.522233   75012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 21:17:50.532163   75012 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 21:17:50.536644   75012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 21:17:50.542343   75012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 21:17:50.547915   75012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 21:17:50.553464   75012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 21:17:50.559223   75012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 21:17:50.566119   75012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1204 21:17:50.571988   75012 kubeadm.go:392] StartCluster: {Name:no-preload-534766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-534766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.174 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:17:50.572068   75012 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 21:17:50.572135   75012 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:17:50.608793   75012 cri.go:89] found id: ""
	I1204 21:17:50.608879   75012 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 21:17:50.620108   75012 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1204 21:17:50.620133   75012 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1204 21:17:50.620210   75012 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1204 21:17:50.629506   75012 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1204 21:17:50.630887   75012 kubeconfig.go:125] found "no-preload-534766" server: "https://192.168.61.174:8443"
	I1204 21:17:50.633122   75012 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1204 21:17:50.642414   75012 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.174
	I1204 21:17:50.642453   75012 kubeadm.go:1160] stopping kube-system containers ...
	I1204 21:17:50.642468   75012 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1204 21:17:50.642533   75012 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:17:50.681325   75012 cri.go:89] found id: ""
	I1204 21:17:50.681393   75012 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1204 21:17:50.699577   75012 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:17:50.709090   75012 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:17:50.709108   75012 kubeadm.go:157] found existing configuration files:
	
	I1204 21:17:50.709152   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:17:50.717901   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:17:50.717983   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:17:50.727175   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:17:50.735929   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:17:50.736002   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:17:50.744954   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:17:50.753257   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:17:50.753306   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:17:50.762163   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:17:50.770113   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:17:50.770163   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:17:50.778937   75012 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:17:50.787853   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:50.902775   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:51.481273   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:51.689126   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:51.770117   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:51.859903   75012 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:17:51.859993   75012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:49.623769   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:51.624431   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:51.489287   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:51.989952   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:52.489428   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:52.988991   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:53.489424   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:53.989785   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:54.488957   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:54.989777   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:55.489738   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:55.989144   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:52.360655   75012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:52.860583   75012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:52.877280   75012 api_server.go:72] duration metric: took 1.017376864s to wait for apiserver process to appear ...
	I1204 21:17:52.877337   75012 api_server.go:88] waiting for apiserver healthz status ...
	I1204 21:17:52.877365   75012 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I1204 21:17:55.649083   75012 api_server.go:279] https://192.168.61.174:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:17:55.649115   75012 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:17:55.649144   75012 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I1204 21:17:55.655316   75012 api_server.go:279] https://192.168.61.174:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:17:55.655347   75012 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:17:55.877569   75012 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I1204 21:17:55.882206   75012 api_server.go:279] https://192.168.61.174:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:17:55.882235   75012 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:17:56.377778   75012 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I1204 21:17:56.385077   75012 api_server.go:279] https://192.168.61.174:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:17:56.385106   75012 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:17:56.877526   75012 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I1204 21:17:56.882072   75012 api_server.go:279] https://192.168.61.174:8443/healthz returned 200:
	ok
	I1204 21:17:56.890468   75012 api_server.go:141] control plane version: v1.31.2
	I1204 21:17:56.890494   75012 api_server.go:131] duration metric: took 4.013149625s to wait for apiserver health ...
	I1204 21:17:56.890503   75012 cni.go:84] Creating CNI manager for ""
	I1204 21:17:56.890509   75012 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:17:56.892501   75012 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 21:17:53.565824   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:56.064759   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:56.893859   75012 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 21:17:56.903947   75012 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1204 21:17:56.946638   75012 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 21:17:56.965137   75012 system_pods.go:59] 8 kube-system pods found
	I1204 21:17:56.965182   75012 system_pods.go:61] "coredns-7c65d6cfc9-kz2h6" [cf1cadfd-b230-48e0-8b3a-e082fed911a8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1204 21:17:56.965192   75012 system_pods.go:61] "etcd-no-preload-534766" [4150ee73-7ae8-40c0-a259-87375d6e809c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1204 21:17:56.965206   75012 system_pods.go:61] "kube-apiserver-no-preload-534766" [28c85f04-e634-48d2-a996-a1cb3ffb18cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1204 21:17:56.965215   75012 system_pods.go:61] "kube-controller-manager-no-preload-534766" [237872b9-1c2a-4c3e-b26a-d2581d08c936] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1204 21:17:56.965223   75012 system_pods.go:61] "kube-proxy-zb946" [871adaff-d1f6-4f8a-a7db-ec3f861bd9e3] Running
	I1204 21:17:56.965232   75012 system_pods.go:61] "kube-scheduler-no-preload-534766" [b00444c4-8f8e-4c76-a74f-9a57c91cb10d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1204 21:17:56.965240   75012 system_pods.go:61] "metrics-server-6867b74b74-wl8gw" [d7942614-93b1-4707-b471-a0dd38c96c54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:17:56.965246   75012 system_pods.go:61] "storage-provisioner" [062f6e56-6b2d-4ac4-acfd-881ff5171396] Running
	I1204 21:17:56.965254   75012 system_pods.go:74] duration metric: took 18.584748ms to wait for pod list to return data ...
	I1204 21:17:56.965269   75012 node_conditions.go:102] verifying NodePressure condition ...
	I1204 21:17:56.969187   75012 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 21:17:56.969221   75012 node_conditions.go:123] node cpu capacity is 2
	I1204 21:17:56.969232   75012 node_conditions.go:105] duration metric: took 3.958803ms to run NodePressure ...
	I1204 21:17:56.969248   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:53.625414   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:56.123857   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:56.489461   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:56.988952   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:57.489626   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:57.989474   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:58.489775   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:58.989218   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:59.489030   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:59.989163   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:00.489738   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:00.989048   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:00.989130   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:01.025049   75464 cri.go:89] found id: ""
	I1204 21:18:01.025100   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.025112   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:01.025124   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:01.025188   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:01.056420   75464 cri.go:89] found id: ""
	I1204 21:18:01.056444   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.056451   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:01.056456   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:01.056512   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:01.090847   75464 cri.go:89] found id: ""
	I1204 21:18:01.090872   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.090882   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:01.090889   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:01.090948   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:01.125984   75464 cri.go:89] found id: ""
	I1204 21:18:01.126013   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.126022   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:01.126030   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:01.126088   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:01.160828   75464 cri.go:89] found id: ""
	I1204 21:18:01.160856   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.160866   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:01.160873   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:01.160930   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:01.192601   75464 cri.go:89] found id: ""
	I1204 21:18:01.192629   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.192641   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:01.192649   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:01.192712   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:01.223093   75464 cri.go:89] found id: ""
	I1204 21:18:01.223119   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.223129   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:01.223136   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:01.223199   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:01.252668   75464 cri.go:89] found id: ""
	I1204 21:18:01.252692   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.252702   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:01.252713   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:01.252733   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 21:17:58.064895   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:00.065648   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:57.242821   75012 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1204 21:17:57.246805   75012 kubeadm.go:739] kubelet initialised
	I1204 21:17:57.246823   75012 kubeadm.go:740] duration metric: took 3.979496ms waiting for restarted kubelet to initialise ...
	I1204 21:17:57.246831   75012 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:17:57.250966   75012 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:57.254870   75012 pod_ready.go:98] node "no-preload-534766" hosting pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.254889   75012 pod_ready.go:82] duration metric: took 3.903445ms for pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace to be "Ready" ...
	E1204 21:17:57.254897   75012 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-534766" hosting pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.254903   75012 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:57.258465   75012 pod_ready.go:98] node "no-preload-534766" hosting pod "etcd-no-preload-534766" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.258484   75012 pod_ready.go:82] duration metric: took 3.574981ms for pod "etcd-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	E1204 21:17:57.258497   75012 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-534766" hosting pod "etcd-no-preload-534766" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.258503   75012 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:57.261881   75012 pod_ready.go:98] node "no-preload-534766" hosting pod "kube-apiserver-no-preload-534766" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.261896   75012 pod_ready.go:82] duration metric: took 3.388572ms for pod "kube-apiserver-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	E1204 21:17:57.261903   75012 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-534766" hosting pod "kube-apiserver-no-preload-534766" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.261908   75012 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:57.349579   75012 pod_ready.go:98] node "no-preload-534766" hosting pod "kube-controller-manager-no-preload-534766" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.349603   75012 pod_ready.go:82] duration metric: took 87.687706ms for pod "kube-controller-manager-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	E1204 21:17:57.349611   75012 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-534766" hosting pod "kube-controller-manager-no-preload-534766" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.349617   75012 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-zb946" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:57.751064   75012 pod_ready.go:93] pod "kube-proxy-zb946" in "kube-system" namespace has status "Ready":"True"
	I1204 21:17:57.751088   75012 pod_ready.go:82] duration metric: took 401.46314ms for pod "kube-proxy-zb946" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:57.751099   75012 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:59.756578   75012 pod_ready.go:103] pod "kube-scheduler-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:01.759056   75012 pod_ready.go:103] pod "kube-scheduler-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:58.125703   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:00.622314   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:02.624045   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	W1204 21:18:01.365301   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:01.365334   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:01.365348   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:01.440474   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:01.440503   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:01.475783   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:01.475815   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:01.525762   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:01.525791   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:04.038867   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:04.050789   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:04.050856   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:04.083319   75464 cri.go:89] found id: ""
	I1204 21:18:04.083345   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.083354   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:04.083360   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:04.083442   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:04.119555   75464 cri.go:89] found id: ""
	I1204 21:18:04.119584   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.119595   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:04.119602   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:04.119661   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:04.152499   75464 cri.go:89] found id: ""
	I1204 21:18:04.152529   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.152538   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:04.152544   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:04.152592   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:04.184678   75464 cri.go:89] found id: ""
	I1204 21:18:04.184705   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.184716   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:04.184724   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:04.184784   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:04.220006   75464 cri.go:89] found id: ""
	I1204 21:18:04.220038   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.220050   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:04.220058   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:04.220121   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:04.254841   75464 cri.go:89] found id: ""
	I1204 21:18:04.254871   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.254880   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:04.254887   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:04.254954   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:04.289126   75464 cri.go:89] found id: ""
	I1204 21:18:04.289163   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.289175   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:04.289189   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:04.289255   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:04.323036   75464 cri.go:89] found id: ""
	I1204 21:18:04.323067   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.323077   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:04.323089   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:04.323103   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:04.371548   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:04.371585   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:04.384651   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:04.384681   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:04.452247   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:04.452273   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:04.452288   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:04.527924   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:04.527965   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:02.564676   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:04.566721   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:04.260269   75012 pod_ready.go:103] pod "kube-scheduler-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:06.757334   75012 pod_ready.go:103] pod "kube-scheduler-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:05.123833   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:07.124130   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:07.100780   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:07.113549   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:07.113617   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:07.150930   75464 cri.go:89] found id: ""
	I1204 21:18:07.150964   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.150976   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:07.150984   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:07.151046   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:07.185223   75464 cri.go:89] found id: ""
	I1204 21:18:07.185254   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.185264   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:07.185271   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:07.185332   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:07.222423   75464 cri.go:89] found id: ""
	I1204 21:18:07.222449   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.222458   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:07.222463   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:07.222526   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:07.258926   75464 cri.go:89] found id: ""
	I1204 21:18:07.258952   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.258960   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:07.258966   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:07.259022   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:07.292424   75464 cri.go:89] found id: ""
	I1204 21:18:07.292467   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.292478   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:07.292505   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:07.292566   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:07.323354   75464 cri.go:89] found id: ""
	I1204 21:18:07.323397   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.323409   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:07.323416   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:07.323462   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:07.352085   75464 cri.go:89] found id: ""
	I1204 21:18:07.352106   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.352114   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:07.352121   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:07.352177   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:07.383335   75464 cri.go:89] found id: ""
	I1204 21:18:07.383364   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.383386   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:07.383397   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:07.383410   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:07.469409   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:07.469440   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:07.508442   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:07.508468   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:07.555103   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:07.555133   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:07.568938   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:07.568965   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:07.632515   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:10.133153   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:10.146482   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:10.146542   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:10.178660   75464 cri.go:89] found id: ""
	I1204 21:18:10.178694   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.178706   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:10.178714   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:10.178768   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:10.207815   75464 cri.go:89] found id: ""
	I1204 21:18:10.207836   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.207843   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:10.207849   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:10.207893   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:10.246253   75464 cri.go:89] found id: ""
	I1204 21:18:10.246283   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.246300   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:10.246307   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:10.246371   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:10.296820   75464 cri.go:89] found id: ""
	I1204 21:18:10.296862   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.296873   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:10.296881   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:10.296941   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:10.341855   75464 cri.go:89] found id: ""
	I1204 21:18:10.341885   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.341896   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:10.341904   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:10.341977   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:10.370283   75464 cri.go:89] found id: ""
	I1204 21:18:10.370311   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.370319   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:10.370324   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:10.370382   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:10.401149   75464 cri.go:89] found id: ""
	I1204 21:18:10.401177   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.401187   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:10.401195   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:10.401249   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:10.436026   75464 cri.go:89] found id: ""
	I1204 21:18:10.436058   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.436068   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:10.436082   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:10.436096   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:10.488499   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:10.488534   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:10.502316   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:10.502345   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:10.577694   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:10.577727   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:10.577754   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:10.657801   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:10.657835   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:07.064613   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:09.564473   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:09.257032   75012 pod_ready.go:103] pod "kube-scheduler-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:11.758214   75012 pod_ready.go:93] pod "kube-scheduler-no-preload-534766" in "kube-system" namespace has status "Ready":"True"
	I1204 21:18:11.758241   75012 pod_ready.go:82] duration metric: took 14.007134999s for pod "kube-scheduler-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:18:11.758255   75012 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace to be "Ready" ...
	I1204 21:18:09.623451   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:11.624433   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:13.195044   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:13.208486   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:13.208540   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:13.250608   75464 cri.go:89] found id: ""
	I1204 21:18:13.250632   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.250643   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:13.250650   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:13.250710   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:13.280897   75464 cri.go:89] found id: ""
	I1204 21:18:13.280922   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.280933   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:13.280940   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:13.281047   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:13.311664   75464 cri.go:89] found id: ""
	I1204 21:18:13.311686   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.311696   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:13.311702   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:13.311759   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:13.341158   75464 cri.go:89] found id: ""
	I1204 21:18:13.341187   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.341199   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:13.341206   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:13.341261   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:13.371887   75464 cri.go:89] found id: ""
	I1204 21:18:13.371908   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.371915   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:13.371922   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:13.371968   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:13.403036   75464 cri.go:89] found id: ""
	I1204 21:18:13.403064   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.403072   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:13.403077   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:13.403123   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:13.440657   75464 cri.go:89] found id: ""
	I1204 21:18:13.440682   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.440689   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:13.440694   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:13.440738   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:13.478384   75464 cri.go:89] found id: ""
	I1204 21:18:13.478413   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.478421   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:13.478430   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:13.478442   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:13.533364   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:13.533405   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:13.546299   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:13.546338   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:13.617067   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:13.617092   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:13.617108   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:13.697323   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:13.697355   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:16.235494   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:16.248551   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:16.248615   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:16.286875   75464 cri.go:89] found id: ""
	I1204 21:18:16.286904   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.286915   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:16.286922   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:16.286986   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:12.064198   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:14.565965   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:13.764062   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:15.764749   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:14.122381   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:16.123985   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:16.325441   75464 cri.go:89] found id: ""
	I1204 21:18:16.325469   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.325481   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:16.325486   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:16.325544   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:16.361896   75464 cri.go:89] found id: ""
	I1204 21:18:16.361919   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.361926   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:16.361932   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:16.361994   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:16.394290   75464 cri.go:89] found id: ""
	I1204 21:18:16.394315   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.394322   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:16.394328   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:16.394377   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:16.429685   75464 cri.go:89] found id: ""
	I1204 21:18:16.429713   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.429724   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:16.429731   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:16.429807   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:16.459942   75464 cri.go:89] found id: ""
	I1204 21:18:16.459982   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.459993   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:16.460000   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:16.460065   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:16.488957   75464 cri.go:89] found id: ""
	I1204 21:18:16.488982   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.488992   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:16.489005   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:16.489060   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:16.518311   75464 cri.go:89] found id: ""
	I1204 21:18:16.518346   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.518357   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:16.518369   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:16.518382   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:16.569753   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:16.569784   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:16.583689   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:16.583721   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:16.650086   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:16.650107   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:16.650120   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:16.732000   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:16.732046   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
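	[note] The block above is one pass of minikube's recurring diagnostic sweep on this node: it probes for a kube-apiserver process, enumerates CRI containers for each control-plane component (finding none), then collects kubelet, dmesg, "describe nodes", CRI-O, and container-status output before retrying a few seconds later. As a rough guide only, the same checks can be reproduced by hand on the node; the commands and paths below are the ones the log itself runs:
	  # list any kube-apiserver containers, running or exited (same call as the cri.go step)
	  sudo crictl ps -a --quiet --name=kube-apiserver
	  # tail the units minikube gathers
	  sudo journalctl -u kubelet -n 400
	  sudo journalctl -u crio -n 400
	  # the "describe nodes" step that keeps failing with connection refused
	  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig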
	I1204 21:18:19.270288   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:19.283231   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:19.283322   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:19.320680   75464 cri.go:89] found id: ""
	I1204 21:18:19.320712   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.320724   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:19.320732   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:19.320799   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:19.358318   75464 cri.go:89] found id: ""
	I1204 21:18:19.358352   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.358363   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:19.358370   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:19.358431   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:19.391181   75464 cri.go:89] found id: ""
	I1204 21:18:19.391208   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.391218   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:19.391224   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:19.391285   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:19.422319   75464 cri.go:89] found id: ""
	I1204 21:18:19.422345   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.422355   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:19.422362   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:19.422422   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:19.452909   75464 cri.go:89] found id: ""
	I1204 21:18:19.452941   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.452952   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:19.452960   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:19.453017   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:19.483548   75464 cri.go:89] found id: ""
	I1204 21:18:19.483582   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.483592   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:19.483600   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:19.483666   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:19.518776   75464 cri.go:89] found id: ""
	I1204 21:18:19.518810   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.518821   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:19.518828   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:19.518889   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:19.552455   75464 cri.go:89] found id: ""
	I1204 21:18:19.552487   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.552500   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:19.552513   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:19.552527   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:19.567348   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:19.567397   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:19.640782   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:19.640803   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:19.640815   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:19.721369   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:19.721400   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:19.765558   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:19.765590   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:17.065011   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:19.065236   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:21.565950   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:17.764887   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:19.766264   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:18.125223   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:20.623183   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:22.623901   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:22.315311   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:22.327974   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:22.328053   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:22.361960   75464 cri.go:89] found id: ""
	I1204 21:18:22.361984   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.361995   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:22.362002   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:22.362056   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:22.393481   75464 cri.go:89] found id: ""
	I1204 21:18:22.393506   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.393514   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:22.393520   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:22.393570   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:22.424233   75464 cri.go:89] found id: ""
	I1204 21:18:22.424261   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.424273   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:22.424280   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:22.424335   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:22.454307   75464 cri.go:89] found id: ""
	I1204 21:18:22.454335   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.454346   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:22.454354   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:22.454405   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:22.485880   75464 cri.go:89] found id: ""
	I1204 21:18:22.485905   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.485913   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:22.485918   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:22.485971   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:22.522382   75464 cri.go:89] found id: ""
	I1204 21:18:22.522408   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.522416   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:22.522421   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:22.522475   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:22.555179   75464 cri.go:89] found id: ""
	I1204 21:18:22.555202   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.555210   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:22.555215   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:22.555266   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:22.588587   75464 cri.go:89] found id: ""
	I1204 21:18:22.588608   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.588615   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:22.588622   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:22.588632   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:22.640369   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:22.640393   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:22.652322   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:22.652342   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:22.716150   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:22.716175   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:22.716195   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:22.792723   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:22.792749   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:25.329963   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:25.342514   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:25.342563   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:25.374518   75464 cri.go:89] found id: ""
	I1204 21:18:25.374543   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.374555   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:25.374562   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:25.374620   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:25.405479   75464 cri.go:89] found id: ""
	I1204 21:18:25.405520   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.405531   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:25.405538   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:25.405601   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:25.436844   75464 cri.go:89] found id: ""
	I1204 21:18:25.436867   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.436877   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:25.436884   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:25.436943   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:25.468887   75464 cri.go:89] found id: ""
	I1204 21:18:25.468910   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.468917   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:25.468923   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:25.468977   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:25.504326   75464 cri.go:89] found id: ""
	I1204 21:18:25.504348   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.504355   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:25.504361   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:25.504410   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:25.542531   75464 cri.go:89] found id: ""
	I1204 21:18:25.542552   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.542560   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:25.542566   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:25.542626   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:25.576293   75464 cri.go:89] found id: ""
	I1204 21:18:25.576316   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.576330   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:25.576338   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:25.576389   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:25.609662   75464 cri.go:89] found id: ""
	I1204 21:18:25.609692   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.609700   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:25.609708   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:25.609724   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:25.665411   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:25.665446   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:25.680149   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:25.680183   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:25.751100   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:25.751123   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:25.751140   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:25.838913   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:25.838952   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:24.065487   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:26.565568   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:22.264581   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:24.268000   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:26.764294   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:25.123981   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:27.125094   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:28.379209   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:28.392708   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:28.392771   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:28.426519   75464 cri.go:89] found id: ""
	I1204 21:18:28.426547   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.426555   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:28.426561   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:28.426608   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:28.459648   75464 cri.go:89] found id: ""
	I1204 21:18:28.459678   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.459689   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:28.459696   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:28.459757   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:28.489982   75464 cri.go:89] found id: ""
	I1204 21:18:28.490010   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.490021   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:28.490029   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:28.490101   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:28.525203   75464 cri.go:89] found id: ""
	I1204 21:18:28.525228   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.525235   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:28.525240   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:28.525285   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:28.554808   75464 cri.go:89] found id: ""
	I1204 21:18:28.554836   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.554845   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:28.554850   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:28.554911   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:28.586406   75464 cri.go:89] found id: ""
	I1204 21:18:28.586427   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.586434   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:28.586441   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:28.586484   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:28.622419   75464 cri.go:89] found id: ""
	I1204 21:18:28.622444   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.622455   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:28.622462   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:28.622520   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:28.651604   75464 cri.go:89] found id: ""
	I1204 21:18:28.651625   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.651632   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:28.651639   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:28.651654   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:28.714430   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:28.714458   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:28.714473   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:28.791444   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:28.791472   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:28.827808   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:28.827831   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:28.875308   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:28.875336   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:28.566277   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:30.566465   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:28.765108   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:30.765282   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:29.624139   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:31.624944   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:31.388578   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:31.401539   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:31.401598   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:31.443462   75464 cri.go:89] found id: ""
	I1204 21:18:31.443496   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.443504   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:31.443509   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:31.443557   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:31.482522   75464 cri.go:89] found id: ""
	I1204 21:18:31.482548   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.482559   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:31.482568   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:31.482623   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:31.520579   75464 cri.go:89] found id: ""
	I1204 21:18:31.520609   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.520618   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:31.520624   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:31.520684   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:31.559637   75464 cri.go:89] found id: ""
	I1204 21:18:31.559683   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.559692   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:31.559699   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:31.559761   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:31.592633   75464 cri.go:89] found id: ""
	I1204 21:18:31.592665   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.592677   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:31.592685   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:31.592748   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:31.627002   75464 cri.go:89] found id: ""
	I1204 21:18:31.627022   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.627029   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:31.627035   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:31.627083   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:31.663333   75464 cri.go:89] found id: ""
	I1204 21:18:31.663380   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.663392   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:31.663400   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:31.663465   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:31.697813   75464 cri.go:89] found id: ""
	I1204 21:18:31.697848   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.697860   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:31.697869   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:31.697882   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:31.747666   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:31.747701   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:31.761371   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:31.761402   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:31.831098   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:31.831123   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:31.831143   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:31.912161   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:31.912199   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:34.450322   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:34.463442   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:34.463503   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:34.497333   75464 cri.go:89] found id: ""
	I1204 21:18:34.497363   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.497371   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:34.497377   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:34.497449   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:34.531057   75464 cri.go:89] found id: ""
	I1204 21:18:34.531093   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.531105   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:34.531113   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:34.531180   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:34.566899   75464 cri.go:89] found id: ""
	I1204 21:18:34.566926   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.566934   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:34.566940   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:34.566989   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:34.600393   75464 cri.go:89] found id: ""
	I1204 21:18:34.600422   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.600430   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:34.600436   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:34.600503   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:34.636027   75464 cri.go:89] found id: ""
	I1204 21:18:34.636060   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.636072   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:34.636082   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:34.636159   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:34.670624   75464 cri.go:89] found id: ""
	I1204 21:18:34.670650   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.670658   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:34.670666   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:34.670727   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:34.702209   75464 cri.go:89] found id: ""
	I1204 21:18:34.702241   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.702253   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:34.702261   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:34.702330   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:34.733135   75464 cri.go:89] found id: ""
	I1204 21:18:34.733156   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.733174   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:34.733191   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:34.733207   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:34.768969   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:34.768993   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:34.816493   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:34.816531   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:34.829450   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:34.829476   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:34.897968   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:34.898000   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:34.898018   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:32.566614   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:35.064944   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:33.264871   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:35.265285   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:33.625223   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:36.123006   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:37.477937   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:37.491778   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:37.491856   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:37.529962   75464 cri.go:89] found id: ""
	I1204 21:18:37.529995   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.530005   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:37.530013   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:37.530081   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:37.564769   75464 cri.go:89] found id: ""
	I1204 21:18:37.564794   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.564805   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:37.564813   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:37.564879   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:37.601680   75464 cri.go:89] found id: ""
	I1204 21:18:37.601708   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.601720   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:37.601726   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:37.601796   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:37.637221   75464 cri.go:89] found id: ""
	I1204 21:18:37.637247   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.637255   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:37.637261   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:37.637326   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:37.673103   75464 cri.go:89] found id: ""
	I1204 21:18:37.673127   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.673135   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:37.673140   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:37.673200   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:37.710108   75464 cri.go:89] found id: ""
	I1204 21:18:37.710134   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.710147   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:37.710154   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:37.710216   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:37.741506   75464 cri.go:89] found id: ""
	I1204 21:18:37.741530   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.741538   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:37.741544   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:37.741596   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:37.775320   75464 cri.go:89] found id: ""
	I1204 21:18:37.775343   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.775350   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:37.775358   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:37.775389   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:37.839591   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:37.839610   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:37.839633   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:37.915174   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:37.915216   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:37.958900   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:37.958930   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:38.010383   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:38.010418   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:40.525306   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:40.537648   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:40.537706   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:40.573932   75464 cri.go:89] found id: ""
	I1204 21:18:40.573962   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.573973   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:40.573980   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:40.574041   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:40.603917   75464 cri.go:89] found id: ""
	I1204 21:18:40.603943   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.603952   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:40.603961   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:40.604018   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:40.636601   75464 cri.go:89] found id: ""
	I1204 21:18:40.636630   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.636641   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:40.636649   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:40.636710   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:40.673040   75464 cri.go:89] found id: ""
	I1204 21:18:40.673073   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.673085   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:40.673093   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:40.673158   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:40.705330   75464 cri.go:89] found id: ""
	I1204 21:18:40.705357   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.705364   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:40.705371   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:40.705434   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:40.738099   75464 cri.go:89] found id: ""
	I1204 21:18:40.738123   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.738130   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:40.738137   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:40.738184   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:40.770558   75464 cri.go:89] found id: ""
	I1204 21:18:40.770583   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.770590   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:40.770596   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:40.770656   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:40.803461   75464 cri.go:89] found id: ""
	I1204 21:18:40.803489   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.803501   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:40.803512   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:40.803529   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:40.852684   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:40.852726   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:40.865768   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:40.865795   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:40.932542   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:40.932569   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:40.932587   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:41.013378   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:41.013419   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:37.065100   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:39.565212   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:41.566163   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:37.765520   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:39.768005   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:38.623095   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:40.623359   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:43.552845   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:43.567081   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:43.567149   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:43.600562   75464 cri.go:89] found id: ""
	I1204 21:18:43.600595   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.600605   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:43.600618   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:43.600683   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:43.638922   75464 cri.go:89] found id: ""
	I1204 21:18:43.638955   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.638965   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:43.638972   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:43.639037   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:43.674473   75464 cri.go:89] found id: ""
	I1204 21:18:43.674501   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.674509   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:43.674516   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:43.674569   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:43.721312   75464 cri.go:89] found id: ""
	I1204 21:18:43.721339   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.721350   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:43.721357   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:43.721420   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:43.760113   75464 cri.go:89] found id: ""
	I1204 21:18:43.760150   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.760161   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:43.760169   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:43.760233   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:43.794383   75464 cri.go:89] found id: ""
	I1204 21:18:43.794410   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.794418   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:43.794423   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:43.794475   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:43.826611   75464 cri.go:89] found id: ""
	I1204 21:18:43.826646   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.826657   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:43.826666   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:43.826728   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:43.859459   75464 cri.go:89] found id: ""
	I1204 21:18:43.859489   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.859496   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:43.859505   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:43.859518   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:43.871740   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:43.871762   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:43.940838   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:43.940862   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:43.940874   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:44.018931   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:44.018967   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:44.054754   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:44.054786   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:44.066258   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:46.565764   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:42.264400   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:44.765338   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:43.124128   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:45.624394   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:46.614407   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:46.627953   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:46.628009   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:46.662223   75464 cri.go:89] found id: ""
	I1204 21:18:46.662254   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.662263   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:46.662268   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:46.662333   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:46.695931   75464 cri.go:89] found id: ""
	I1204 21:18:46.695955   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.695963   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:46.695969   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:46.696014   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:46.728731   75464 cri.go:89] found id: ""
	I1204 21:18:46.728761   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.728773   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:46.728780   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:46.728841   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:46.762466   75464 cri.go:89] found id: ""
	I1204 21:18:46.762491   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.762499   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:46.762544   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:46.762613   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:46.797253   75464 cri.go:89] found id: ""
	I1204 21:18:46.797279   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.797288   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:46.797295   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:46.797357   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:46.833757   75464 cri.go:89] found id: ""
	I1204 21:18:46.833783   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.833790   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:46.833797   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:46.833845   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:46.865105   75464 cri.go:89] found id: ""
	I1204 21:18:46.865135   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.865147   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:46.865154   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:46.865212   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:46.896358   75464 cri.go:89] found id: ""
	I1204 21:18:46.896385   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.896397   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:46.896408   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:46.896426   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:46.932507   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:46.932536   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:46.985490   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:46.985517   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:46.999509   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:46.999538   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:47.075096   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:47.075119   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:47.075133   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:49.654450   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:49.667708   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:49.667761   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:49.699864   75464 cri.go:89] found id: ""
	I1204 21:18:49.699885   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.699894   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:49.699902   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:49.699954   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:49.732972   75464 cri.go:89] found id: ""
	I1204 21:18:49.732996   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.733004   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:49.733009   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:49.733055   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:49.765103   75464 cri.go:89] found id: ""
	I1204 21:18:49.765124   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.765135   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:49.765142   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:49.765208   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:49.796309   75464 cri.go:89] found id: ""
	I1204 21:18:49.796330   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.796337   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:49.796343   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:49.796401   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:49.826818   75464 cri.go:89] found id: ""
	I1204 21:18:49.826844   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.826855   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:49.826863   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:49.826921   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:49.879437   75464 cri.go:89] found id: ""
	I1204 21:18:49.879463   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.879471   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:49.879477   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:49.879525   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:49.910837   75464 cri.go:89] found id: ""
	I1204 21:18:49.910862   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.910872   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:49.910878   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:49.910937   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:49.941894   75464 cri.go:89] found id: ""
	I1204 21:18:49.941918   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.941927   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:49.941937   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:49.941950   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:49.994300   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:49.994339   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:50.008171   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:50.008207   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:50.083770   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:50.083799   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:50.083815   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:50.161338   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:50.161371   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:49.064407   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:51.066565   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:47.264889   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:49.764731   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:48.123660   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:50.125339   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:52.624437   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:52.699023   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:52.711524   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:52.711599   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:52.744668   75464 cri.go:89] found id: ""
	I1204 21:18:52.744703   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.744715   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:52.744724   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:52.744794   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:52.780504   75464 cri.go:89] found id: ""
	I1204 21:18:52.780529   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.780537   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:52.780546   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:52.780596   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:52.811678   75464 cri.go:89] found id: ""
	I1204 21:18:52.811704   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.811721   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:52.811749   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:52.811815   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:52.849178   75464 cri.go:89] found id: ""
	I1204 21:18:52.849205   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.849216   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:52.849223   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:52.849285   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:52.881715   75464 cri.go:89] found id: ""
	I1204 21:18:52.881740   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.881748   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:52.881753   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:52.881801   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:52.912463   75464 cri.go:89] found id: ""
	I1204 21:18:52.912484   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.912493   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:52.912498   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:52.912541   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:52.941846   75464 cri.go:89] found id: ""
	I1204 21:18:52.941867   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.941874   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:52.941879   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:52.941933   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:52.972043   75464 cri.go:89] found id: ""
	I1204 21:18:52.972067   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.972075   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:52.972083   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:52.972092   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:53.022049   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:53.022078   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:53.034971   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:53.034998   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:53.105058   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:53.105080   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:53.105092   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:53.185050   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:53.185086   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:55.724189   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:55.737378   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:55.737439   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:55.772286   75464 cri.go:89] found id: ""
	I1204 21:18:55.772311   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.772319   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:55.772324   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:55.772375   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:55.805040   75464 cri.go:89] found id: ""
	I1204 21:18:55.805061   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.805070   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:55.805075   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:55.805124   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:55.836500   75464 cri.go:89] found id: ""
	I1204 21:18:55.836528   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.836539   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:55.836553   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:55.836624   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:55.869715   75464 cri.go:89] found id: ""
	I1204 21:18:55.869740   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.869749   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:55.869754   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:55.869810   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:55.901596   75464 cri.go:89] found id: ""
	I1204 21:18:55.901623   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.901634   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:55.901641   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:55.901705   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:55.931865   75464 cri.go:89] found id: ""
	I1204 21:18:55.931890   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.931900   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:55.931907   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:55.931971   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:55.962990   75464 cri.go:89] found id: ""
	I1204 21:18:55.963016   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.963025   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:55.963030   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:55.963081   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:55.992110   75464 cri.go:89] found id: ""
	I1204 21:18:55.992132   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.992141   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:55.992149   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:55.992159   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:56.027234   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:56.027271   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:56.080250   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:56.080300   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:56.095943   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:56.095972   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:56.166704   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:56.166732   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:56.166744   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:53.565002   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:55.565734   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:52.264986   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:54.764517   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:54.624734   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:57.123337   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:58.745119   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:58.758304   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:58.758365   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:58.797221   75464 cri.go:89] found id: ""
	I1204 21:18:58.797245   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.797256   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:58.797264   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:58.797325   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:58.833333   75464 cri.go:89] found id: ""
	I1204 21:18:58.833358   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.833368   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:58.833374   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:58.833431   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:58.867765   75464 cri.go:89] found id: ""
	I1204 21:18:58.867790   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.867802   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:58.867810   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:58.867874   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:58.900290   75464 cri.go:89] found id: ""
	I1204 21:18:58.900326   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.900335   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:58.900386   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:58.900441   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:58.934627   75464 cri.go:89] found id: ""
	I1204 21:18:58.934660   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.934672   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:58.934679   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:58.934743   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:58.967410   75464 cri.go:89] found id: ""
	I1204 21:18:58.967442   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.967455   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:58.967463   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:58.967534   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:58.997635   75464 cri.go:89] found id: ""
	I1204 21:18:58.997665   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.997678   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:58.997685   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:58.997742   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:59.032135   75464 cri.go:89] found id: ""
	I1204 21:18:59.032162   75464 logs.go:282] 0 containers: []
	W1204 21:18:59.032181   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:59.032190   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:59.032214   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:59.101453   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:59.101477   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:59.101490   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:59.182218   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:59.182266   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:59.218062   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:59.218088   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:59.269536   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:59.269567   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:58.063715   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:00.565067   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:57.264306   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:59.266030   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:01.765163   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:59.124120   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:01.623069   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:01.784237   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:01.797810   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:01.797888   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:01.833235   75464 cri.go:89] found id: ""
	I1204 21:19:01.833267   75464 logs.go:282] 0 containers: []
	W1204 21:19:01.833279   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:01.833287   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:01.833345   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:01.866869   75464 cri.go:89] found id: ""
	I1204 21:19:01.866898   75464 logs.go:282] 0 containers: []
	W1204 21:19:01.866906   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:01.866912   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:01.866962   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:01.905512   75464 cri.go:89] found id: ""
	I1204 21:19:01.905539   75464 logs.go:282] 0 containers: []
	W1204 21:19:01.905547   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:01.905552   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:01.905608   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:01.940519   75464 cri.go:89] found id: ""
	I1204 21:19:01.940540   75464 logs.go:282] 0 containers: []
	W1204 21:19:01.940548   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:01.940554   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:01.940599   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:01.968900   75464 cri.go:89] found id: ""
	I1204 21:19:01.968922   75464 logs.go:282] 0 containers: []
	W1204 21:19:01.968931   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:01.968938   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:01.968986   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:02.011007   75464 cri.go:89] found id: ""
	I1204 21:19:02.011032   75464 logs.go:282] 0 containers: []
	W1204 21:19:02.011039   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:02.011045   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:02.011097   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:02.069395   75464 cri.go:89] found id: ""
	I1204 21:19:02.069422   75464 logs.go:282] 0 containers: []
	W1204 21:19:02.069432   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:02.069438   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:02.069483   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:02.116103   75464 cri.go:89] found id: ""
	I1204 21:19:02.116129   75464 logs.go:282] 0 containers: []
	W1204 21:19:02.116141   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:02.116151   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:02.116162   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:02.152582   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:02.152617   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:02.207765   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:02.207796   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:02.221923   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:02.221946   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:02.286568   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:02.286593   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:02.286608   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:04.861905   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:04.875045   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:04.875106   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:04.907565   75464 cri.go:89] found id: ""
	I1204 21:19:04.907591   75464 logs.go:282] 0 containers: []
	W1204 21:19:04.907601   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:04.907609   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:04.907667   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:04.937783   75464 cri.go:89] found id: ""
	I1204 21:19:04.937801   75464 logs.go:282] 0 containers: []
	W1204 21:19:04.937808   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:04.937813   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:04.937855   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:04.974668   75464 cri.go:89] found id: ""
	I1204 21:19:04.974695   75464 logs.go:282] 0 containers: []
	W1204 21:19:04.974703   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:04.974708   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:04.974764   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:05.008970   75464 cri.go:89] found id: ""
	I1204 21:19:05.008996   75464 logs.go:282] 0 containers: []
	W1204 21:19:05.009008   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:05.009016   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:05.009078   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:05.044719   75464 cri.go:89] found id: ""
	I1204 21:19:05.044748   75464 logs.go:282] 0 containers: []
	W1204 21:19:05.044757   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:05.044765   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:05.044834   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:05.082492   75464 cri.go:89] found id: ""
	I1204 21:19:05.082518   75464 logs.go:282] 0 containers: []
	W1204 21:19:05.082527   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:05.082533   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:05.082594   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:05.115540   75464 cri.go:89] found id: ""
	I1204 21:19:05.115569   75464 logs.go:282] 0 containers: []
	W1204 21:19:05.115578   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:05.115584   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:05.115643   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:05.150064   75464 cri.go:89] found id: ""
	I1204 21:19:05.150088   75464 logs.go:282] 0 containers: []
	W1204 21:19:05.150096   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:05.150104   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:05.150116   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:05.220591   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:05.220619   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:05.220635   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:05.298237   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:05.298269   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:05.337286   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:05.337312   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:05.394282   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:05.394313   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:03.064580   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:05.065897   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:04.263946   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:06.264605   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:03.624413   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:06.124113   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:07.907153   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:07.923906   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:07.923967   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:07.969672   75464 cri.go:89] found id: ""
	I1204 21:19:07.969698   75464 logs.go:282] 0 containers: []
	W1204 21:19:07.969706   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:07.969712   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:07.969761   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:08.019452   75464 cri.go:89] found id: ""
	I1204 21:19:08.019488   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.019496   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:08.019502   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:08.019551   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:08.064730   75464 cri.go:89] found id: ""
	I1204 21:19:08.064757   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.064766   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:08.064771   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:08.064822   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:08.097390   75464 cri.go:89] found id: ""
	I1204 21:19:08.097415   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.097424   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:08.097430   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:08.097481   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:08.134612   75464 cri.go:89] found id: ""
	I1204 21:19:08.134640   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.134649   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:08.134655   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:08.134706   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:08.167328   75464 cri.go:89] found id: ""
	I1204 21:19:08.167355   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.167363   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:08.167380   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:08.167447   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:08.196379   75464 cri.go:89] found id: ""
	I1204 21:19:08.196401   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.196411   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:08.196419   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:08.196475   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:08.227953   75464 cri.go:89] found id: ""
	I1204 21:19:08.227983   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.227994   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:08.228007   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:08.228021   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:08.304644   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:08.304672   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:08.340803   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:08.340835   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:08.392000   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:08.392034   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:08.405498   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:08.405533   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:08.472505   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:10.972755   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:10.986250   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:10.986316   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:11.020562   75464 cri.go:89] found id: ""
	I1204 21:19:11.020590   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.020601   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:11.020609   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:11.020671   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:11.052966   75464 cri.go:89] found id: ""
	I1204 21:19:11.052989   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.052999   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:11.053006   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:11.053062   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:11.085999   75464 cri.go:89] found id: ""
	I1204 21:19:11.086025   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.086032   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:11.086038   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:11.086085   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:11.125104   75464 cri.go:89] found id: ""
	I1204 21:19:11.125134   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.125145   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:11.125152   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:11.125207   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:11.161373   75464 cri.go:89] found id: ""
	I1204 21:19:11.161406   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.161418   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:11.161426   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:11.161487   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:11.192514   75464 cri.go:89] found id: ""
	I1204 21:19:11.192541   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.192552   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:11.192559   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:11.192617   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:11.225497   75464 cri.go:89] found id: ""
	I1204 21:19:11.225514   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.225522   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:11.225528   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:11.225573   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:11.258695   75464 cri.go:89] found id: ""
	I1204 21:19:11.258718   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.258730   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:11.258740   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:11.258753   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:11.292427   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:11.292456   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:07.565769   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:10.064738   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:08.264914   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:10.765337   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:08.125281   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:10.623449   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:11.346115   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:11.346143   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:11.360086   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:11.360110   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:11.430194   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:11.430216   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:11.430228   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:14.011320   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:14.024214   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:14.024281   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:14.060155   75464 cri.go:89] found id: ""
	I1204 21:19:14.060184   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.060196   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:14.060204   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:14.060269   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:14.095483   75464 cri.go:89] found id: ""
	I1204 21:19:14.095524   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.095536   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:14.095544   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:14.095621   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:14.130533   75464 cri.go:89] found id: ""
	I1204 21:19:14.130565   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.130573   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:14.130579   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:14.130650   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:14.167349   75464 cri.go:89] found id: ""
	I1204 21:19:14.167386   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.167397   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:14.167405   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:14.167477   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:14.200197   75464 cri.go:89] found id: ""
	I1204 21:19:14.200229   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.200240   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:14.200247   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:14.200315   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:14.233664   75464 cri.go:89] found id: ""
	I1204 21:19:14.233696   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.233707   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:14.233715   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:14.233779   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:14.268193   75464 cri.go:89] found id: ""
	I1204 21:19:14.268232   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.268243   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:14.268250   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:14.268311   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:14.305771   75464 cri.go:89] found id: ""
	I1204 21:19:14.305804   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.305813   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:14.305822   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:14.305834   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:14.361227   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:14.361274   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:14.375013   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:14.375046   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:14.444904   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:14.444945   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:14.444958   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:14.523934   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:14.523969   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:12.565614   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:14.565696   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:13.265412   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:15.763989   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:13.122823   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:15.124232   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:17.622977   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:17.063306   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:17.076624   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:17.076675   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:17.110681   75464 cri.go:89] found id: ""
	I1204 21:19:17.110721   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.110744   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:17.110756   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:17.110816   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:17.150695   75464 cri.go:89] found id: ""
	I1204 21:19:17.150716   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.150724   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:17.150730   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:17.150777   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:17.187712   75464 cri.go:89] found id: ""
	I1204 21:19:17.187745   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.187757   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:17.187765   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:17.187826   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:17.220349   75464 cri.go:89] found id: ""
	I1204 21:19:17.220377   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.220388   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:17.220396   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:17.220463   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:17.254691   75464 cri.go:89] found id: ""
	I1204 21:19:17.254724   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.254736   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:17.254746   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:17.254869   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:17.287163   75464 cri.go:89] found id: ""
	I1204 21:19:17.287191   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.287200   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:17.287206   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:17.287264   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:17.318924   75464 cri.go:89] found id: ""
	I1204 21:19:17.318949   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.318957   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:17.318963   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:17.319011   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:17.351074   75464 cri.go:89] found id: ""
	I1204 21:19:17.351106   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.351119   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:17.351128   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:17.351143   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:17.404999   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:17.405037   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:17.419781   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:17.419814   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:17.485638   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:17.485659   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:17.485670   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:17.568851   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:17.568885   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:20.107005   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:20.120184   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:20.120257   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:20.153375   75464 cri.go:89] found id: ""
	I1204 21:19:20.153404   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.153413   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:20.153419   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:20.153475   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:20.192102   75464 cri.go:89] found id: ""
	I1204 21:19:20.192129   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.192141   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:20.192148   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:20.192213   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:20.235702   75464 cri.go:89] found id: ""
	I1204 21:19:20.235730   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.235740   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:20.235747   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:20.235823   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:20.272357   75464 cri.go:89] found id: ""
	I1204 21:19:20.272385   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.272397   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:20.272406   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:20.272477   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:20.307784   75464 cri.go:89] found id: ""
	I1204 21:19:20.307809   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.307820   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:20.307827   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:20.307889   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:20.339469   75464 cri.go:89] found id: ""
	I1204 21:19:20.339504   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.339514   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:20.339522   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:20.339586   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:20.369973   75464 cri.go:89] found id: ""
	I1204 21:19:20.369996   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.370003   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:20.370010   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:20.370081   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:20.400569   75464 cri.go:89] found id: ""
	I1204 21:19:20.400589   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.400596   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:20.400604   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:20.400618   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:20.449274   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:20.449316   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:20.463556   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:20.463589   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:20.534760   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:20.534779   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:20.534791   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:20.613205   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:20.613234   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:17.064355   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:19.566643   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:17.764939   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:20.265576   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:19.624775   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:22.124297   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:23.149411   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:23.163040   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:23.163104   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:23.198689   75464 cri.go:89] found id: ""
	I1204 21:19:23.198721   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.198730   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:23.198736   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:23.198789   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:23.229754   75464 cri.go:89] found id: ""
	I1204 21:19:23.229783   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.229792   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:23.229797   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:23.229867   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:23.263366   75464 cri.go:89] found id: ""
	I1204 21:19:23.263406   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.263418   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:23.263425   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:23.263523   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:23.308773   75464 cri.go:89] found id: ""
	I1204 21:19:23.308797   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.308805   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:23.308811   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:23.308858   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:23.344573   75464 cri.go:89] found id: ""
	I1204 21:19:23.344600   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.344613   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:23.344620   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:23.344689   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:23.375218   75464 cri.go:89] found id: ""
	I1204 21:19:23.375244   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.375253   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:23.375259   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:23.375321   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:23.405878   75464 cri.go:89] found id: ""
	I1204 21:19:23.405913   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.405923   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:23.405929   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:23.405979   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:23.442547   75464 cri.go:89] found id: ""
	I1204 21:19:23.442572   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.442580   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:23.442588   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:23.442599   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:23.457476   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:23.457503   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:23.526060   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:23.526088   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:23.526153   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:23.606683   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:23.606729   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:23.648224   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:23.648266   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:26.203216   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:26.215838   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:26.215886   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:26.248425   75464 cri.go:89] found id: ""
	I1204 21:19:26.248461   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.248474   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:26.248490   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:26.248558   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:26.282982   75464 cri.go:89] found id: ""
	I1204 21:19:26.283011   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.283022   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:26.283030   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:26.283094   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:22.064831   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:24.565123   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:22.763526   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:24.764364   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:26.764973   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:24.624174   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:26.624220   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:26.316656   75464 cri.go:89] found id: ""
	I1204 21:19:26.316690   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.316702   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:26.316710   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:26.316778   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:26.352730   75464 cri.go:89] found id: ""
	I1204 21:19:26.352758   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.352766   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:26.352772   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:26.352819   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:26.385955   75464 cri.go:89] found id: ""
	I1204 21:19:26.385981   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.385991   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:26.386000   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:26.386065   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:26.418814   75464 cri.go:89] found id: ""
	I1204 21:19:26.418838   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.418846   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:26.418852   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:26.418900   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:26.455442   75464 cri.go:89] found id: ""
	I1204 21:19:26.455471   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.455483   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:26.455491   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:26.455561   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:26.498287   75464 cri.go:89] found id: ""
	I1204 21:19:26.498314   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.498322   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:26.498331   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:26.498345   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:26.512282   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:26.512312   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:26.576340   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:26.576366   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:26.576383   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:26.656234   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:26.656272   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:26.692676   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:26.692705   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:29.246548   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:29.261241   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:29.261310   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:29.297940   75464 cri.go:89] found id: ""
	I1204 21:19:29.297975   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.297987   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:29.297995   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:29.298060   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:29.330887   75464 cri.go:89] found id: ""
	I1204 21:19:29.330918   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.330930   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:29.330937   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:29.331001   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:29.364114   75464 cri.go:89] found id: ""
	I1204 21:19:29.364145   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.364152   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:29.364158   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:29.364214   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:29.397320   75464 cri.go:89] found id: ""
	I1204 21:19:29.397349   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.397357   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:29.397363   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:29.397410   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:29.430850   75464 cri.go:89] found id: ""
	I1204 21:19:29.430880   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.430892   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:29.430900   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:29.430965   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:29.464447   75464 cri.go:89] found id: ""
	I1204 21:19:29.464475   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.464484   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:29.464498   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:29.464564   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:29.497112   75464 cri.go:89] found id: ""
	I1204 21:19:29.497146   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.497158   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:29.497166   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:29.497229   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:29.533048   75464 cri.go:89] found id: ""
	I1204 21:19:29.533071   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.533080   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:29.533088   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:29.533099   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:29.584390   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:29.584424   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:29.598341   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:29.598369   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:29.663240   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:29.663264   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:29.663278   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:29.744146   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:29.744184   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:27.064827   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:29.065174   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:31.565105   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:28.765480   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:31.265234   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:29.123831   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:31.623570   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:32.282931   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:32.296622   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:32.296683   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:32.330253   75464 cri.go:89] found id: ""
	I1204 21:19:32.330285   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.330297   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:32.330305   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:32.330370   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:32.363547   75464 cri.go:89] found id: ""
	I1204 21:19:32.363575   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.363588   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:32.363596   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:32.363661   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:32.396745   75464 cri.go:89] found id: ""
	I1204 21:19:32.396770   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.396781   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:32.396790   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:32.396851   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:32.432533   75464 cri.go:89] found id: ""
	I1204 21:19:32.432559   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.432569   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:32.432577   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:32.432640   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:32.470292   75464 cri.go:89] found id: ""
	I1204 21:19:32.470317   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.470327   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:32.470335   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:32.470401   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:32.502791   75464 cri.go:89] found id: ""
	I1204 21:19:32.502817   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.502824   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:32.502835   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:32.502900   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:32.536220   75464 cri.go:89] found id: ""
	I1204 21:19:32.536246   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.536254   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:32.536286   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:32.536344   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:32.570072   75464 cri.go:89] found id: ""
	I1204 21:19:32.570094   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.570102   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:32.570110   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:32.570127   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:32.624916   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:32.624964   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:32.638299   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:32.638328   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:32.704827   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:32.704855   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:32.704873   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:32.782324   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:32.782356   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:35.324136   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:35.337071   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:35.337132   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:35.368651   75464 cri.go:89] found id: ""
	I1204 21:19:35.368672   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.368679   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:35.368685   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:35.368731   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:35.402069   75464 cri.go:89] found id: ""
	I1204 21:19:35.402088   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.402099   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:35.402105   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:35.402156   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:35.432328   75464 cri.go:89] found id: ""
	I1204 21:19:35.432356   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.432367   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:35.432380   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:35.432440   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:35.465334   75464 cri.go:89] found id: ""
	I1204 21:19:35.465356   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.465363   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:35.465369   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:35.465440   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:35.497416   75464 cri.go:89] found id: ""
	I1204 21:19:35.497449   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.497462   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:35.497474   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:35.497535   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:35.533106   75464 cri.go:89] found id: ""
	I1204 21:19:35.533134   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.533145   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:35.533154   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:35.533216   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:35.570519   75464 cri.go:89] found id: ""
	I1204 21:19:35.570546   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.570555   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:35.570562   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:35.570628   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:35.601380   75464 cri.go:89] found id: ""
	I1204 21:19:35.601413   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.601424   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:35.601434   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:35.601455   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:35.656383   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:35.656420   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:35.671667   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:35.671696   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:35.737690   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:35.737716   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:35.737733   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:35.818129   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:35.818165   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:34.063889   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:36.064864   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:33.765136   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:35.765598   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:33.624840   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:35.624972   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:38.356596   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:38.369177   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:38.369235   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:38.401263   75464 cri.go:89] found id: ""
	I1204 21:19:38.401289   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.401301   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:38.401308   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:38.401379   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:38.432751   75464 cri.go:89] found id: ""
	I1204 21:19:38.432777   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.432786   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:38.432792   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:38.432853   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:38.465866   75464 cri.go:89] found id: ""
	I1204 21:19:38.465889   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.465898   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:38.465904   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:38.465954   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:38.508720   75464 cri.go:89] found id: ""
	I1204 21:19:38.508752   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.508763   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:38.508771   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:38.508827   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:38.543609   75464 cri.go:89] found id: ""
	I1204 21:19:38.543640   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.543649   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:38.543654   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:38.543728   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:38.579205   75464 cri.go:89] found id: ""
	I1204 21:19:38.579225   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.579233   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:38.579239   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:38.579286   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:38.616446   75464 cri.go:89] found id: ""
	I1204 21:19:38.616480   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.616492   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:38.616500   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:38.616563   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:38.651847   75464 cri.go:89] found id: ""
	I1204 21:19:38.651879   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.651893   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:38.651905   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:38.651920   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:38.730904   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:38.730940   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:38.768958   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:38.768987   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:38.818879   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:38.818917   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:38.832139   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:38.832168   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:38.904761   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:38.065085   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:40.066022   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:38.264497   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:40.264905   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:38.123324   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:40.123499   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:42.623457   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:41.405046   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:41.417497   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:41.417578   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:41.450609   75464 cri.go:89] found id: ""
	I1204 21:19:41.450638   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.450649   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:41.450657   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:41.450725   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:41.486098   75464 cri.go:89] found id: ""
	I1204 21:19:41.486127   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.486135   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:41.486146   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:41.486218   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:41.520182   75464 cri.go:89] found id: ""
	I1204 21:19:41.520212   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.520225   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:41.520233   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:41.520305   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:41.551840   75464 cri.go:89] found id: ""
	I1204 21:19:41.551862   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.551870   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:41.551876   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:41.551928   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:41.584411   75464 cri.go:89] found id: ""
	I1204 21:19:41.584441   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.584448   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:41.584453   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:41.584500   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:41.614161   75464 cri.go:89] found id: ""
	I1204 21:19:41.614184   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.614199   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:41.614208   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:41.614263   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:41.645608   75464 cri.go:89] found id: ""
	I1204 21:19:41.645630   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.645637   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:41.645642   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:41.645688   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:41.676521   75464 cri.go:89] found id: ""
	I1204 21:19:41.676544   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.676552   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:41.676559   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:41.676570   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:41.726608   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:41.726633   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:41.739110   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:41.739134   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:41.810706   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:41.810727   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:41.810742   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:41.895725   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:41.895757   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:44.435032   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:44.449155   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:44.449223   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:44.479366   75464 cri.go:89] found id: ""
	I1204 21:19:44.479415   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.479424   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:44.479430   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:44.479480   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:44.520338   75464 cri.go:89] found id: ""
	I1204 21:19:44.520365   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.520374   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:44.520379   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:44.520443   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:44.554736   75464 cri.go:89] found id: ""
	I1204 21:19:44.554765   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.554773   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:44.554779   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:44.554829   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:44.592957   75464 cri.go:89] found id: ""
	I1204 21:19:44.592980   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.592987   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:44.592993   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:44.593041   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:44.626514   75464 cri.go:89] found id: ""
	I1204 21:19:44.626542   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.626551   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:44.626558   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:44.626624   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:44.667868   75464 cri.go:89] found id: ""
	I1204 21:19:44.667901   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.667913   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:44.667919   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:44.667968   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:44.703653   75464 cri.go:89] found id: ""
	I1204 21:19:44.703688   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.703699   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:44.703706   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:44.703766   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:44.737474   75464 cri.go:89] found id: ""
	I1204 21:19:44.737511   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.737523   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:44.737534   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:44.737549   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:44.787115   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:44.787146   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:44.799735   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:44.799765   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:44.861160   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:44.861179   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:44.861200   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:44.937758   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:44.937792   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:42.564575   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:44.565307   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:42.269222   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:44.764730   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:44.624230   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:47.124252   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:47.474604   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:47.486621   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:47.486680   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:47.522827   75464 cri.go:89] found id: ""
	I1204 21:19:47.522856   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.522870   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:47.522877   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:47.522938   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:47.553741   75464 cri.go:89] found id: ""
	I1204 21:19:47.553763   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.553771   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:47.553777   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:47.553837   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:47.610696   75464 cri.go:89] found id: ""
	I1204 21:19:47.610719   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.610730   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:47.610737   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:47.610803   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:47.645330   75464 cri.go:89] found id: ""
	I1204 21:19:47.645357   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.645367   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:47.645374   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:47.645431   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:47.680410   75464 cri.go:89] found id: ""
	I1204 21:19:47.680436   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.680444   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:47.680450   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:47.680499   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:47.712333   75464 cri.go:89] found id: ""
	I1204 21:19:47.712365   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.712376   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:47.712384   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:47.712442   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:47.749995   75464 cri.go:89] found id: ""
	I1204 21:19:47.750027   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.750039   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:47.750047   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:47.750110   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:47.786953   75464 cri.go:89] found id: ""
	I1204 21:19:47.786978   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.786988   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:47.786996   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:47.787008   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:47.853534   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:47.853561   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:47.853576   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:47.934237   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:47.934273   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:47.976010   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:47.976046   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:48.027502   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:48.027537   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:50.541987   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:50.555163   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:50.555246   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:50.588513   75464 cri.go:89] found id: ""
	I1204 21:19:50.588545   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.588555   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:50.588563   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:50.588618   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:50.623124   75464 cri.go:89] found id: ""
	I1204 21:19:50.623155   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.623165   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:50.623175   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:50.623240   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:50.656302   75464 cri.go:89] found id: ""
	I1204 21:19:50.656334   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.656347   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:50.656353   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:50.656421   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:50.688580   75464 cri.go:89] found id: ""
	I1204 21:19:50.688609   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.688621   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:50.688629   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:50.688700   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:50.721955   75464 cri.go:89] found id: ""
	I1204 21:19:50.721979   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.721987   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:50.721993   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:50.722047   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:50.755531   75464 cri.go:89] found id: ""
	I1204 21:19:50.755560   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.755571   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:50.755579   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:50.755637   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:50.789773   75464 cri.go:89] found id: ""
	I1204 21:19:50.789805   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.789816   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:50.789823   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:50.789890   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:50.821168   75464 cri.go:89] found id: ""
	I1204 21:19:50.821196   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.821207   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:50.821216   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:50.821230   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:50.871378   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:50.871406   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:50.883349   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:50.883387   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:50.953103   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:50.953129   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:50.953143   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:51.032209   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:51.032240   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:47.065199   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:49.065498   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:51.565332   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:47.264727   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:49.765618   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:51.765674   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:49.623785   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:52.124390   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:53.569126   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:53.582100   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:53.582167   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:53.613919   75464 cri.go:89] found id: ""
	I1204 21:19:53.613947   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.613958   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:53.613965   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:53.614031   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:53.649057   75464 cri.go:89] found id: ""
	I1204 21:19:53.649083   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.649090   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:53.649096   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:53.649153   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:53.685867   75464 cri.go:89] found id: ""
	I1204 21:19:53.685903   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.685915   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:53.685924   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:53.685983   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:53.723661   75464 cri.go:89] found id: ""
	I1204 21:19:53.723690   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.723702   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:53.723710   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:53.723774   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:53.768252   75464 cri.go:89] found id: ""
	I1204 21:19:53.768274   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.768281   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:53.768286   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:53.768334   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:53.806460   75464 cri.go:89] found id: ""
	I1204 21:19:53.806503   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.806512   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:53.806522   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:53.806577   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:53.839334   75464 cri.go:89] found id: ""
	I1204 21:19:53.839362   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.839382   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:53.839391   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:53.839452   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:53.873985   75464 cri.go:89] found id: ""
	I1204 21:19:53.874013   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.874021   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:53.874029   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:53.874046   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:53.929061   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:53.929101   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:53.943156   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:53.943183   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:54.023885   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:54.023914   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:54.023927   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:54.126662   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:54.126691   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:53.566343   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:56.064417   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:54.263908   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:56.265412   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:54.623051   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:56.623438   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:56.664579   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:56.676785   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:56.676835   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:56.715929   75464 cri.go:89] found id: ""
	I1204 21:19:56.715953   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.715964   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:56.715971   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:56.716026   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:56.747118   75464 cri.go:89] found id: ""
	I1204 21:19:56.747139   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.747146   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:56.747175   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:56.747225   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:56.777600   75464 cri.go:89] found id: ""
	I1204 21:19:56.777622   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.777628   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:56.777634   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:56.777684   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:56.808759   75464 cri.go:89] found id: ""
	I1204 21:19:56.808780   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.808787   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:56.808792   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:56.808849   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:56.838236   75464 cri.go:89] found id: ""
	I1204 21:19:56.838263   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.838274   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:56.838280   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:56.838336   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:56.866838   75464 cri.go:89] found id: ""
	I1204 21:19:56.866865   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.866875   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:56.866883   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:56.866938   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:56.897474   75464 cri.go:89] found id: ""
	I1204 21:19:56.897496   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.897504   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:56.897509   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:56.897566   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:56.929263   75464 cri.go:89] found id: ""
	I1204 21:19:56.929286   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.929294   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:56.929302   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:56.929311   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:56.980231   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:56.980256   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:56.991901   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:56.991928   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:57.068154   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:57.068172   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:57.068183   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:57.147865   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:57.147903   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:59.686011   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:59.699101   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:59.699156   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:59.742522   75464 cri.go:89] found id: ""
	I1204 21:19:59.742554   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.742565   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:59.742573   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:59.742637   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:59.785313   75464 cri.go:89] found id: ""
	I1204 21:19:59.785345   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.785357   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:59.785364   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:59.785423   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:59.821473   75464 cri.go:89] found id: ""
	I1204 21:19:59.821508   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.821520   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:59.821527   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:59.821585   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:59.857990   75464 cri.go:89] found id: ""
	I1204 21:19:59.858012   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.858020   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:59.858025   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:59.858077   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:59.895434   75464 cri.go:89] found id: ""
	I1204 21:19:59.895465   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.895478   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:59.895486   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:59.895546   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:59.929076   75464 cri.go:89] found id: ""
	I1204 21:19:59.929099   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.929110   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:59.929118   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:59.929180   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:59.962121   75464 cri.go:89] found id: ""
	I1204 21:19:59.962161   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.962173   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:59.962181   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:59.962244   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:59.999074   75464 cri.go:89] found id: ""
	I1204 21:19:59.999103   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.999115   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:59.999126   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:59.999138   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:00.081841   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:00.081888   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:00.120537   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:00.120576   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:00.171472   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:00.171506   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:00.184739   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:00.184770   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:00.256589   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:58.563943   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:00.564520   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:58.764786   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:00.765286   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:59.122868   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:01.624133   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:02.757225   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:02.771088   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:02.771156   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:02.808742   75464 cri.go:89] found id: ""
	I1204 21:20:02.808770   75464 logs.go:282] 0 containers: []
	W1204 21:20:02.808781   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:02.808788   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:02.808851   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:02.846517   75464 cri.go:89] found id: ""
	I1204 21:20:02.846539   75464 logs.go:282] 0 containers: []
	W1204 21:20:02.846548   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:02.846553   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:02.846600   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:02.879903   75464 cri.go:89] found id: ""
	I1204 21:20:02.879934   75464 logs.go:282] 0 containers: []
	W1204 21:20:02.879943   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:02.879948   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:02.879995   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:02.910040   75464 cri.go:89] found id: ""
	I1204 21:20:02.910072   75464 logs.go:282] 0 containers: []
	W1204 21:20:02.910083   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:02.910091   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:02.910153   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:02.941525   75464 cri.go:89] found id: ""
	I1204 21:20:02.941552   75464 logs.go:282] 0 containers: []
	W1204 21:20:02.941562   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:02.941570   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:02.941637   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:02.977450   75464 cri.go:89] found id: ""
	I1204 21:20:02.977476   75464 logs.go:282] 0 containers: []
	W1204 21:20:02.977484   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:02.977490   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:02.977547   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:03.007386   75464 cri.go:89] found id: ""
	I1204 21:20:03.007422   75464 logs.go:282] 0 containers: []
	W1204 21:20:03.007433   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:03.007448   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:03.007508   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:03.040015   75464 cri.go:89] found id: ""
	I1204 21:20:03.040038   75464 logs.go:282] 0 containers: []
	W1204 21:20:03.040049   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:03.040058   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:03.040068   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:03.092371   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:03.092397   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:03.104747   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:03.104765   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:03.167760   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:03.167784   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:03.167799   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:03.242972   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:03.243010   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:05.783874   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:05.796340   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:05.796401   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:05.829068   75464 cri.go:89] found id: ""
	I1204 21:20:05.829094   75464 logs.go:282] 0 containers: []
	W1204 21:20:05.829105   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:05.829112   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:05.829169   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:05.863998   75464 cri.go:89] found id: ""
	I1204 21:20:05.864027   75464 logs.go:282] 0 containers: []
	W1204 21:20:05.864036   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:05.864042   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:05.864096   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:05.899645   75464 cri.go:89] found id: ""
	I1204 21:20:05.899669   75464 logs.go:282] 0 containers: []
	W1204 21:20:05.899677   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:05.899682   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:05.899727   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:05.935815   75464 cri.go:89] found id: ""
	I1204 21:20:05.935840   75464 logs.go:282] 0 containers: []
	W1204 21:20:05.935848   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:05.935854   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:05.935901   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:05.972284   75464 cri.go:89] found id: ""
	I1204 21:20:05.972308   75464 logs.go:282] 0 containers: []
	W1204 21:20:05.972321   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:05.972326   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:05.972372   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:06.007217   75464 cri.go:89] found id: ""
	I1204 21:20:06.007261   75464 logs.go:282] 0 containers: []
	W1204 21:20:06.007273   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:06.007280   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:06.007338   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:06.042158   75464 cri.go:89] found id: ""
	I1204 21:20:06.042190   75464 logs.go:282] 0 containers: []
	W1204 21:20:06.042201   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:06.042208   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:06.042280   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:06.075199   75464 cri.go:89] found id: ""
	I1204 21:20:06.075223   75464 logs.go:282] 0 containers: []
	W1204 21:20:06.075230   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:06.075237   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:06.075248   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:06.148255   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:06.148286   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:06.191454   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:06.191478   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:06.243952   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:06.243979   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:06.256355   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:06.256381   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 21:20:02.565050   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:05.064733   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:02.765643   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:05.263861   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:04.123109   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:06.123349   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	W1204 21:20:06.323958   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:08.824582   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:08.836724   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:08.836793   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:08.868526   75464 cri.go:89] found id: ""
	I1204 21:20:08.868596   75464 logs.go:282] 0 containers: []
	W1204 21:20:08.868611   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:08.868619   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:08.868679   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:08.899088   75464 cri.go:89] found id: ""
	I1204 21:20:08.899114   75464 logs.go:282] 0 containers: []
	W1204 21:20:08.899123   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:08.899128   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:08.899181   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:08.929116   75464 cri.go:89] found id: ""
	I1204 21:20:08.929145   75464 logs.go:282] 0 containers: []
	W1204 21:20:08.929156   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:08.929164   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:08.929229   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:08.970502   75464 cri.go:89] found id: ""
	I1204 21:20:08.970528   75464 logs.go:282] 0 containers: []
	W1204 21:20:08.970539   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:08.970547   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:08.970610   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:09.000619   75464 cri.go:89] found id: ""
	I1204 21:20:09.000644   75464 logs.go:282] 0 containers: []
	W1204 21:20:09.000652   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:09.000658   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:09.000715   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:09.031597   75464 cri.go:89] found id: ""
	I1204 21:20:09.031624   75464 logs.go:282] 0 containers: []
	W1204 21:20:09.031634   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:09.031641   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:09.031700   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:09.063615   75464 cri.go:89] found id: ""
	I1204 21:20:09.063639   75464 logs.go:282] 0 containers: []
	W1204 21:20:09.063646   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:09.063651   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:09.063708   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:09.096291   75464 cri.go:89] found id: ""
	I1204 21:20:09.096322   75464 logs.go:282] 0 containers: []
	W1204 21:20:09.096333   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:09.096343   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:09.096357   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:09.169976   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:09.170009   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:09.206514   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:09.206537   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:09.257587   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:09.257614   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:09.269939   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:09.269962   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:09.334350   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:07.563758   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:09.564014   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:11.564441   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:07.264169   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:09.265385   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:11.265607   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:08.622813   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:10.624747   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:11.835270   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:11.848192   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:11.848249   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:11.880377   75464 cri.go:89] found id: ""
	I1204 21:20:11.880409   75464 logs.go:282] 0 containers: []
	W1204 21:20:11.880422   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:11.880429   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:11.880495   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:11.914800   75464 cri.go:89] found id: ""
	I1204 21:20:11.914832   75464 logs.go:282] 0 containers: []
	W1204 21:20:11.914844   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:11.914852   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:11.914918   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:11.950520   75464 cri.go:89] found id: ""
	I1204 21:20:11.950545   75464 logs.go:282] 0 containers: []
	W1204 21:20:11.950553   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:11.950559   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:11.950611   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:11.983909   75464 cri.go:89] found id: ""
	I1204 21:20:11.983934   75464 logs.go:282] 0 containers: []
	W1204 21:20:11.983944   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:11.983953   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:11.984017   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:12.020457   75464 cri.go:89] found id: ""
	I1204 21:20:12.020488   75464 logs.go:282] 0 containers: []
	W1204 21:20:12.020505   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:12.020513   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:12.020581   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:12.054630   75464 cri.go:89] found id: ""
	I1204 21:20:12.054663   75464 logs.go:282] 0 containers: []
	W1204 21:20:12.054674   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:12.054682   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:12.054747   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:12.089172   75464 cri.go:89] found id: ""
	I1204 21:20:12.089195   75464 logs.go:282] 0 containers: []
	W1204 21:20:12.089202   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:12.089208   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:12.089267   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:12.123979   75464 cri.go:89] found id: ""
	I1204 21:20:12.124009   75464 logs.go:282] 0 containers: []
	W1204 21:20:12.124020   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:12.124039   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:12.124054   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:12.191368   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:12.191414   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:12.191432   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:12.272985   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:12.273029   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:12.310427   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:12.310459   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:12.363183   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:12.363225   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:14.876599   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:14.889708   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:14.889784   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:14.922789   75464 cri.go:89] found id: ""
	I1204 21:20:14.922819   75464 logs.go:282] 0 containers: []
	W1204 21:20:14.922829   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:14.922835   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:14.922882   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:14.953998   75464 cri.go:89] found id: ""
	I1204 21:20:14.954026   75464 logs.go:282] 0 containers: []
	W1204 21:20:14.954038   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:14.954044   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:14.954108   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:14.983608   75464 cri.go:89] found id: ""
	I1204 21:20:14.983635   75464 logs.go:282] 0 containers: []
	W1204 21:20:14.983646   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:14.983653   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:14.983707   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:15.016982   75464 cri.go:89] found id: ""
	I1204 21:20:15.017007   75464 logs.go:282] 0 containers: []
	W1204 21:20:15.017015   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:15.017020   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:15.017070   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:15.051642   75464 cri.go:89] found id: ""
	I1204 21:20:15.051672   75464 logs.go:282] 0 containers: []
	W1204 21:20:15.051683   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:15.051690   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:15.051792   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:15.084250   75464 cri.go:89] found id: ""
	I1204 21:20:15.084279   75464 logs.go:282] 0 containers: []
	W1204 21:20:15.084289   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:15.084297   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:15.084364   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:15.119910   75464 cri.go:89] found id: ""
	I1204 21:20:15.119943   75464 logs.go:282] 0 containers: []
	W1204 21:20:15.119953   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:15.119965   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:15.120025   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:15.154270   75464 cri.go:89] found id: ""
	I1204 21:20:15.154301   75464 logs.go:282] 0 containers: []
	W1204 21:20:15.154312   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:15.154322   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:15.154336   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:15.205075   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:15.205109   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:15.218104   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:15.218130   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:15.285162   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:15.285187   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:15.285209   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:15.367003   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:15.367040   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:13.566393   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:16.069318   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:13.266167   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:15.763670   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:13.122812   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:15.125830   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:17.623065   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:17.909835   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:17.921899   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:17.921954   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:17.954678   75464 cri.go:89] found id: ""
	I1204 21:20:17.954708   75464 logs.go:282] 0 containers: []
	W1204 21:20:17.954717   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:17.954723   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:17.954776   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:17.984522   75464 cri.go:89] found id: ""
	I1204 21:20:17.984545   75464 logs.go:282] 0 containers: []
	W1204 21:20:17.984555   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:17.984560   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:17.984607   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:18.016731   75464 cri.go:89] found id: ""
	I1204 21:20:18.016754   75464 logs.go:282] 0 containers: []
	W1204 21:20:18.016763   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:18.016768   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:18.016820   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:18.050104   75464 cri.go:89] found id: ""
	I1204 21:20:18.050136   75464 logs.go:282] 0 containers: []
	W1204 21:20:18.050147   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:18.050155   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:18.050221   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:18.083944   75464 cri.go:89] found id: ""
	I1204 21:20:18.083984   75464 logs.go:282] 0 containers: []
	W1204 21:20:18.084006   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:18.084015   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:18.084084   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:18.116170   75464 cri.go:89] found id: ""
	I1204 21:20:18.116203   75464 logs.go:282] 0 containers: []
	W1204 21:20:18.116215   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:18.116223   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:18.116292   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:18.147348   75464 cri.go:89] found id: ""
	I1204 21:20:18.147395   75464 logs.go:282] 0 containers: []
	W1204 21:20:18.147407   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:18.147415   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:18.147473   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:18.177782   75464 cri.go:89] found id: ""
	I1204 21:20:18.177805   75464 logs.go:282] 0 containers: []
	W1204 21:20:18.177816   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:18.177827   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:18.177840   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:18.227464   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:18.227494   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:18.239741   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:18.239772   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:18.310732   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:18.310752   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:18.310763   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:18.389626   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:18.389659   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:20.926749   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:20.939710   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:20.939797   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:20.972464   75464 cri.go:89] found id: ""
	I1204 21:20:20.972488   75464 logs.go:282] 0 containers: []
	W1204 21:20:20.972497   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:20.972506   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:20.972568   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:21.010568   75464 cri.go:89] found id: ""
	I1204 21:20:21.010597   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.010610   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:21.010618   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:21.010678   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:21.046145   75464 cri.go:89] found id: ""
	I1204 21:20:21.046172   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.046183   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:21.046191   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:21.046263   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:21.078460   75464 cri.go:89] found id: ""
	I1204 21:20:21.078488   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.078496   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:21.078502   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:21.078569   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:21.117274   75464 cri.go:89] found id: ""
	I1204 21:20:21.117303   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.117314   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:21.117320   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:21.117366   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:21.152375   75464 cri.go:89] found id: ""
	I1204 21:20:21.152408   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.152419   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:21.152427   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:21.152496   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:21.185933   75464 cri.go:89] found id: ""
	I1204 21:20:21.185966   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.185975   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:21.185981   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:21.186042   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:21.219289   75464 cri.go:89] found id: ""
	I1204 21:20:21.219325   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.219338   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:21.219350   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:21.219363   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:21.232385   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:21.232415   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:21.298766   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:21.298793   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:21.298808   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:18.565873   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:21.065819   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:17.763871   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:19.765846   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:19.623518   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:21.624117   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:21.376741   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:21.376777   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:21.414649   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:21.414682   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:23.963472   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:23.976644   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:23.976709   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:24.010598   75464 cri.go:89] found id: ""
	I1204 21:20:24.010626   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.010637   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:24.010645   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:24.010703   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:24.045479   75464 cri.go:89] found id: ""
	I1204 21:20:24.045509   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.045529   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:24.045537   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:24.045599   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:24.081181   75464 cri.go:89] found id: ""
	I1204 21:20:24.081215   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.081235   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:24.081243   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:24.081309   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:24.113823   75464 cri.go:89] found id: ""
	I1204 21:20:24.113847   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.113857   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:24.113864   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:24.113927   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:24.149178   75464 cri.go:89] found id: ""
	I1204 21:20:24.149205   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.149216   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:24.149224   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:24.149289   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:24.183304   75464 cri.go:89] found id: ""
	I1204 21:20:24.183339   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.183350   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:24.183359   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:24.183448   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:24.214999   75464 cri.go:89] found id: ""
	I1204 21:20:24.215023   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.215034   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:24.215042   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:24.215107   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:24.247278   75464 cri.go:89] found id: ""
	I1204 21:20:24.247312   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.247323   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:24.247354   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:24.247387   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:24.302879   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:24.302913   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:24.315674   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:24.315697   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:24.382394   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:24.382422   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:24.382436   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:24.462763   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:24.462796   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:23.564202   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:25.564917   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:22.265442   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:24.764901   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:24.124035   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:26.124661   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:27.002577   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:27.015256   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:27.015324   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:27.049626   75464 cri.go:89] found id: ""
	I1204 21:20:27.049657   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.049669   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:27.049677   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:27.049733   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:27.085312   75464 cri.go:89] found id: ""
	I1204 21:20:27.085341   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.085354   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:27.085362   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:27.085417   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:27.119898   75464 cri.go:89] found id: ""
	I1204 21:20:27.119928   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.119939   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:27.119947   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:27.120010   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:27.153605   75464 cri.go:89] found id: ""
	I1204 21:20:27.153642   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.153651   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:27.153657   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:27.153724   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:27.191002   75464 cri.go:89] found id: ""
	I1204 21:20:27.191027   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.191038   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:27.191045   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:27.191107   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:27.226469   75464 cri.go:89] found id: ""
	I1204 21:20:27.226495   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.226506   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:27.226515   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:27.226579   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:27.258586   75464 cri.go:89] found id: ""
	I1204 21:20:27.258613   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.258623   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:27.258630   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:27.258694   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:27.293119   75464 cri.go:89] found id: ""
	I1204 21:20:27.293156   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.293165   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:27.293174   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:27.293187   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:27.346870   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:27.346903   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:27.360448   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:27.360487   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:27.431571   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:27.431597   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:27.431613   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:27.509664   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:27.509698   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:30.049120   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:30.063294   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:30.063360   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:30.097334   75464 cri.go:89] found id: ""
	I1204 21:20:30.097364   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.097376   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:30.097383   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:30.097457   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:30.132734   75464 cri.go:89] found id: ""
	I1204 21:20:30.132757   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.132765   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:30.132771   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:30.132820   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:30.166539   75464 cri.go:89] found id: ""
	I1204 21:20:30.166565   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.166573   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:30.166579   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:30.166637   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:30.201953   75464 cri.go:89] found id: ""
	I1204 21:20:30.201993   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.202007   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:30.202016   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:30.202089   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:30.239062   75464 cri.go:89] found id: ""
	I1204 21:20:30.239102   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.239116   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:30.239132   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:30.239200   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:30.282344   75464 cri.go:89] found id: ""
	I1204 21:20:30.282374   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.282383   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:30.282389   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:30.282439   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:30.316615   75464 cri.go:89] found id: ""
	I1204 21:20:30.316642   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.316653   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:30.316661   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:30.316764   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:30.352333   75464 cri.go:89] found id: ""
	I1204 21:20:30.352358   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.352368   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:30.352380   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:30.352393   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:30.406022   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:30.406058   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:30.419790   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:30.419819   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:30.485693   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:30.485717   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:30.485738   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:30.569313   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:30.569357   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:27.565367   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:30.064552   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:27.266699   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:29.765109   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:28.623821   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:30.628815   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:33.107542   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:33.121934   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:33.122007   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:33.154672   75464 cri.go:89] found id: ""
	I1204 21:20:33.154698   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.154709   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:33.154717   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:33.154784   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:33.189186   75464 cri.go:89] found id: ""
	I1204 21:20:33.189218   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.189229   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:33.189236   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:33.189291   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:33.217618   75464 cri.go:89] found id: ""
	I1204 21:20:33.217637   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.217651   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:33.217657   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:33.217704   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:33.246895   75464 cri.go:89] found id: ""
	I1204 21:20:33.246916   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.246923   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:33.246928   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:33.246970   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:33.278698   75464 cri.go:89] found id: ""
	I1204 21:20:33.278718   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.278725   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:33.278731   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:33.278771   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:33.307671   75464 cri.go:89] found id: ""
	I1204 21:20:33.307703   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.307721   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:33.307729   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:33.307791   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:33.342929   75464 cri.go:89] found id: ""
	I1204 21:20:33.342950   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.342958   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:33.342963   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:33.343009   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:33.374686   75464 cri.go:89] found id: ""
	I1204 21:20:33.374718   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.374730   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:33.374741   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:33.374758   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:33.424117   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:33.424153   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:33.437691   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:33.437724   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:33.517172   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:33.517196   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:33.517209   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:33.597299   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:33.597341   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:36.137849   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:36.152485   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:36.152544   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:36.186867   75464 cri.go:89] found id: ""
	I1204 21:20:36.186895   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.186906   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:36.186920   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:36.186983   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:36.220628   75464 cri.go:89] found id: ""
	I1204 21:20:36.220658   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.220671   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:36.220679   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:36.220735   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:36.254264   75464 cri.go:89] found id: ""
	I1204 21:20:36.254298   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.254310   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:36.254318   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:36.254384   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:36.290929   75464 cri.go:89] found id: ""
	I1204 21:20:36.290956   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.290964   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:36.290970   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:36.291016   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:32.566714   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:35.064488   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:32.266257   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:34.764171   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:36.764331   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:33.123727   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:35.623512   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:37.623921   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:36.326967   75464 cri.go:89] found id: ""
	I1204 21:20:36.326991   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.326999   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:36.327004   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:36.327072   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:36.366892   75464 cri.go:89] found id: ""
	I1204 21:20:36.366916   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.366924   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:36.366930   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:36.366990   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:36.405671   75464 cri.go:89] found id: ""
	I1204 21:20:36.405696   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.405703   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:36.405709   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:36.405762   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:36.439591   75464 cri.go:89] found id: ""
	I1204 21:20:36.439621   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.439628   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:36.439637   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:36.439650   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:36.505710   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:36.505737   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:36.505751   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:36.586111   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:36.586155   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:36.628086   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:36.628121   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:36.680152   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:36.680183   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:39.194223   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:39.207153   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:39.207230   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:39.240867   75464 cri.go:89] found id: ""
	I1204 21:20:39.240895   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.240903   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:39.240908   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:39.240959   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:39.274704   75464 cri.go:89] found id: ""
	I1204 21:20:39.274735   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.274742   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:39.274748   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:39.274800   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:39.307559   75464 cri.go:89] found id: ""
	I1204 21:20:39.307591   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.307601   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:39.307609   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:39.307671   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:39.355489   75464 cri.go:89] found id: ""
	I1204 21:20:39.355524   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.355536   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:39.355543   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:39.355610   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:39.395885   75464 cri.go:89] found id: ""
	I1204 21:20:39.395909   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.395917   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:39.395923   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:39.395976   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:39.428817   75464 cri.go:89] found id: ""
	I1204 21:20:39.428848   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.428858   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:39.428864   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:39.428929   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:39.463827   75464 cri.go:89] found id: ""
	I1204 21:20:39.463857   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.463870   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:39.463877   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:39.463926   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:39.496677   75464 cri.go:89] found id: ""
	I1204 21:20:39.496710   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.496721   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:39.496732   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:39.496755   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:39.533759   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:39.533787   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:39.586373   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:39.586409   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:39.599533   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:39.599568   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:39.670139   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:39.670164   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:39.670176   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:37.065197   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:39.065863   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:41.566053   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:38.765226   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:40.765268   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:39.624452   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:42.123452   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:42.245896   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:42.260604   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:42.260676   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:42.294051   75464 cri.go:89] found id: ""
	I1204 21:20:42.294078   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.294085   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:42.294094   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:42.294160   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:42.327361   75464 cri.go:89] found id: ""
	I1204 21:20:42.327408   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.327421   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:42.327428   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:42.327482   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:42.358701   75464 cri.go:89] found id: ""
	I1204 21:20:42.358731   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.358740   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:42.358746   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:42.358795   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:42.389837   75464 cri.go:89] found id: ""
	I1204 21:20:42.389863   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.389871   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:42.389877   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:42.389926   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:42.430495   75464 cri.go:89] found id: ""
	I1204 21:20:42.430522   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.430534   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:42.430541   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:42.430590   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:42.462918   75464 cri.go:89] found id: ""
	I1204 21:20:42.462949   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.462958   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:42.462963   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:42.463031   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:42.500726   75464 cri.go:89] found id: ""
	I1204 21:20:42.500754   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.500769   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:42.500776   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:42.500842   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:42.538601   75464 cri.go:89] found id: ""
	I1204 21:20:42.538628   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.538635   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:42.538644   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:42.538655   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:42.591308   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:42.591344   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:42.604221   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:42.604244   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:42.679954   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:42.679982   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:42.679999   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:42.768383   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:42.768422   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:45.312054   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:45.325206   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:45.325304   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:45.358781   75464 cri.go:89] found id: ""
	I1204 21:20:45.358809   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.358817   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:45.358824   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:45.358874   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:45.391920   75464 cri.go:89] found id: ""
	I1204 21:20:45.391945   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.391957   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:45.391964   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:45.392030   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:45.426546   75464 cri.go:89] found id: ""
	I1204 21:20:45.426570   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.426578   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:45.426583   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:45.426633   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:45.459432   75464 cri.go:89] found id: ""
	I1204 21:20:45.459462   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.459472   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:45.459479   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:45.459547   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:45.494217   75464 cri.go:89] found id: ""
	I1204 21:20:45.494256   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.494268   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:45.494276   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:45.494352   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:45.531417   75464 cri.go:89] found id: ""
	I1204 21:20:45.531446   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.531458   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:45.531473   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:45.531547   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:45.564973   75464 cri.go:89] found id: ""
	I1204 21:20:45.565005   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.565016   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:45.565024   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:45.565088   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:45.601285   75464 cri.go:89] found id: ""
	I1204 21:20:45.601315   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.601324   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:45.601333   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:45.601344   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:45.656229   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:45.656267   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:45.669851   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:45.669876   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:45.740674   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:45.740704   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:45.740720   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:45.845612   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:45.845657   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:44.065401   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:46.565091   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:42.765303   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:44.765539   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:44.123533   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:46.123595   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:48.389508   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:48.401989   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:48.402052   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:48.438477   75464 cri.go:89] found id: ""
	I1204 21:20:48.438502   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.438514   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:48.438521   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:48.438579   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:48.476096   75464 cri.go:89] found id: ""
	I1204 21:20:48.476129   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.476142   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:48.476151   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:48.476219   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:48.514085   75464 cri.go:89] found id: ""
	I1204 21:20:48.514112   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.514124   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:48.514132   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:48.514208   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:48.551360   75464 cri.go:89] found id: ""
	I1204 21:20:48.551409   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.551420   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:48.551428   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:48.551500   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:48.588424   75464 cri.go:89] found id: ""
	I1204 21:20:48.588463   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.588475   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:48.588483   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:48.588552   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:48.622842   75464 cri.go:89] found id: ""
	I1204 21:20:48.622868   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.622876   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:48.622881   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:48.622942   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:48.665525   75464 cri.go:89] found id: ""
	I1204 21:20:48.665575   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.665585   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:48.665592   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:48.665659   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:48.706554   75464 cri.go:89] found id: ""
	I1204 21:20:48.706581   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.706591   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:48.706602   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:48.706617   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:48.757835   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:48.757870   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:48.771967   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:48.772003   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:48.843093   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:48.843123   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:48.843140   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:48.919637   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:48.919681   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:49.064435   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:51.565505   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:47.265612   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:49.764186   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:51.766867   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:48.637538   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:51.123581   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:51.457865   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:51.472751   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:51.472827   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:51.514777   75464 cri.go:89] found id: ""
	I1204 21:20:51.514814   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.514827   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:51.514835   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:51.514904   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:51.563932   75464 cri.go:89] found id: ""
	I1204 21:20:51.563957   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.563968   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:51.563976   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:51.564042   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:51.606714   75464 cri.go:89] found id: ""
	I1204 21:20:51.606752   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.606765   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:51.606773   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:51.606837   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:51.641391   75464 cri.go:89] found id: ""
	I1204 21:20:51.641427   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.641438   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:51.641446   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:51.641502   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:51.674971   75464 cri.go:89] found id: ""
	I1204 21:20:51.675000   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.675011   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:51.675019   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:51.675082   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:51.709211   75464 cri.go:89] found id: ""
	I1204 21:20:51.709242   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.709250   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:51.709257   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:51.709306   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:51.742425   75464 cri.go:89] found id: ""
	I1204 21:20:51.742460   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.742472   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:51.742480   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:51.742534   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:51.782292   75464 cri.go:89] found id: ""
	I1204 21:20:51.782339   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.782351   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:51.782361   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:51.782380   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:51.833009   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:51.833040   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:51.846862   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:51.846905   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:51.911100   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:51.911129   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:51.911147   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:51.987841   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:51.987879   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:54.527097   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:54.541248   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:54.541344   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:54.582747   75464 cri.go:89] found id: ""
	I1204 21:20:54.582772   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.582780   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:54.582785   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:54.582844   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:54.615891   75464 cri.go:89] found id: ""
	I1204 21:20:54.615914   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.615922   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:54.615927   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:54.615983   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:54.648994   75464 cri.go:89] found id: ""
	I1204 21:20:54.649021   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.649031   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:54.649037   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:54.649095   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:54.683000   75464 cri.go:89] found id: ""
	I1204 21:20:54.683026   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.683034   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:54.683040   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:54.683100   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:54.715182   75464 cri.go:89] found id: ""
	I1204 21:20:54.715211   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.715221   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:54.715228   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:54.715290   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:54.752620   75464 cri.go:89] found id: ""
	I1204 21:20:54.752655   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.752667   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:54.752674   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:54.752740   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:54.790879   75464 cri.go:89] found id: ""
	I1204 21:20:54.790907   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.790919   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:54.790926   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:54.790994   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:54.824340   75464 cri.go:89] found id: ""
	I1204 21:20:54.824380   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.824393   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:54.824405   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:54.824428   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:54.874330   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:54.874365   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:54.887537   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:54.887565   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:54.958675   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:54.958697   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:54.958709   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:55.036909   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:55.036946   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:54.064786   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:56.066189   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:54.264177   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:56.264283   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:53.622703   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:55.623495   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:57.625197   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:57.576603   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:57.590013   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:57.590080   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:57.624654   75464 cri.go:89] found id: ""
	I1204 21:20:57.624690   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.624701   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:57.624710   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:57.624774   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:57.660404   75464 cri.go:89] found id: ""
	I1204 21:20:57.660445   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.660457   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:57.660464   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:57.660528   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:57.693444   75464 cri.go:89] found id: ""
	I1204 21:20:57.693472   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.693483   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:57.693491   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:57.693558   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:57.729361   75464 cri.go:89] found id: ""
	I1204 21:20:57.729387   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.729397   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:57.729403   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:57.729454   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:57.760508   75464 cri.go:89] found id: ""
	I1204 21:20:57.760535   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.760546   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:57.760554   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:57.760608   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:57.794110   75464 cri.go:89] found id: ""
	I1204 21:20:57.794133   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.794142   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:57.794151   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:57.794214   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:57.827907   75464 cri.go:89] found id: ""
	I1204 21:20:57.827936   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.827947   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:57.827954   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:57.828014   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:57.860714   75464 cri.go:89] found id: ""
	I1204 21:20:57.860742   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.860753   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:57.860763   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:57.860778   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:57.926898   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:57.926926   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:57.926943   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:58.000298   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:58.000328   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:58.035675   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:58.035708   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:58.086663   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:58.086698   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:21:00.600646   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:21:00.613485   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:21:00.613550   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:21:00.646324   75464 cri.go:89] found id: ""
	I1204 21:21:00.646349   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.646357   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:21:00.646362   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:21:00.646417   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:21:00.675779   75464 cri.go:89] found id: ""
	I1204 21:21:00.675802   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.675814   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:21:00.675821   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:21:00.675874   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:21:00.706244   75464 cri.go:89] found id: ""
	I1204 21:21:00.706264   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.706272   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:21:00.706278   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:21:00.706334   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:21:00.738086   75464 cri.go:89] found id: ""
	I1204 21:21:00.738114   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.738126   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:21:00.738134   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:21:00.738195   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:21:00.768646   75464 cri.go:89] found id: ""
	I1204 21:21:00.768671   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.768682   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:21:00.768690   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:21:00.768750   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:21:00.797939   75464 cri.go:89] found id: ""
	I1204 21:21:00.797960   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.797968   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:21:00.797973   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:21:00.798016   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:21:00.831928   75464 cri.go:89] found id: ""
	I1204 21:21:00.831959   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.831969   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:21:00.831977   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:21:00.832042   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:21:00.868462   75464 cri.go:89] found id: ""
	I1204 21:21:00.868489   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.868498   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:21:00.868506   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:21:00.868518   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:21:00.881721   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:21:00.881745   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:21:00.949263   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:21:00.949290   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:21:00.949307   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:21:01.031940   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:21:01.031990   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:21:01.070545   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:21:01.070577   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:58.565420   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:59.064856   75137 pod_ready.go:82] duration metric: took 4m0.006397932s for pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace to be "Ready" ...
	E1204 21:20:59.064881   75137 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1204 21:20:59.064889   75137 pod_ready.go:39] duration metric: took 4m8.671233417s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:20:59.064904   75137 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:20:59.064929   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:59.064974   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:59.119318   75137 cri.go:89] found id: "8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78"
	I1204 21:20:59.119340   75137 cri.go:89] found id: ""
	I1204 21:20:59.119347   75137 logs.go:282] 1 containers: [8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78]
	I1204 21:20:59.119421   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.125106   75137 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:59.125184   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:59.159498   75137 cri.go:89] found id: "e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98"
	I1204 21:20:59.159519   75137 cri.go:89] found id: ""
	I1204 21:20:59.159526   75137 logs.go:282] 1 containers: [e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98]
	I1204 21:20:59.159572   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.163228   75137 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:59.163302   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:59.198005   75137 cri.go:89] found id: "58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78"
	I1204 21:20:59.198031   75137 cri.go:89] found id: ""
	I1204 21:20:59.198039   75137 logs.go:282] 1 containers: [58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78]
	I1204 21:20:59.198083   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.202213   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:59.202280   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:59.236775   75137 cri.go:89] found id: "e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df"
	I1204 21:20:59.236796   75137 cri.go:89] found id: ""
	I1204 21:20:59.236803   75137 logs.go:282] 1 containers: [e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df]
	I1204 21:20:59.236852   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.241518   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:59.241600   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:59.279894   75137 cri.go:89] found id: "a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5"
	I1204 21:20:59.279924   75137 cri.go:89] found id: ""
	I1204 21:20:59.279934   75137 logs.go:282] 1 containers: [a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5]
	I1204 21:20:59.279990   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.284325   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:59.284394   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:59.328082   75137 cri.go:89] found id: "982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9"
	I1204 21:20:59.328107   75137 cri.go:89] found id: ""
	I1204 21:20:59.328117   75137 logs.go:282] 1 containers: [982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9]
	I1204 21:20:59.328178   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.332337   75137 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:59.332415   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:59.368110   75137 cri.go:89] found id: ""
	I1204 21:20:59.368135   75137 logs.go:282] 0 containers: []
	W1204 21:20:59.368144   75137 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:59.368149   75137 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1204 21:20:59.368193   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1204 21:20:59.404941   75137 cri.go:89] found id: "07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317"
	I1204 21:20:59.404966   75137 cri.go:89] found id: "05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4"
	I1204 21:20:59.404972   75137 cri.go:89] found id: ""
	I1204 21:20:59.404980   75137 logs.go:282] 2 containers: [07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317 05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4]
	I1204 21:20:59.405041   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.409016   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.412752   75137 logs.go:123] Gathering logs for etcd [e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98] ...
	I1204 21:20:59.412783   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98"
	I1204 21:20:59.463143   75137 logs.go:123] Gathering logs for kube-scheduler [e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df] ...
	I1204 21:20:59.463178   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df"
	I1204 21:20:59.498782   75137 logs.go:123] Gathering logs for kube-controller-manager [982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9] ...
	I1204 21:20:59.498812   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9"
	I1204 21:20:59.555339   75137 logs.go:123] Gathering logs for storage-provisioner [07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317] ...
	I1204 21:20:59.555393   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317"
	I1204 21:20:59.591238   75137 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:59.591267   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:21:00.084121   75137 logs.go:123] Gathering logs for kubelet ...
	I1204 21:21:00.084161   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:21:00.154228   75137 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:21:00.154265   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 21:21:00.284768   75137 logs.go:123] Gathering logs for kube-apiserver [8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78] ...
	I1204 21:21:00.284802   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78"
	I1204 21:21:00.328421   75137 logs.go:123] Gathering logs for storage-provisioner [05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4] ...
	I1204 21:21:00.328452   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4"
	I1204 21:21:00.363327   75137 logs.go:123] Gathering logs for container status ...
	I1204 21:21:00.363352   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:21:00.402072   75137 logs.go:123] Gathering logs for dmesg ...
	I1204 21:21:00.402101   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:21:00.414448   75137 logs.go:123] Gathering logs for coredns [58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78] ...
	I1204 21:21:00.414471   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78"
	I1204 21:21:00.446721   75137 logs.go:123] Gathering logs for kube-proxy [a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5] ...
	I1204 21:21:00.446747   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5"
	I1204 21:20:58.265181   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:00.266303   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:00.124482   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:02.623096   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:03.620358   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:21:03.634415   75464 kubeadm.go:597] duration metric: took 4m4.247057397s to restartPrimaryControlPlane
	W1204 21:21:03.634499   75464 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1204 21:21:03.634530   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1204 21:21:02.985608   75137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:21:03.002352   75137 api_server.go:72] duration metric: took 4m20.333935611s to wait for apiserver process to appear ...
	I1204 21:21:03.002379   75137 api_server.go:88] waiting for apiserver healthz status ...
	I1204 21:21:03.002420   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:21:03.002475   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:21:03.043343   75137 cri.go:89] found id: "8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78"
	I1204 21:21:03.043387   75137 cri.go:89] found id: ""
	I1204 21:21:03.043398   75137 logs.go:282] 1 containers: [8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78]
	I1204 21:21:03.043451   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.047523   75137 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:21:03.047591   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:21:03.085843   75137 cri.go:89] found id: "e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98"
	I1204 21:21:03.085868   75137 cri.go:89] found id: ""
	I1204 21:21:03.085878   75137 logs.go:282] 1 containers: [e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98]
	I1204 21:21:03.085936   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.089957   75137 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:21:03.090008   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:21:03.124571   75137 cri.go:89] found id: "58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78"
	I1204 21:21:03.124590   75137 cri.go:89] found id: ""
	I1204 21:21:03.124597   75137 logs.go:282] 1 containers: [58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78]
	I1204 21:21:03.124633   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.128183   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:21:03.128241   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:21:03.159912   75137 cri.go:89] found id: "e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df"
	I1204 21:21:03.159935   75137 cri.go:89] found id: ""
	I1204 21:21:03.159942   75137 logs.go:282] 1 containers: [e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df]
	I1204 21:21:03.159991   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.163882   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:21:03.163934   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:21:03.202966   75137 cri.go:89] found id: "a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5"
	I1204 21:21:03.202983   75137 cri.go:89] found id: ""
	I1204 21:21:03.202990   75137 logs.go:282] 1 containers: [a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5]
	I1204 21:21:03.203028   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.206601   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:21:03.206656   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:21:03.239436   75137 cri.go:89] found id: "982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9"
	I1204 21:21:03.239461   75137 cri.go:89] found id: ""
	I1204 21:21:03.239471   75137 logs.go:282] 1 containers: [982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9]
	I1204 21:21:03.239522   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.243345   75137 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:21:03.243409   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:21:03.284225   75137 cri.go:89] found id: ""
	I1204 21:21:03.284260   75137 logs.go:282] 0 containers: []
	W1204 21:21:03.284269   75137 logs.go:284] No container was found matching "kindnet"
	I1204 21:21:03.284275   75137 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1204 21:21:03.284329   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1204 21:21:03.320487   75137 cri.go:89] found id: "07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317"
	I1204 21:21:03.320510   75137 cri.go:89] found id: "05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4"
	I1204 21:21:03.320514   75137 cri.go:89] found id: ""
	I1204 21:21:03.320520   75137 logs.go:282] 2 containers: [07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317 05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4]
	I1204 21:21:03.320572   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.324553   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.328284   75137 logs.go:123] Gathering logs for kubelet ...
	I1204 21:21:03.328307   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:21:03.398873   75137 logs.go:123] Gathering logs for kube-apiserver [8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78] ...
	I1204 21:21:03.398914   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78"
	I1204 21:21:03.452146   75137 logs.go:123] Gathering logs for kube-proxy [a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5] ...
	I1204 21:21:03.452175   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5"
	I1204 21:21:03.489830   75137 logs.go:123] Gathering logs for storage-provisioner [05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4] ...
	I1204 21:21:03.489860   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4"
	I1204 21:21:03.525086   75137 logs.go:123] Gathering logs for container status ...
	I1204 21:21:03.525115   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:21:03.569090   75137 logs.go:123] Gathering logs for kube-controller-manager [982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9] ...
	I1204 21:21:03.569123   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9"
	I1204 21:21:03.634685   75137 logs.go:123] Gathering logs for storage-provisioner [07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317] ...
	I1204 21:21:03.634714   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317"
	I1204 21:21:03.670229   75137 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:21:03.670258   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:21:04.127440   75137 logs.go:123] Gathering logs for dmesg ...
	I1204 21:21:04.127483   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:21:04.143058   75137 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:21:04.143102   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 21:21:04.254811   75137 logs.go:123] Gathering logs for etcd [e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98] ...
	I1204 21:21:04.254847   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98"
	I1204 21:21:04.310269   75137 logs.go:123] Gathering logs for coredns [58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78] ...
	I1204 21:21:04.310303   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78"
	I1204 21:21:04.344331   75137 logs.go:123] Gathering logs for kube-scheduler [e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df] ...
	I1204 21:21:04.344365   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df"
	I1204 21:21:06.883632   75137 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I1204 21:21:06.887845   75137 api_server.go:279] https://192.168.39.82:8443/healthz returned 200:
	ok
	I1204 21:21:06.888685   75137 api_server.go:141] control plane version: v1.31.2
	I1204 21:21:06.888701   75137 api_server.go:131] duration metric: took 3.886315455s to wait for apiserver health ...
	I1204 21:21:06.888708   75137 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 21:21:06.888730   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:21:06.888774   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:21:06.930295   75137 cri.go:89] found id: "8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78"
	I1204 21:21:06.930316   75137 cri.go:89] found id: ""
	I1204 21:21:06.930324   75137 logs.go:282] 1 containers: [8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78]
	I1204 21:21:06.930372   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:06.934529   75137 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:21:06.934620   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:21:06.970613   75137 cri.go:89] found id: "e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98"
	I1204 21:21:06.970641   75137 cri.go:89] found id: ""
	I1204 21:21:06.970651   75137 logs.go:282] 1 containers: [e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98]
	I1204 21:21:06.970696   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:06.974756   75137 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:21:06.974824   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:21:07.010285   75137 cri.go:89] found id: "58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78"
	I1204 21:21:07.010310   75137 cri.go:89] found id: ""
	I1204 21:21:07.010319   75137 logs.go:282] 1 containers: [58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78]
	I1204 21:21:07.010362   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:02.764114   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:04.764230   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:06.764928   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:04.623324   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:06.624331   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:08.140159   75464 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.505600399s)
	I1204 21:21:08.140254   75464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:21:08.159450   75464 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:21:08.169756   75464 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:21:08.179705   75464 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:21:08.179729   75464 kubeadm.go:157] found existing configuration files:
	
	I1204 21:21:08.179783   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:21:08.188796   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:21:08.188871   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:21:08.197758   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:21:08.206347   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:21:08.206409   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:21:08.215431   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:21:08.224674   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:21:08.224737   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:21:08.234337   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:21:08.243774   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:21:08.243833   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:21:08.253498   75464 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 21:21:08.321237   75464 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1204 21:21:08.321370   75464 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 21:21:08.458714   75464 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 21:21:08.458866   75464 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 21:21:08.459026   75464 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1204 21:21:08.639536   75464 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 21:21:08.641635   75464 out.go:235]   - Generating certificates and keys ...
	I1204 21:21:08.641739   75464 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 21:21:08.641826   75464 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 21:21:08.641935   75464 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1204 21:21:08.642068   75464 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1204 21:21:08.642175   75464 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1204 21:21:08.642223   75464 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1204 21:21:08.642498   75464 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1204 21:21:08.642914   75464 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1204 21:21:08.643567   75464 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1204 21:21:08.644276   75464 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1204 21:21:08.644502   75464 kubeadm.go:310] [certs] Using the existing "sa" key
	I1204 21:21:08.644553   75464 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 21:21:08.800107   75464 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 21:21:08.920050   75464 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 21:21:09.376869   75464 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 21:21:09.463826   75464 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 21:21:09.479167   75464 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 21:21:09.479321   75464 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 21:21:09.479434   75464 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 21:21:09.606736   75464 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 21:21:07.014564   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:21:07.014628   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:21:07.054654   75137 cri.go:89] found id: "e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df"
	I1204 21:21:07.054678   75137 cri.go:89] found id: ""
	I1204 21:21:07.054686   75137 logs.go:282] 1 containers: [e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df]
	I1204 21:21:07.054734   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:07.058625   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:21:07.058683   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:21:07.094238   75137 cri.go:89] found id: "a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5"
	I1204 21:21:07.094280   75137 cri.go:89] found id: ""
	I1204 21:21:07.094291   75137 logs.go:282] 1 containers: [a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5]
	I1204 21:21:07.094359   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:07.098427   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:21:07.098484   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:21:07.135055   75137 cri.go:89] found id: "982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9"
	I1204 21:21:07.135079   75137 cri.go:89] found id: ""
	I1204 21:21:07.135088   75137 logs.go:282] 1 containers: [982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9]
	I1204 21:21:07.135145   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:07.139488   75137 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:21:07.139564   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:21:07.175963   75137 cri.go:89] found id: ""
	I1204 21:21:07.175989   75137 logs.go:282] 0 containers: []
	W1204 21:21:07.176002   75137 logs.go:284] No container was found matching "kindnet"
	I1204 21:21:07.176009   75137 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1204 21:21:07.176069   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1204 21:21:07.212003   75137 cri.go:89] found id: "07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317"
	I1204 21:21:07.212034   75137 cri.go:89] found id: "05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4"
	I1204 21:21:07.212040   75137 cri.go:89] found id: ""
	I1204 21:21:07.212050   75137 logs.go:282] 2 containers: [07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317 05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4]
	I1204 21:21:07.212115   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:07.216184   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:07.219773   75137 logs.go:123] Gathering logs for dmesg ...
	I1204 21:21:07.219803   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:21:07.233282   75137 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:21:07.233307   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 21:21:07.341593   75137 logs.go:123] Gathering logs for etcd [e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98] ...
	I1204 21:21:07.341626   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98"
	I1204 21:21:07.393994   75137 logs.go:123] Gathering logs for kube-scheduler [e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df] ...
	I1204 21:21:07.394024   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df"
	I1204 21:21:07.437177   75137 logs.go:123] Gathering logs for storage-provisioner [07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317] ...
	I1204 21:21:07.437205   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317"
	I1204 21:21:07.469913   75137 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:21:07.469952   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:21:07.822608   75137 logs.go:123] Gathering logs for container status ...
	I1204 21:21:07.822652   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:21:07.861671   75137 logs.go:123] Gathering logs for kubelet ...
	I1204 21:21:07.861703   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:21:07.933833   75137 logs.go:123] Gathering logs for kube-apiserver [8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78] ...
	I1204 21:21:07.933876   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78"
	I1204 21:21:07.976184   75137 logs.go:123] Gathering logs for coredns [58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78] ...
	I1204 21:21:07.976215   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78"
	I1204 21:21:08.011181   75137 logs.go:123] Gathering logs for kube-proxy [a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5] ...
	I1204 21:21:08.011206   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5"
	I1204 21:21:08.053404   75137 logs.go:123] Gathering logs for kube-controller-manager [982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9] ...
	I1204 21:21:08.053430   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9"
	I1204 21:21:08.113301   75137 logs.go:123] Gathering logs for storage-provisioner [05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4] ...
	I1204 21:21:08.113402   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4"
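
The log-gathering pass above is just a series of crictl and journalctl calls against the container IDs it discovered; a rough manual equivalent on the node (assuming crictl is installed there, and reusing the IDs printed above) would be:

    # find the container ID for a component, then tail its logs
    sudo crictl ps -a --quiet --name=kube-scheduler
    sudo crictl logs --tail 400 e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df
    # kubelet and CRI-O do not run as containers, so their logs come from journald
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
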
	I1204 21:21:10.665164   75137 system_pods.go:59] 8 kube-system pods found
	I1204 21:21:10.665195   75137 system_pods.go:61] "coredns-7c65d6cfc9-ct5xn" [be113b96-b21f-4fd5-8cd9-11b149a0a838] Running
	I1204 21:21:10.665200   75137 system_pods.go:61] "etcd-embed-certs-566991" [23603883-2c42-48ff-95f5-d58f04bab630] Running
	I1204 21:21:10.665204   75137 system_pods.go:61] "kube-apiserver-embed-certs-566991" [880279d0-9c57-44b1-b223-cea07fc8552e] Running
	I1204 21:21:10.665208   75137 system_pods.go:61] "kube-controller-manager-embed-certs-566991" [1512be05-cbf1-48ca-a0a5-db1e320040e0] Running
	I1204 21:21:10.665211   75137 system_pods.go:61] "kube-proxy-4fv72" [22b84591-6767-4414-9869-9d89206a03f2] Running
	I1204 21:21:10.665215   75137 system_pods.go:61] "kube-scheduler-embed-certs-566991" [1eca2a77-0f2a-4d94-992e-22acf8f54649] Running
	I1204 21:21:10.665220   75137 system_pods.go:61] "metrics-server-6867b74b74-9vlcd" [1acb08f3-e403-458d-b3e2-e32c07da6afb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:21:10.665225   75137 system_pods.go:61] "storage-provisioner" [f8acdb07-16e7-457f-81b8-85416b849890] Running
	I1204 21:21:10.665234   75137 system_pods.go:74] duration metric: took 3.776519738s to wait for pod list to return data ...
	I1204 21:21:10.665240   75137 default_sa.go:34] waiting for default service account to be created ...
	I1204 21:21:10.667483   75137 default_sa.go:45] found service account: "default"
	I1204 21:21:10.667501   75137 default_sa.go:55] duration metric: took 2.252763ms for default service account to be created ...
	I1204 21:21:10.667508   75137 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 21:21:10.671331   75137 system_pods.go:86] 8 kube-system pods found
	I1204 21:21:10.671351   75137 system_pods.go:89] "coredns-7c65d6cfc9-ct5xn" [be113b96-b21f-4fd5-8cd9-11b149a0a838] Running
	I1204 21:21:10.671356   75137 system_pods.go:89] "etcd-embed-certs-566991" [23603883-2c42-48ff-95f5-d58f04bab630] Running
	I1204 21:21:10.671360   75137 system_pods.go:89] "kube-apiserver-embed-certs-566991" [880279d0-9c57-44b1-b223-cea07fc8552e] Running
	I1204 21:21:10.671363   75137 system_pods.go:89] "kube-controller-manager-embed-certs-566991" [1512be05-cbf1-48ca-a0a5-db1e320040e0] Running
	I1204 21:21:10.671366   75137 system_pods.go:89] "kube-proxy-4fv72" [22b84591-6767-4414-9869-9d89206a03f2] Running
	I1204 21:21:10.671386   75137 system_pods.go:89] "kube-scheduler-embed-certs-566991" [1eca2a77-0f2a-4d94-992e-22acf8f54649] Running
	I1204 21:21:10.671396   75137 system_pods.go:89] "metrics-server-6867b74b74-9vlcd" [1acb08f3-e403-458d-b3e2-e32c07da6afb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:21:10.671402   75137 system_pods.go:89] "storage-provisioner" [f8acdb07-16e7-457f-81b8-85416b849890] Running
	I1204 21:21:10.671414   75137 system_pods.go:126] duration metric: took 3.900254ms to wait for k8s-apps to be running ...
	I1204 21:21:10.671426   75137 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 21:21:10.671467   75137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:21:10.687086   75137 system_svc.go:56] duration metric: took 15.655514ms WaitForService to wait for kubelet
	I1204 21:21:10.687105   75137 kubeadm.go:582] duration metric: took 4m28.018694904s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 21:21:10.687123   75137 node_conditions.go:102] verifying NodePressure condition ...
	I1204 21:21:10.689250   75137 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 21:21:10.689267   75137 node_conditions.go:123] node cpu capacity is 2
	I1204 21:21:10.689277   75137 node_conditions.go:105] duration metric: took 2.149506ms to run NodePressure ...
	I1204 21:21:10.689287   75137 start.go:241] waiting for startup goroutines ...
	I1204 21:21:10.689296   75137 start.go:246] waiting for cluster config update ...
	I1204 21:21:10.689306   75137 start.go:255] writing updated cluster config ...
	I1204 21:21:10.689547   75137 ssh_runner.go:195] Run: rm -f paused
	I1204 21:21:10.738387   75137 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1204 21:21:10.740254   75137 out.go:177] * Done! kubectl is now configured to use "embed-certs-566991" cluster and "default" namespace by default
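
With the profile started, the cluster can be smoke-tested directly from the host; a quick check (assuming the kubectl context is named after the profile, as minikube normally arranges) would be:

    kubectl --context embed-certs-566991 get nodes
    kubectl --context embed-certs-566991 -n kube-system get pods
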
	I1204 21:21:09.608599   75464 out.go:235]   - Booting up control plane ...
	I1204 21:21:09.608729   75464 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 21:21:09.613477   75464 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 21:21:09.614444   75464 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 21:21:09.623091   75464 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 21:21:09.626249   75464 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1204 21:21:08.765095   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:10.765470   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:09.125585   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:11.624603   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:13.264238   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:15.265563   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:13.624873   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:16.123483   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:17.764078   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:19.765682   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:18.626401   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:21.125606   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:22.264711   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:24.265632   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:26.764992   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:23.623351   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:25.623547   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:27.624579   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:28.765133   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:31.264203   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:30.123937   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:32.623876   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:33.264732   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:35.765165   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:35.123685   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:37.123863   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:38.264907   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:40.265233   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:39.124651   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:40.117461   75746 pod_ready.go:82] duration metric: took 4m0.000125257s for pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace to be "Ready" ...
	E1204 21:21:40.117486   75746 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace to be "Ready" (will not retry!)
	I1204 21:21:40.117508   75746 pod_ready.go:39] duration metric: took 4m13.544219225s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:21:40.117564   75746 kubeadm.go:597] duration metric: took 4m22.244889794s to restartPrimaryControlPlane
	W1204 21:21:40.117617   75746 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1204 21:21:40.117646   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1204 21:21:42.764614   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:44.765642   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:49.627118   75464 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1204 21:21:49.627744   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:21:49.627940   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:21:47.264873   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:49.765483   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:54.628283   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:21:54.628526   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:21:52.264073   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:54.264333   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:56.267410   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:58.764653   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:00.765653   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:04.628774   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:22:04.629010   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
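
The kubelet-check failures above come from kubeadm polling the kubelet's local health endpoint; the same probe can be run by hand on the node (assuming SSH access to the VM) to see whether the kubelet ever came up:

    # the exact endpoint kubeadm polls during [wait-control-plane]
    curl -sSL http://localhost:10248/healthz
    # check the unit state and recent kubelet logs for the real failure reason
    sudo systemctl status kubelet --no-pager
    sudo journalctl -u kubelet -n 100 --no-pager
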
	I1204 21:22:06.288530   75746 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.170858751s)
	I1204 21:22:06.288613   75746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:22:06.309458   75746 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:22:06.322805   75746 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:22:06.336482   75746 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:22:06.336508   75746 kubeadm.go:157] found existing configuration files:
	
	I1204 21:22:06.336558   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1204 21:22:06.348599   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:22:06.348656   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:22:06.362232   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1204 21:22:06.379259   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:22:06.379348   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:22:06.411281   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1204 21:22:06.422033   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:22:06.422108   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:22:06.432505   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1204 21:22:06.441734   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:22:06.441789   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
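
The stale-config check above follows one pattern per file: grep the kubeconfig for the expected API endpoint and remove the file when the endpoint is missing (here every file is already absent after the reset, so each grep fails and the rm is a no-op). A condensed sketch of that loop, using the endpoint and file names from the log (the loop wrapper is illustrative, not minikube code):

    endpoint="https://control-plane.minikube.internal:8444"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done
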
	I1204 21:22:06.451237   75746 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 21:22:06.498732   75746 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1204 21:22:06.498852   75746 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 21:22:06.614368   75746 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 21:22:06.614469   75746 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 21:22:06.614599   75746 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1204 21:22:06.623454   75746 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 21:22:03.264992   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:05.765395   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:06.625133   75746 out.go:235]   - Generating certificates and keys ...
	I1204 21:22:06.625245   75746 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 21:22:06.625364   75746 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 21:22:06.625491   75746 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1204 21:22:06.625594   75746 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1204 21:22:06.625712   75746 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1204 21:22:06.625792   75746 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1204 21:22:06.625889   75746 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1204 21:22:06.625984   75746 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1204 21:22:06.626100   75746 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1204 21:22:06.626210   75746 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1204 21:22:06.626277   75746 kubeadm.go:310] [certs] Using the existing "sa" key
	I1204 21:22:06.626348   75746 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 21:22:06.726450   75746 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 21:22:06.873790   75746 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1204 21:22:07.175994   75746 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 21:22:07.250702   75746 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 21:22:07.320319   75746 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 21:22:07.320901   75746 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 21:22:07.323434   75746 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 21:22:07.325316   75746 out.go:235]   - Booting up control plane ...
	I1204 21:22:07.325446   75746 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 21:22:07.325543   75746 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 21:22:07.326549   75746 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 21:22:07.347127   75746 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 21:22:07.353453   75746 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 21:22:07.353587   75746 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 21:22:07.488768   75746 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1204 21:22:07.488952   75746 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1204 21:22:07.765784   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:10.265661   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:11.758507   75012 pod_ready.go:82] duration metric: took 4m0.000236813s for pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace to be "Ready" ...
	E1204 21:22:11.758550   75012 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace to be "Ready" (will not retry!)
	I1204 21:22:11.758567   75012 pod_ready.go:39] duration metric: took 4m14.511728433s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:22:11.758593   75012 kubeadm.go:597] duration metric: took 4m21.138454983s to restartPrimaryControlPlane
	W1204 21:22:11.758643   75012 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1204 21:22:11.758668   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1204 21:22:07.993325   75746 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 504.943417ms
	I1204 21:22:07.993405   75746 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1204 21:22:12.997741   75746 kubeadm.go:310] [api-check] The API server is healthy after 5.001906934s
	I1204 21:22:13.012187   75746 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1204 21:22:13.029586   75746 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1204 21:22:13.062375   75746 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1204 21:22:13.062633   75746 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-439360 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1204 21:22:13.077941   75746 kubeadm.go:310] [bootstrap-token] Using token: 5mut2g.pz4sir8q7093cs2b
	I1204 21:22:13.079394   75746 out.go:235]   - Configuring RBAC rules ...
	I1204 21:22:13.079556   75746 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1204 21:22:13.088458   75746 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1204 21:22:13.095952   75746 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1204 21:22:13.103530   75746 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1204 21:22:13.106875   75746 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1204 21:22:13.110658   75746 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1204 21:22:13.404565   75746 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1204 21:22:13.831997   75746 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1204 21:22:14.404650   75746 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1204 21:22:14.404678   75746 kubeadm.go:310] 
	I1204 21:22:14.404764   75746 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1204 21:22:14.404789   75746 kubeadm.go:310] 
	I1204 21:22:14.404894   75746 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1204 21:22:14.404903   75746 kubeadm.go:310] 
	I1204 21:22:14.404930   75746 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1204 21:22:14.404981   75746 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1204 21:22:14.405060   75746 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1204 21:22:14.405088   75746 kubeadm.go:310] 
	I1204 21:22:14.405203   75746 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1204 21:22:14.405216   75746 kubeadm.go:310] 
	I1204 21:22:14.405286   75746 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1204 21:22:14.405296   75746 kubeadm.go:310] 
	I1204 21:22:14.405370   75746 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1204 21:22:14.405487   75746 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1204 21:22:14.405604   75746 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1204 21:22:14.405621   75746 kubeadm.go:310] 
	I1204 21:22:14.405701   75746 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1204 21:22:14.405772   75746 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1204 21:22:14.405781   75746 kubeadm.go:310] 
	I1204 21:22:14.405853   75746 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 5mut2g.pz4sir8q7093cs2b \
	I1204 21:22:14.406000   75746 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 \
	I1204 21:22:14.406034   75746 kubeadm.go:310] 	--control-plane 
	I1204 21:22:14.406043   75746 kubeadm.go:310] 
	I1204 21:22:14.406112   75746 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1204 21:22:14.406119   75746 kubeadm.go:310] 
	I1204 21:22:14.406241   75746 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 5mut2g.pz4sir8q7093cs2b \
	I1204 21:22:14.406397   75746 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 
	I1204 21:22:14.407013   75746 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 21:22:14.407049   75746 cni.go:84] Creating CNI manager for ""
	I1204 21:22:14.407060   75746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:22:14.408949   75746 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 21:22:14.410361   75746 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 21:22:14.420749   75746 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
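
The 496-byte file copied above is minikube's bridge CNI config; its exact contents are not in the log, but a bridge + host-local conflist of roughly this shape (illustrative only, not the byte-for-byte file minikube ships) is what CRI-O expects under /etc/cni/net.d:

    sudo mkdir -p /etc/cni/net.d
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
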
	I1204 21:22:14.439214   75746 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 21:22:14.439295   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:14.439322   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-439360 minikube.k8s.io/updated_at=2024_12_04T21_22_14_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59 minikube.k8s.io/name=default-k8s-diff-port-439360 minikube.k8s.io/primary=true
	I1204 21:22:14.459582   75746 ops.go:34] apiserver oom_adj: -16
	I1204 21:22:14.637938   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:15.138980   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:15.638942   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:16.138381   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:16.638528   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:17.138320   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:17.637995   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:18.138540   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:18.638754   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:19.138113   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:19.246385   75746 kubeadm.go:1113] duration metric: took 4.807160948s to wait for elevateKubeSystemPrivileges
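
The burst of repeated 'kubectl get sa default' calls above is the elevateKubeSystemPrivileges step: minikube simply polls until the default ServiceAccount exists before the cluster-admin binding can take effect. Expressed as a plain loop (the command is taken verbatim from the log; the loop wrapper is illustrative):

    until sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
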
	I1204 21:22:19.246430   75746 kubeadm.go:394] duration metric: took 5m1.419721853s to StartCluster
	I1204 21:22:19.246455   75746 settings.go:142] acquiring lock: {Name:mk51df5708ef0b8fe125ead566b8d3e857234e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:22:19.246556   75746 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:22:19.249082   75746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/kubeconfig: {Name:mk338cb7deb77a607d0c199d94a556bdfd19bef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:22:19.249393   75746 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.171 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 21:22:19.249684   75746 config.go:182] Loaded profile config "default-k8s-diff-port-439360": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:22:19.249745   75746 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 21:22:19.249861   75746 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-439360"
	I1204 21:22:19.249884   75746 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-439360"
	W1204 21:22:19.249896   75746 addons.go:243] addon storage-provisioner should already be in state true
	I1204 21:22:19.249928   75746 host.go:66] Checking if "default-k8s-diff-port-439360" exists ...
	I1204 21:22:19.250440   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:19.250479   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:19.250557   75746 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-439360"
	I1204 21:22:19.250580   75746 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-439360"
	I1204 21:22:19.250737   75746 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-439360"
	I1204 21:22:19.250757   75746 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-439360"
	W1204 21:22:19.250765   75746 addons.go:243] addon metrics-server should already be in state true
	I1204 21:22:19.250798   75746 host.go:66] Checking if "default-k8s-diff-port-439360" exists ...
	I1204 21:22:19.251048   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:19.251091   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:19.251249   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:19.251294   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:19.251622   75746 out.go:177] * Verifying Kubernetes components...
	I1204 21:22:19.252993   75746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:22:19.269179   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44783
	I1204 21:22:19.269441   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35391
	I1204 21:22:19.269740   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:19.269833   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:19.270300   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:22:19.270324   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:19.270400   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:22:19.270418   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:19.270418   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34247
	I1204 21:22:19.270725   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:19.270832   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:19.270866   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:19.270904   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetState
	I1204 21:22:19.271326   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:22:19.271337   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:19.271415   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:19.271463   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:19.271686   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:19.272330   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:19.272388   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:19.274803   75746 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-439360"
	W1204 21:22:19.274824   75746 addons.go:243] addon default-storageclass should already be in state true
	I1204 21:22:19.274853   75746 host.go:66] Checking if "default-k8s-diff-port-439360" exists ...
	I1204 21:22:19.275234   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:19.275267   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:19.291309   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40009
	I1204 21:22:19.291961   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:19.291985   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41279
	I1204 21:22:19.292400   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:22:19.292420   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:19.292783   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:19.292833   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:19.293039   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetState
	I1204 21:22:19.293113   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36479
	I1204 21:22:19.293349   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:22:19.293362   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:19.293726   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:19.294210   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:19.294239   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:19.294431   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:19.294890   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:22:19.294908   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:19.295400   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:19.295584   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetState
	I1204 21:22:19.295720   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:22:19.297304   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:22:19.297592   75746 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:22:19.298747   75746 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1204 21:22:19.299871   75746 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:22:19.299895   75746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 21:22:19.299916   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:22:19.301582   75746 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1204 21:22:19.301598   75746 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1204 21:22:19.301612   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:22:19.303499   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:22:19.305018   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:22:19.305367   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:22:19.305393   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:22:19.305566   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:22:19.305775   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:22:19.305848   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:22:19.305869   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:22:19.305912   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:22:19.306121   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:22:19.306313   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:22:19.306389   75746 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa Username:docker}
	I1204 21:22:19.306691   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:22:19.306872   75746 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa Username:docker}
	I1204 21:22:19.314163   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42045
	I1204 21:22:19.314569   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:19.315106   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:22:19.315134   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:19.315690   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:19.315993   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetState
	I1204 21:22:19.317928   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:22:19.318171   75746 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 21:22:19.318182   75746 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 21:22:19.318195   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:22:19.321203   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:22:19.321582   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:22:19.321599   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:22:19.321855   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:22:19.322059   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:22:19.322226   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:22:19.322367   75746 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa Username:docker}
	I1204 21:22:19.522886   75746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:22:19.577656   75746 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-439360" to be "Ready" ...
	I1204 21:22:19.586712   75746 node_ready.go:49] node "default-k8s-diff-port-439360" has status "Ready":"True"
	I1204 21:22:19.586737   75746 node_ready.go:38] duration metric: took 9.034653ms for node "default-k8s-diff-port-439360" to be "Ready" ...
	I1204 21:22:19.586745   75746 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:22:19.595683   75746 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4jmcl" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:19.650177   75746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:22:19.708333   75746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 21:22:19.721106   75746 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1204 21:22:19.721151   75746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1204 21:22:19.793058   75746 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1204 21:22:19.793105   75746 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1204 21:22:19.926884   75746 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 21:22:19.926911   75746 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1204 21:22:20.028322   75746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 21:22:20.668142   75746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.017919983s)
	I1204 21:22:20.668197   75746 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:20.668200   75746 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:20.668223   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Close
	I1204 21:22:20.668211   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Close
	I1204 21:22:20.668613   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Closing plugin on server side
	I1204 21:22:20.668627   75746 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:20.668640   75746 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:20.668660   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Closing plugin on server side
	I1204 21:22:20.668687   75746 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:20.668701   75746 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:20.668710   75746 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:20.668729   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Close
	I1204 21:22:20.668663   75746 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:20.668789   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Close
	I1204 21:22:20.668936   75746 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:20.668981   75746 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:20.670242   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Closing plugin on server side
	I1204 21:22:20.670255   75746 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:20.670276   75746 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:20.713659   75746 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:20.713680   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Close
	I1204 21:22:20.714056   75746 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:20.714107   75746 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:20.714076   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Closing plugin on server side
	I1204 21:22:21.064703   75746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.03633998s)
	I1204 21:22:21.064768   75746 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:21.064783   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Close
	I1204 21:22:21.065188   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Closing plugin on server side
	I1204 21:22:21.065197   75746 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:21.065212   75746 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:21.065220   75746 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:21.065233   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Close
	I1204 21:22:21.065472   75746 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:21.065490   75746 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:21.065502   75746 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-439360"
	I1204 21:22:21.067198   75746 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1204 21:22:21.068410   75746 addons.go:510] duration metric: took 1.818663539s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1204 21:22:21.602398   75746 pod_ready.go:93] pod "coredns-7c65d6cfc9-4jmcl" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:21.602428   75746 pod_ready.go:82] duration metric: took 2.006718822s for pod "coredns-7c65d6cfc9-4jmcl" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:21.602442   75746 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tzhgh" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:24.629623   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:22:24.629860   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:22:23.610993   75746 pod_ready.go:103] pod "coredns-7c65d6cfc9-tzhgh" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:24.117785   75746 pod_ready.go:93] pod "coredns-7c65d6cfc9-tzhgh" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:24.117813   75746 pod_ready.go:82] duration metric: took 2.51536279s for pod "coredns-7c65d6cfc9-tzhgh" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:24.117824   75746 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:24.124800   75746 pod_ready.go:93] pod "etcd-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:24.124823   75746 pod_ready.go:82] duration metric: took 6.990353ms for pod "etcd-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:24.124832   75746 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:24.131040   75746 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:24.131061   75746 pod_ready.go:82] duration metric: took 6.222286ms for pod "kube-apiserver-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:24.131070   75746 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:26.137404   75746 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:26.637414   75746 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:26.637440   75746 pod_ready.go:82] duration metric: took 2.506362827s for pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:26.637452   75746 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hclwt" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:26.641759   75746 pod_ready.go:93] pod "kube-proxy-hclwt" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:26.641781   75746 pod_ready.go:82] duration metric: took 4.323262ms for pod "kube-proxy-hclwt" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:26.641793   75746 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:28.148731   75746 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:28.148753   75746 pod_ready.go:82] duration metric: took 1.50695195s for pod "kube-scheduler-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:28.148761   75746 pod_ready.go:39] duration metric: took 8.562005978s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
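
The pod_ready.go lines above poll each system-critical pod until its Ready condition reports "True". As a rough standalone illustration of that kind of readiness poll (not minikube's actual wait loop), the following client-go sketch checks one pod's Ready condition on a fixed interval; the kubeconfig path, namespace, pod name and timeout are placeholder assumptions.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Assumed kubeconfig location (~/.kube/config); minikube uses the profile's own kubeconfig.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Poll one pod (name is illustrative) every 500ms for up to 6 minutes.
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-tzhgh", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }
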
	I1204 21:22:28.148776   75746 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:22:28.148825   75746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:22:28.165983   75746 api_server.go:72] duration metric: took 8.916515972s to wait for apiserver process to appear ...
	I1204 21:22:28.166013   75746 api_server.go:88] waiting for apiserver healthz status ...
	I1204 21:22:28.166034   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:22:28.170244   75746 api_server.go:279] https://192.168.50.171:8444/healthz returned 200:
	ok
	I1204 21:22:28.171215   75746 api_server.go:141] control plane version: v1.31.2
	I1204 21:22:28.171245   75746 api_server.go:131] duration metric: took 5.223023ms to wait for apiserver health ...
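
The healthz check logged here is an HTTPS GET against the apiserver's /healthz endpoint that expects a 200 response with body "ok". A bare-bones sketch of the same probe is below; it skips certificate verification purely for brevity, whereas the real check would trust the cluster CA, and the endpoint is simply the one printed in the log above.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Endpoint taken from the log above; InsecureSkipVerify is only for this illustration.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.50.171:8444/healthz")
        if err != nil {
            fmt.Println("healthz check failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
    }
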
	I1204 21:22:28.171257   75746 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 21:22:28.177524   75746 system_pods.go:59] 9 kube-system pods found
	I1204 21:22:28.177548   75746 system_pods.go:61] "coredns-7c65d6cfc9-4jmcl" [e8d193d2-0374-43a5-addd-96cdee963cc9] Running
	I1204 21:22:28.177553   75746 system_pods.go:61] "coredns-7c65d6cfc9-tzhgh" [aafae17b-5a47-4a70-bc80-94cbbca8fe38] Running
	I1204 21:22:28.177557   75746 system_pods.go:61] "etcd-default-k8s-diff-port-439360" [e4293118-8718-4722-b6b6-722896a605e9] Running
	I1204 21:22:28.177560   75746 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-439360" [71be94bb-bd89-4f40-85eb-0a672f29d959] Running
	I1204 21:22:28.177563   75746 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-439360" [85946631-ff2a-4203-800d-00a23a3c3408] Running
	I1204 21:22:28.177567   75746 system_pods.go:61] "kube-proxy-hclwt" [eef6c093-2186-437b-9a13-c8bafbcb4f78] Running
	I1204 21:22:28.177570   75746 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-439360" [0ed74c15-2c48-4a62-8bbf-0f2a272bb119] Running
	I1204 21:22:28.177577   75746 system_pods.go:61] "metrics-server-6867b74b74-v88hj" [9b6c696c-e110-4d53-98c9-41069407b45b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:22:28.177582   75746 system_pods.go:61] "storage-provisioner" [aac88490-a422-4889-bff4-b180638846cf] Running
	I1204 21:22:28.177592   75746 system_pods.go:74] duration metric: took 6.322477ms to wait for pod list to return data ...
	I1204 21:22:28.177605   75746 default_sa.go:34] waiting for default service account to be created ...
	I1204 21:22:28.180243   75746 default_sa.go:45] found service account: "default"
	I1204 21:22:28.180262   75746 default_sa.go:55] duration metric: took 2.648929ms for default service account to be created ...
	I1204 21:22:28.180270   75746 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 21:22:28.309199   75746 system_pods.go:86] 9 kube-system pods found
	I1204 21:22:28.309229   75746 system_pods.go:89] "coredns-7c65d6cfc9-4jmcl" [e8d193d2-0374-43a5-addd-96cdee963cc9] Running
	I1204 21:22:28.309237   75746 system_pods.go:89] "coredns-7c65d6cfc9-tzhgh" [aafae17b-5a47-4a70-bc80-94cbbca8fe38] Running
	I1204 21:22:28.309244   75746 system_pods.go:89] "etcd-default-k8s-diff-port-439360" [e4293118-8718-4722-b6b6-722896a605e9] Running
	I1204 21:22:28.309251   75746 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-439360" [71be94bb-bd89-4f40-85eb-0a672f29d959] Running
	I1204 21:22:28.309257   75746 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-439360" [85946631-ff2a-4203-800d-00a23a3c3408] Running
	I1204 21:22:28.309263   75746 system_pods.go:89] "kube-proxy-hclwt" [eef6c093-2186-437b-9a13-c8bafbcb4f78] Running
	I1204 21:22:28.309269   75746 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-439360" [0ed74c15-2c48-4a62-8bbf-0f2a272bb119] Running
	I1204 21:22:28.309283   75746 system_pods.go:89] "metrics-server-6867b74b74-v88hj" [9b6c696c-e110-4d53-98c9-41069407b45b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:22:28.309295   75746 system_pods.go:89] "storage-provisioner" [aac88490-a422-4889-bff4-b180638846cf] Running
	I1204 21:22:28.309307   75746 system_pods.go:126] duration metric: took 129.030872ms to wait for k8s-apps to be running ...
	I1204 21:22:28.309320   75746 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 21:22:28.309379   75746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:22:28.324307   75746 system_svc.go:56] duration metric: took 14.979432ms WaitForService to wait for kubelet
	I1204 21:22:28.324336   75746 kubeadm.go:582] duration metric: took 9.074873675s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 21:22:28.324353   75746 node_conditions.go:102] verifying NodePressure condition ...
	I1204 21:22:28.507218   75746 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 21:22:28.507245   75746 node_conditions.go:123] node cpu capacity is 2
	I1204 21:22:28.507256   75746 node_conditions.go:105] duration metric: took 182.898538ms to run NodePressure ...
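
The NodePressure step reads each node's reported capacity (CPU, ephemeral storage) and its pressure conditions. A compact client-go sketch of reading those same fields is shown below, under the same kubeconfig assumption as the earlier sketch; it is illustrative, not minikube's node_conditions implementation.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
                n.Name,
                n.Status.Capacity.Cpu().String(),
                n.Status.Capacity.StorageEphemeral().String())
            for _, c := range n.Status.Conditions {
                // MemoryPressure / DiskPressure / PIDPressure should all be False on a healthy node.
                if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure || c.Type == corev1.NodePIDPressure {
                    fmt.Printf("  %s=%s\n", c.Type, c.Status)
                }
            }
        }
    }
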
	I1204 21:22:28.507268   75746 start.go:241] waiting for startup goroutines ...
	I1204 21:22:28.507277   75746 start.go:246] waiting for cluster config update ...
	I1204 21:22:28.507291   75746 start.go:255] writing updated cluster config ...
	I1204 21:22:28.507595   75746 ssh_runner.go:195] Run: rm -f paused
	I1204 21:22:28.556033   75746 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1204 21:22:28.557819   75746 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-439360" cluster and "default" namespace by default
	I1204 21:22:37.891653   75012 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.132950428s)
	I1204 21:22:37.891741   75012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:22:37.906656   75012 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:22:37.915649   75012 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:22:37.925588   75012 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:22:37.925609   75012 kubeadm.go:157] found existing configuration files:
	
	I1204 21:22:37.925655   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:22:37.934524   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:22:37.934575   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:22:37.943390   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:22:37.951745   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:22:37.951797   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:22:37.960501   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:22:37.969208   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:22:37.969254   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:22:37.978350   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:22:37.986861   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:22:37.986930   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
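
The cleanup sequence above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes any file that is missing or does not mention it, so that the subsequent kubeadm init regenerates fresh configs. A small local sketch of that decision logic follows; the real code runs the equivalent grep/rm over SSH via ssh_runner, and the endpoint string is the one shown in the log.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        confs := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, path := range confs {
            data, err := os.ReadFile(path)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Stale or missing config: remove it so `kubeadm init` writes a fresh one.
                os.Remove(path)
                fmt.Println("removed stale config:", path)
            }
        }
    }
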
	I1204 21:22:37.995584   75012 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 21:22:38.047149   75012 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1204 21:22:38.047224   75012 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 21:22:38.155964   75012 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 21:22:38.156086   75012 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 21:22:38.156215   75012 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1204 21:22:38.164743   75012 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 21:22:38.166662   75012 out.go:235]   - Generating certificates and keys ...
	I1204 21:22:38.166755   75012 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 21:22:38.166837   75012 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 21:22:38.166935   75012 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1204 21:22:38.167045   75012 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1204 21:22:38.167154   75012 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1204 21:22:38.167230   75012 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1204 21:22:38.167325   75012 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1204 21:22:38.167446   75012 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1204 21:22:38.169398   75012 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1204 21:22:38.169495   75012 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1204 21:22:38.169530   75012 kubeadm.go:310] [certs] Using the existing "sa" key
	I1204 21:22:38.169602   75012 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 21:22:38.350215   75012 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 21:22:38.469586   75012 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1204 21:22:38.636991   75012 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 21:22:38.883785   75012 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 21:22:39.014632   75012 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 21:22:39.015041   75012 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 21:22:39.017806   75012 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 21:22:39.019631   75012 out.go:235]   - Booting up control plane ...
	I1204 21:22:39.019760   75012 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 21:22:39.019831   75012 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 21:22:39.019895   75012 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 21:22:39.037352   75012 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 21:22:39.044419   75012 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 21:22:39.044489   75012 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 21:22:39.166636   75012 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1204 21:22:39.166782   75012 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1204 21:22:39.667748   75012 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.068181ms
	I1204 21:22:39.667876   75012 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1204 21:22:44.669497   75012 kubeadm.go:310] [api-check] The API server is healthy after 5.001931003s
	I1204 21:22:44.682282   75012 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1204 21:22:44.700056   75012 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1204 21:22:44.745563   75012 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1204 21:22:44.745769   75012 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-534766 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1204 21:22:44.761584   75012 kubeadm.go:310] [bootstrap-token] Using token: 5m2kn8.vv0jgg4evfqo8hls
	I1204 21:22:44.762802   75012 out.go:235]   - Configuring RBAC rules ...
	I1204 21:22:44.762937   75012 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1204 21:22:44.770305   75012 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1204 21:22:44.787448   75012 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1204 21:22:44.799071   75012 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1204 21:22:44.809995   75012 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1204 21:22:44.818871   75012 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1204 21:22:45.078465   75012 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1204 21:22:45.505737   75012 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1204 21:22:46.080197   75012 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1204 21:22:46.082632   75012 kubeadm.go:310] 
	I1204 21:22:46.082728   75012 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1204 21:22:46.082738   75012 kubeadm.go:310] 
	I1204 21:22:46.082852   75012 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1204 21:22:46.082877   75012 kubeadm.go:310] 
	I1204 21:22:46.082913   75012 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1204 21:22:46.083002   75012 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1204 21:22:46.083084   75012 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1204 21:22:46.083094   75012 kubeadm.go:310] 
	I1204 21:22:46.083188   75012 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1204 21:22:46.083198   75012 kubeadm.go:310] 
	I1204 21:22:46.083270   75012 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1204 21:22:46.083280   75012 kubeadm.go:310] 
	I1204 21:22:46.083365   75012 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1204 21:22:46.083505   75012 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1204 21:22:46.083603   75012 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1204 21:22:46.083612   75012 kubeadm.go:310] 
	I1204 21:22:46.083722   75012 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1204 21:22:46.083831   75012 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1204 21:22:46.083844   75012 kubeadm.go:310] 
	I1204 21:22:46.083955   75012 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5m2kn8.vv0jgg4evfqo8hls \
	I1204 21:22:46.084090   75012 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 \
	I1204 21:22:46.084132   75012 kubeadm.go:310] 	--control-plane 
	I1204 21:22:46.084143   75012 kubeadm.go:310] 
	I1204 21:22:46.084271   75012 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1204 21:22:46.084285   75012 kubeadm.go:310] 
	I1204 21:22:46.084381   75012 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5m2kn8.vv0jgg4evfqo8hls \
	I1204 21:22:46.084540   75012 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 
	I1204 21:22:46.085547   75012 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 21:22:46.085585   75012 cni.go:84] Creating CNI manager for ""
	I1204 21:22:46.085601   75012 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:22:46.087147   75012 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 21:22:46.088445   75012 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 21:22:46.099655   75012 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
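
The 496-byte /etc/cni/net.d/1-k8s.conflist pushed here configures the standard bridge CNI plugin that the "Configuring bridge CNI" step refers to. The log does not show the file's contents; the sketch below writes a typical bridge-plus-portmap conflist of this kind, with field values that are assumptions for illustration rather than minikube's actual file.

    package main

    import "os"

    func main() {
        // Illustrative bridge CNI config; values are assumed, not the logged 496-byte file.
        conflist := `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }
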
	I1204 21:22:46.118054   75012 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 21:22:46.118167   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:46.118199   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-534766 minikube.k8s.io/updated_at=2024_12_04T21_22_46_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59 minikube.k8s.io/name=no-preload-534766 minikube.k8s.io/primary=true
	I1204 21:22:46.314262   75012 ops.go:34] apiserver oom_adj: -16
	I1204 21:22:46.314459   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:46.814509   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:47.315367   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:47.814575   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:48.314571   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:48.815342   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:49.315465   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:49.814618   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:49.924235   75012 kubeadm.go:1113] duration metric: took 3.806131818s to wait for elevateKubeSystemPrivileges
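
elevateKubeSystemPrivileges creates the minikube-rbac cluster-admin binding and then, as the repeated ~500ms `kubectl get sa default` calls above show, waits until the default ServiceAccount exists. The same wait expressed directly against the API with client-go might look like the sketch below; the kubeconfig path and timeout are assumptions, since minikube issues the kubectl command over SSH against the node's own kubeconfig instead.

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Mirror the repeated `kubectl get sa default` calls: poll until the SA exists.
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            if _, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{}); err == nil {
                fmt.Println("default service account exists")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for the default service account")
    }
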
	I1204 21:22:49.924281   75012 kubeadm.go:394] duration metric: took 4m59.352297592s to StartCluster
	I1204 21:22:49.924304   75012 settings.go:142] acquiring lock: {Name:mk51df5708ef0b8fe125ead566b8d3e857234e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:22:49.924410   75012 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:22:49.926022   75012 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/kubeconfig: {Name:mk338cb7deb77a607d0c199d94a556bdfd19bef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:22:49.926265   75012 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.174 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 21:22:49.926337   75012 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 21:22:49.926474   75012 addons.go:69] Setting storage-provisioner=true in profile "no-preload-534766"
	I1204 21:22:49.926483   75012 config.go:182] Loaded profile config "no-preload-534766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:22:49.926496   75012 addons.go:234] Setting addon storage-provisioner=true in "no-preload-534766"
	W1204 21:22:49.926508   75012 addons.go:243] addon storage-provisioner should already be in state true
	I1204 21:22:49.926505   75012 addons.go:69] Setting default-storageclass=true in profile "no-preload-534766"
	I1204 21:22:49.926531   75012 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-534766"
	I1204 21:22:49.926546   75012 host.go:66] Checking if "no-preload-534766" exists ...
	I1204 21:22:49.926541   75012 addons.go:69] Setting metrics-server=true in profile "no-preload-534766"
	I1204 21:22:49.926576   75012 addons.go:234] Setting addon metrics-server=true in "no-preload-534766"
	W1204 21:22:49.926590   75012 addons.go:243] addon metrics-server should already be in state true
	I1204 21:22:49.926625   75012 host.go:66] Checking if "no-preload-534766" exists ...
	I1204 21:22:49.926930   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:49.926954   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:49.926970   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:49.926955   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:49.926987   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:49.927051   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:49.927780   75012 out.go:177] * Verifying Kubernetes components...
	I1204 21:22:49.929162   75012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:22:49.942741   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46577
	I1204 21:22:49.943289   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:49.943868   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:22:49.943895   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:49.944251   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:49.944864   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:49.944913   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:49.946622   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34645
	I1204 21:22:49.946621   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40019
	I1204 21:22:49.947114   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:49.947241   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:49.947744   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:22:49.947765   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:49.947882   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:22:49.947906   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:49.948103   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:49.948432   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:49.948645   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetState
	I1204 21:22:49.948791   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:49.948837   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:49.952327   75012 addons.go:234] Setting addon default-storageclass=true in "no-preload-534766"
	W1204 21:22:49.952346   75012 addons.go:243] addon default-storageclass should already be in state true
	I1204 21:22:49.952369   75012 host.go:66] Checking if "no-preload-534766" exists ...
	I1204 21:22:49.952601   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:49.952630   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:49.961451   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46229
	I1204 21:22:49.961850   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:49.962443   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:22:49.962464   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:49.962850   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:49.963027   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetState
	I1204 21:22:49.964897   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:22:49.968079   75012 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1204 21:22:49.968412   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34167
	I1204 21:22:49.968752   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34915
	I1204 21:22:49.968941   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:49.969158   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:49.969388   75012 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1204 21:22:49.969407   75012 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1204 21:22:49.969427   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:22:49.969542   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:22:49.969565   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:49.969628   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:22:49.969642   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:49.969957   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:49.970113   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetState
	I1204 21:22:49.970170   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:49.970694   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:49.970730   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:49.972032   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:22:49.973317   75012 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:22:49.973481   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:22:49.973907   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:22:49.973928   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:22:49.974221   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:22:49.974387   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:22:49.974545   75012 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:22:49.974560   75012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 21:22:49.974577   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:22:49.974673   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:22:49.974849   75012 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:22:49.977139   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:22:49.977453   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:22:49.977472   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:22:49.977620   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:22:49.977765   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:22:49.977906   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:22:49.978085   75012 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:22:50.003630   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33713
	I1204 21:22:50.004065   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:50.004600   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:22:50.004624   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:50.004954   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:50.005133   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetState
	I1204 21:22:50.006743   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:22:50.006952   75012 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 21:22:50.006969   75012 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 21:22:50.006986   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:22:50.009741   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:22:50.010114   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:22:50.010169   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:22:50.010347   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:22:50.010522   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:22:50.010699   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:22:50.010868   75012 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:22:50.114285   75012 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:22:50.136173   75012 node_ready.go:35] waiting up to 6m0s for node "no-preload-534766" to be "Ready" ...
	I1204 21:22:50.146304   75012 node_ready.go:49] node "no-preload-534766" has status "Ready":"True"
	I1204 21:22:50.146333   75012 node_ready.go:38] duration metric: took 10.115051ms for node "no-preload-534766" to be "Ready" ...
	I1204 21:22:50.146344   75012 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:22:50.156660   75012 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:50.205793   75012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:22:50.222880   75012 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1204 21:22:50.222904   75012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1204 21:22:50.259999   75012 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1204 21:22:50.260022   75012 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1204 21:22:50.271653   75012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 21:22:50.295271   75012 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 21:22:50.295301   75012 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1204 21:22:50.371390   75012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 21:22:50.923825   75012 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:50.923850   75012 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:22:50.923889   75012 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:50.923916   75012 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:22:50.924309   75012 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:50.924319   75012 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:50.924327   75012 main.go:141] libmachine: (no-preload-534766) DBG | Closing plugin on server side
	I1204 21:22:50.924328   75012 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:50.924335   75012 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:50.924347   75012 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:50.924354   75012 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:22:50.924357   75012 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:50.924367   75012 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:22:50.924574   75012 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:50.924590   75012 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:50.926209   75012 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:50.926224   75012 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:50.926254   75012 main.go:141] libmachine: (no-preload-534766) DBG | Closing plugin on server side
	I1204 21:22:50.943266   75012 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:50.943283   75012 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:22:50.943613   75012 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:50.943626   75012 main.go:141] libmachine: (no-preload-534766) DBG | Closing plugin on server side
	I1204 21:22:50.943633   75012 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:51.434449   75012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.063018778s)
	I1204 21:22:51.434501   75012 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:51.434516   75012 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:22:51.434935   75012 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:51.434961   75012 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:51.434973   75012 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:51.434982   75012 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:22:51.434989   75012 main.go:141] libmachine: (no-preload-534766) DBG | Closing plugin on server side
	I1204 21:22:51.435279   75012 main.go:141] libmachine: (no-preload-534766) DBG | Closing plugin on server side
	I1204 21:22:51.435314   75012 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:51.435327   75012 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:51.435338   75012 addons.go:475] Verifying addon metrics-server=true in "no-preload-534766"
	I1204 21:22:51.437110   75012 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1204 21:22:51.438430   75012 addons.go:510] duration metric: took 1.51209932s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1204 21:22:52.163208   75012 pod_ready.go:103] pod "etcd-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:54.166268   75012 pod_ready.go:103] pod "etcd-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:55.663847   75012 pod_ready.go:93] pod "etcd-no-preload-534766" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:55.663873   75012 pod_ready.go:82] duration metric: took 5.507184169s for pod "etcd-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:55.663883   75012 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:57.669991   75012 pod_ready.go:103] pod "kube-apiserver-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:58.669891   75012 pod_ready.go:93] pod "kube-apiserver-no-preload-534766" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:58.669913   75012 pod_ready.go:82] duration metric: took 3.006024495s for pod "kube-apiserver-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:58.669923   75012 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:58.674408   75012 pod_ready.go:93] pod "kube-controller-manager-no-preload-534766" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:58.674431   75012 pod_ready.go:82] duration metric: took 4.502433ms for pod "kube-controller-manager-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:58.674441   75012 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:58.678736   75012 pod_ready.go:93] pod "kube-scheduler-no-preload-534766" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:58.678761   75012 pod_ready.go:82] duration metric: took 4.313122ms for pod "kube-scheduler-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:58.678771   75012 pod_ready.go:39] duration metric: took 8.532413995s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:22:58.678791   75012 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:22:58.678847   75012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:22:58.695623   75012 api_server.go:72] duration metric: took 8.769328765s to wait for apiserver process to appear ...
	I1204 21:22:58.695654   75012 api_server.go:88] waiting for apiserver healthz status ...
	I1204 21:22:58.695675   75012 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I1204 21:22:58.699892   75012 api_server.go:279] https://192.168.61.174:8443/healthz returned 200:
	ok
	I1204 21:22:58.700759   75012 api_server.go:141] control plane version: v1.31.2
	I1204 21:22:58.700776   75012 api_server.go:131] duration metric: took 5.115741ms to wait for apiserver health ...
	I1204 21:22:58.700783   75012 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 21:22:58.705822   75012 system_pods.go:59] 9 kube-system pods found
	I1204 21:22:58.705845   75012 system_pods.go:61] "coredns-7c65d6cfc9-9llkt" [adc8b2dd-be84-4314-ae3c-cfe94cc78489] Running
	I1204 21:22:58.705850   75012 system_pods.go:61] "coredns-7c65d6cfc9-zq88f" [b4b818bf-71d4-4522-8d3f-15c878eb7e37] Running
	I1204 21:22:58.705854   75012 system_pods.go:61] "etcd-no-preload-534766" [dfebd8ce-bf78-4219-a860-7e0275651a27] Running
	I1204 21:22:58.705858   75012 system_pods.go:61] "kube-apiserver-no-preload-534766" [6d8632fe-4a7d-48f0-9de5-bbc8efa027cd] Running
	I1204 21:22:58.705862   75012 system_pods.go:61] "kube-controller-manager-no-preload-534766" [1fcb311c-17ee-40ab-8126-3f9aeb565c23] Running
	I1204 21:22:58.705865   75012 system_pods.go:61] "kube-proxy-z2n69" [ea030ab5-1808-4037-b153-e751d66f3882] Running
	I1204 21:22:58.705870   75012 system_pods.go:61] "kube-scheduler-no-preload-534766" [ee51023a-795d-49f9-ae03-535038decf43] Running
	I1204 21:22:58.705876   75012 system_pods.go:61] "metrics-server-6867b74b74-24lj8" [1e4467c4-301a-4820-ab89-e1f0ba78f62d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:22:58.705883   75012 system_pods.go:61] "storage-provisioner" [38fa420a-4372-41b4-9853-64796baa65d9] Running
	I1204 21:22:58.705888   75012 system_pods.go:74] duration metric: took 5.100414ms to wait for pod list to return data ...
	I1204 21:22:58.705897   75012 default_sa.go:34] waiting for default service account to be created ...
	I1204 21:22:58.708729   75012 default_sa.go:45] found service account: "default"
	I1204 21:22:58.708746   75012 default_sa.go:55] duration metric: took 2.844325ms for default service account to be created ...
	I1204 21:22:58.708753   75012 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 21:22:58.713584   75012 system_pods.go:86] 9 kube-system pods found
	I1204 21:22:58.713605   75012 system_pods.go:89] "coredns-7c65d6cfc9-9llkt" [adc8b2dd-be84-4314-ae3c-cfe94cc78489] Running
	I1204 21:22:58.713610   75012 system_pods.go:89] "coredns-7c65d6cfc9-zq88f" [b4b818bf-71d4-4522-8d3f-15c878eb7e37] Running
	I1204 21:22:58.713614   75012 system_pods.go:89] "etcd-no-preload-534766" [dfebd8ce-bf78-4219-a860-7e0275651a27] Running
	I1204 21:22:58.713617   75012 system_pods.go:89] "kube-apiserver-no-preload-534766" [6d8632fe-4a7d-48f0-9de5-bbc8efa027cd] Running
	I1204 21:22:58.713623   75012 system_pods.go:89] "kube-controller-manager-no-preload-534766" [1fcb311c-17ee-40ab-8126-3f9aeb565c23] Running
	I1204 21:22:58.713627   75012 system_pods.go:89] "kube-proxy-z2n69" [ea030ab5-1808-4037-b153-e751d66f3882] Running
	I1204 21:22:58.713630   75012 system_pods.go:89] "kube-scheduler-no-preload-534766" [ee51023a-795d-49f9-ae03-535038decf43] Running
	I1204 21:22:58.713636   75012 system_pods.go:89] "metrics-server-6867b74b74-24lj8" [1e4467c4-301a-4820-ab89-e1f0ba78f62d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:22:58.713640   75012 system_pods.go:89] "storage-provisioner" [38fa420a-4372-41b4-9853-64796baa65d9] Running
	I1204 21:22:58.713649   75012 system_pods.go:126] duration metric: took 4.892413ms to wait for k8s-apps to be running ...
	I1204 21:22:58.713655   75012 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 21:22:58.713694   75012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:22:58.727642   75012 system_svc.go:56] duration metric: took 13.980011ms WaitForService to wait for kubelet
	I1204 21:22:58.727667   75012 kubeadm.go:582] duration metric: took 8.80137456s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 21:22:58.727683   75012 node_conditions.go:102] verifying NodePressure condition ...
	I1204 21:22:58.730401   75012 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 21:22:58.730424   75012 node_conditions.go:123] node cpu capacity is 2
	I1204 21:22:58.730437   75012 node_conditions.go:105] duration metric: took 2.748662ms to run NodePressure ...
	I1204 21:22:58.730450   75012 start.go:241] waiting for startup goroutines ...
	I1204 21:22:58.730460   75012 start.go:246] waiting for cluster config update ...
	I1204 21:22:58.730472   75012 start.go:255] writing updated cluster config ...
	I1204 21:22:58.730773   75012 ssh_runner.go:195] Run: rm -f paused
	I1204 21:22:58.776977   75012 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1204 21:22:58.778544   75012 out.go:177] * Done! kubectl is now configured to use "no-preload-534766" cluster and "default" namespace by default
	I1204 21:23:04.631416   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:23:04.631710   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:23:04.631725   75464 kubeadm.go:310] 
	I1204 21:23:04.631799   75464 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1204 21:23:04.631878   75464 kubeadm.go:310] 		timed out waiting for the condition
	I1204 21:23:04.631890   75464 kubeadm.go:310] 
	I1204 21:23:04.631961   75464 kubeadm.go:310] 	This error is likely caused by:
	I1204 21:23:04.632036   75464 kubeadm.go:310] 		- The kubelet is not running
	I1204 21:23:04.632198   75464 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1204 21:23:04.632215   75464 kubeadm.go:310] 
	I1204 21:23:04.632383   75464 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1204 21:23:04.632461   75464 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1204 21:23:04.632516   75464 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1204 21:23:04.632528   75464 kubeadm.go:310] 
	I1204 21:23:04.632675   75464 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1204 21:23:04.632796   75464 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1204 21:23:04.632815   75464 kubeadm.go:310] 
	I1204 21:23:04.632974   75464 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1204 21:23:04.633074   75464 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1204 21:23:04.633176   75464 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1204 21:23:04.633304   75464 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1204 21:23:04.633322   75464 kubeadm.go:310] 
	I1204 21:23:04.634981   75464 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 21:23:04.635061   75464 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1204 21:23:04.635118   75464 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1204 21:23:04.635222   75464 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
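The checks suggested in the error above can be run directly on the affected VM (for example via `minikube ssh`); a minimal sketch using only the commands named in the message:

	# kubelet service state and recent kubelet logs
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 100
	# any control-plane containers the runtime managed to start
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause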
	
	I1204 21:23:04.635272   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1204 21:23:05.103010   75464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:23:05.116784   75464 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:23:05.126269   75464 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:23:05.126290   75464 kubeadm.go:157] found existing configuration files:
	
	I1204 21:23:05.126331   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:23:05.134867   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:23:05.134919   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:23:05.143682   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:23:05.151701   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:23:05.151766   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:23:05.160033   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:23:05.168125   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:23:05.168175   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:23:05.176976   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:23:05.185549   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:23:05.185592   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
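The sequence above is the stale-config cleanup: each file under /etc/kubernetes is grepped for the expected control-plane endpoint and removed if it does not reference it (here every grep exits with status 2 because kubeadm reset already deleted the files). A shell sketch of that pattern, shown for illustration rather than as the actual minikube implementation:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # keep only configs that point at the expected endpoint; remove the rest
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done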
	I1204 21:23:05.194156   75464 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 21:23:05.394966   75464 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 21:25:01.433781   75464 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1204 21:25:01.433941   75464 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1204 21:25:01.434011   75464 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1204 21:25:01.434069   75464 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 21:25:01.434170   75464 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 21:25:01.434315   75464 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 21:25:01.434431   75464 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1204 21:25:01.434514   75464 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 21:25:01.436334   75464 out.go:235]   - Generating certificates and keys ...
	I1204 21:25:01.436408   75464 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 21:25:01.436482   75464 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 21:25:01.436550   75464 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1204 21:25:01.436644   75464 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1204 21:25:01.436745   75464 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1204 21:25:01.436819   75464 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1204 21:25:01.436885   75464 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1204 21:25:01.436942   75464 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1204 21:25:01.437004   75464 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1204 21:25:01.437068   75464 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1204 21:25:01.437101   75464 kubeadm.go:310] [certs] Using the existing "sa" key
	I1204 21:25:01.437150   75464 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 21:25:01.437193   75464 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 21:25:01.437239   75464 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 21:25:01.437309   75464 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 21:25:01.437370   75464 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 21:25:01.437458   75464 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 21:25:01.437568   75464 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 21:25:01.437636   75464 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 21:25:01.437701   75464 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 21:25:01.439149   75464 out.go:235]   - Booting up control plane ...
	I1204 21:25:01.439251   75464 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 21:25:01.439347   75464 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 21:25:01.439457   75464 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 21:25:01.439531   75464 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 21:25:01.439672   75464 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1204 21:25:01.439736   75464 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1204 21:25:01.439798   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:25:01.439966   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:25:01.440044   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:25:01.440205   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:25:01.440259   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:25:01.440487   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:25:01.440578   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:25:01.440768   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:25:01.440835   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:25:01.440991   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:25:01.441006   75464 kubeadm.go:310] 
	I1204 21:25:01.441043   75464 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1204 21:25:01.441078   75464 kubeadm.go:310] 		timed out waiting for the condition
	I1204 21:25:01.441084   75464 kubeadm.go:310] 
	I1204 21:25:01.441114   75464 kubeadm.go:310] 	This error is likely caused by:
	I1204 21:25:01.441143   75464 kubeadm.go:310] 		- The kubelet is not running
	I1204 21:25:01.441233   75464 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1204 21:25:01.441242   75464 kubeadm.go:310] 
	I1204 21:25:01.441335   75464 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1204 21:25:01.441369   75464 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1204 21:25:01.441403   75464 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1204 21:25:01.441410   75464 kubeadm.go:310] 
	I1204 21:25:01.441503   75464 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1204 21:25:01.441602   75464 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1204 21:25:01.441610   75464 kubeadm.go:310] 
	I1204 21:25:01.441705   75464 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1204 21:25:01.441779   75464 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1204 21:25:01.441857   75464 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1204 21:25:01.441934   75464 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1204 21:25:01.441961   75464 kubeadm.go:310] 
	I1204 21:25:01.442011   75464 kubeadm.go:394] duration metric: took 8m2.105750462s to StartCluster
	I1204 21:25:01.442050   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:25:01.442119   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:25:01.484552   75464 cri.go:89] found id: ""
	I1204 21:25:01.484582   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.484606   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:25:01.484614   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:25:01.484681   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:25:01.517972   75464 cri.go:89] found id: ""
	I1204 21:25:01.517999   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.518007   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:25:01.518013   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:25:01.518078   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:25:01.555068   75464 cri.go:89] found id: ""
	I1204 21:25:01.555096   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.555104   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:25:01.555110   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:25:01.555163   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:25:01.595425   75464 cri.go:89] found id: ""
	I1204 21:25:01.595456   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.595478   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:25:01.595486   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:25:01.595553   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:25:01.634608   75464 cri.go:89] found id: ""
	I1204 21:25:01.634638   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.634648   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:25:01.634656   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:25:01.634721   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:25:01.668685   75464 cri.go:89] found id: ""
	I1204 21:25:01.668724   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.668737   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:25:01.668746   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:25:01.668810   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:25:01.701497   75464 cri.go:89] found id: ""
	I1204 21:25:01.701531   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.701543   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:25:01.701550   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:25:01.701612   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:25:01.735347   75464 cri.go:89] found id: ""
	I1204 21:25:01.735401   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.735413   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:25:01.735429   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:25:01.735448   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:25:01.785951   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:25:01.785994   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:25:01.800795   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:25:01.800822   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:25:01.878636   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:25:01.878663   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:25:01.878675   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:25:01.982526   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:25:01.982563   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1204 21:25:02.037006   75464 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1204 21:25:02.037075   75464 out.go:270] * 
	W1204 21:25:02.037160   75464 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1204 21:25:02.037181   75464 out.go:270] * 
	W1204 21:25:02.038380   75464 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 21:25:02.041871   75464 out.go:201] 
	W1204 21:25:02.042973   75464 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1204 21:25:02.043035   75464 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1204 21:25:02.043065   75464 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1204 21:25:02.044498   75464 out.go:201] 
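A minimal sketch of the remediation suggested above (the --extra-config flag is taken verbatim from the suggestion; the kubelet-log check mirrors the hint in the same message, and any other start flags used by this test run are omitted):

	# inspect the kubelet logs first, as the suggestion recommends
	minikube ssh -- sudo journalctl -xeu kubelet | tail -n 100
	# retry the start with the kubelet cgroup driver pinned to systemd
	minikube start --extra-config=kubelet.cgroup-driver=systemd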
	
	
	==> CRI-O <==
	Dec 04 21:30:12 embed-certs-566991 crio[714]: time="2024-12-04 21:30:12.670886996Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347812670867595,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0745ea8a-0224-45bb-8028-ff6ba6f2ba1a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:30:12 embed-certs-566991 crio[714]: time="2024-12-04 21:30:12.671412526Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d9ae561e-c069-4923-ad9b-a75a4e9fe47e name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:30:12 embed-certs-566991 crio[714]: time="2024-12-04 21:30:12.671464877Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d9ae561e-c069-4923-ad9b-a75a4e9fe47e name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:30:12 embed-certs-566991 crio[714]: time="2024-12-04 21:30:12.671657642Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317,PodSandboxId:c7004039fe1db8e9729ca1177cf50c49c546abce639e2c5af26210d69eec7e2c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733347031700213364,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8acdb07-16e7-457f-81b8-85416b849890,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcdf86fdcbf9bd8a144fff19857f6fb26d62fe7eb52809e9b0fd81f8d41222e6,PodSandboxId:f0080f4d4bd91d17758afd4f1cd9ace3a8edf7607b4dbeca50c05bbbf7ea3e2a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733347010789616306,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a3be8d42-19bc-4bfc-be9a-bf74020438e1,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78,PodSandboxId:3b1932814b83214d86d7c57fd7aa32f8925c8f3a985df8ac7eb1a12f9eb241b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733347008602299697,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ct5xn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be113b96-b21f-4fd5-8cd9-11b149a0a838,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5,PodSandboxId:ee3eaf224a6f9cc1761003899c7ec1c7708f55a234325a51f5f6a725cf136038,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733347000937256830,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4fv72,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22b84591-6767-4414-9
869-9d89206a03f2,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4,PodSandboxId:c7004039fe1db8e9729ca1177cf50c49c546abce639e2c5af26210d69eec7e2c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733347000879361773,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8acdb07-16e7-457f-81b8-85416b849
890,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df,PodSandboxId:5a9f3a9d07f72918557f5db4e8b88fd5df0ca4509cc0ec45acd670fad1a8939e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733346997194682947,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-566991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73e62008d66cd7b9cafa6ab59f4f7953,},Annota
tions:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78,PodSandboxId:6cc002d291a622d58f9b0b04ea2fa8ff34c0131238099b1f091f64d47b9ef684,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733346997203377526,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-566991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d0a0ccd7666b4a3fcfee2085c0019a8,},Annotations:map[st
ring]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98,PodSandboxId:6ca74363b0202187d622a59232287afc7c3bf09a71fdee69bfd670bce87e1e41,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733346997182362942,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-566991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df778708fa62b24cccc5f735818ef924,},Annotations:map[string]string{io.kubernetes.container.hash:
cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9,PodSandboxId:ae0fe4aa2b20e871d3c330515322b68a258a5887520bc1d94daa5d338f934257,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733346997172630584,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-566991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a59cd4f639fa768deec1685c42bbf280,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d9ae561e-c069-4923-ad9b-a75a4e9fe47e name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:30:12 embed-certs-566991 crio[714]: time="2024-12-04 21:30:12.705466896Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aa3d3e08-041e-403c-a03f-dabd2f7d4372 name=/runtime.v1.RuntimeService/Version
	Dec 04 21:30:12 embed-certs-566991 crio[714]: time="2024-12-04 21:30:12.705541458Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aa3d3e08-041e-403c-a03f-dabd2f7d4372 name=/runtime.v1.RuntimeService/Version
	Dec 04 21:30:12 embed-certs-566991 crio[714]: time="2024-12-04 21:30:12.706455596Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=299f27a3-0dd2-480c-aefc-11f0e2f28c84 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:30:12 embed-certs-566991 crio[714]: time="2024-12-04 21:30:12.706903097Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347812706880150,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=299f27a3-0dd2-480c-aefc-11f0e2f28c84 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:30:12 embed-certs-566991 crio[714]: time="2024-12-04 21:30:12.707335702Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=410bff18-00ab-4ff3-b97a-e772513ac4d4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:30:12 embed-certs-566991 crio[714]: time="2024-12-04 21:30:12.707399642Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=410bff18-00ab-4ff3-b97a-e772513ac4d4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:30:12 embed-certs-566991 crio[714]: time="2024-12-04 21:30:12.707590030Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317,PodSandboxId:c7004039fe1db8e9729ca1177cf50c49c546abce639e2c5af26210d69eec7e2c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733347031700213364,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8acdb07-16e7-457f-81b8-85416b849890,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcdf86fdcbf9bd8a144fff19857f6fb26d62fe7eb52809e9b0fd81f8d41222e6,PodSandboxId:f0080f4d4bd91d17758afd4f1cd9ace3a8edf7607b4dbeca50c05bbbf7ea3e2a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733347010789616306,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a3be8d42-19bc-4bfc-be9a-bf74020438e1,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78,PodSandboxId:3b1932814b83214d86d7c57fd7aa32f8925c8f3a985df8ac7eb1a12f9eb241b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733347008602299697,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ct5xn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be113b96-b21f-4fd5-8cd9-11b149a0a838,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5,PodSandboxId:ee3eaf224a6f9cc1761003899c7ec1c7708f55a234325a51f5f6a725cf136038,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733347000937256830,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4fv72,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22b84591-6767-4414-9
869-9d89206a03f2,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4,PodSandboxId:c7004039fe1db8e9729ca1177cf50c49c546abce639e2c5af26210d69eec7e2c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733347000879361773,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8acdb07-16e7-457f-81b8-85416b849
890,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df,PodSandboxId:5a9f3a9d07f72918557f5db4e8b88fd5df0ca4509cc0ec45acd670fad1a8939e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733346997194682947,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-566991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73e62008d66cd7b9cafa6ab59f4f7953,},Annota
tions:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78,PodSandboxId:6cc002d291a622d58f9b0b04ea2fa8ff34c0131238099b1f091f64d47b9ef684,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733346997203377526,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-566991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d0a0ccd7666b4a3fcfee2085c0019a8,},Annotations:map[st
ring]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98,PodSandboxId:6ca74363b0202187d622a59232287afc7c3bf09a71fdee69bfd670bce87e1e41,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733346997182362942,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-566991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df778708fa62b24cccc5f735818ef924,},Annotations:map[string]string{io.kubernetes.container.hash:
cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9,PodSandboxId:ae0fe4aa2b20e871d3c330515322b68a258a5887520bc1d94daa5d338f934257,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733346997172630584,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-566991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a59cd4f639fa768deec1685c42bbf280,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=410bff18-00ab-4ff3-b97a-e772513ac4d4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:30:12 embed-certs-566991 crio[714]: time="2024-12-04 21:30:12.741204723Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=409c6245-4ec2-4895-9e60-91606b11a698 name=/runtime.v1.RuntimeService/Version
	Dec 04 21:30:12 embed-certs-566991 crio[714]: time="2024-12-04 21:30:12.741326160Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=409c6245-4ec2-4895-9e60-91606b11a698 name=/runtime.v1.RuntimeService/Version
	Dec 04 21:30:12 embed-certs-566991 crio[714]: time="2024-12-04 21:30:12.743105473Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3ca7b261-bb74-4cfc-b8c0-4cba7baac741 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:30:12 embed-certs-566991 crio[714]: time="2024-12-04 21:30:12.743508949Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347812743487869,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3ca7b261-bb74-4cfc-b8c0-4cba7baac741 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:30:12 embed-certs-566991 crio[714]: time="2024-12-04 21:30:12.744044638Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=73d0e76f-8917-48c3-b8b8-18ea570118e4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:30:12 embed-certs-566991 crio[714]: time="2024-12-04 21:30:12.744097947Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=73d0e76f-8917-48c3-b8b8-18ea570118e4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:30:12 embed-certs-566991 crio[714]: time="2024-12-04 21:30:12.744283827Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317,PodSandboxId:c7004039fe1db8e9729ca1177cf50c49c546abce639e2c5af26210d69eec7e2c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733347031700213364,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8acdb07-16e7-457f-81b8-85416b849890,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcdf86fdcbf9bd8a144fff19857f6fb26d62fe7eb52809e9b0fd81f8d41222e6,PodSandboxId:f0080f4d4bd91d17758afd4f1cd9ace3a8edf7607b4dbeca50c05bbbf7ea3e2a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733347010789616306,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a3be8d42-19bc-4bfc-be9a-bf74020438e1,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78,PodSandboxId:3b1932814b83214d86d7c57fd7aa32f8925c8f3a985df8ac7eb1a12f9eb241b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733347008602299697,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ct5xn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be113b96-b21f-4fd5-8cd9-11b149a0a838,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5,PodSandboxId:ee3eaf224a6f9cc1761003899c7ec1c7708f55a234325a51f5f6a725cf136038,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733347000937256830,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4fv72,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22b84591-6767-4414-9
869-9d89206a03f2,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4,PodSandboxId:c7004039fe1db8e9729ca1177cf50c49c546abce639e2c5af26210d69eec7e2c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733347000879361773,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8acdb07-16e7-457f-81b8-85416b849
890,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df,PodSandboxId:5a9f3a9d07f72918557f5db4e8b88fd5df0ca4509cc0ec45acd670fad1a8939e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733346997194682947,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-566991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73e62008d66cd7b9cafa6ab59f4f7953,},Annota
tions:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78,PodSandboxId:6cc002d291a622d58f9b0b04ea2fa8ff34c0131238099b1f091f64d47b9ef684,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733346997203377526,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-566991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d0a0ccd7666b4a3fcfee2085c0019a8,},Annotations:map[st
ring]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98,PodSandboxId:6ca74363b0202187d622a59232287afc7c3bf09a71fdee69bfd670bce87e1e41,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733346997182362942,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-566991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df778708fa62b24cccc5f735818ef924,},Annotations:map[string]string{io.kubernetes.container.hash:
cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9,PodSandboxId:ae0fe4aa2b20e871d3c330515322b68a258a5887520bc1d94daa5d338f934257,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733346997172630584,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-566991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a59cd4f639fa768deec1685c42bbf280,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=73d0e76f-8917-48c3-b8b8-18ea570118e4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:30:12 embed-certs-566991 crio[714]: time="2024-12-04 21:30:12.777618225Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5c4de5aa-a03a-4acf-83e8-5bb5a8228ca5 name=/runtime.v1.RuntimeService/Version
	Dec 04 21:30:12 embed-certs-566991 crio[714]: time="2024-12-04 21:30:12.777790246Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5c4de5aa-a03a-4acf-83e8-5bb5a8228ca5 name=/runtime.v1.RuntimeService/Version
	Dec 04 21:30:12 embed-certs-566991 crio[714]: time="2024-12-04 21:30:12.779011721Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=67148f76-9230-443e-a16e-d8caaa87f06f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:30:12 embed-certs-566991 crio[714]: time="2024-12-04 21:30:12.779429064Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347812779406338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=67148f76-9230-443e-a16e-d8caaa87f06f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:30:12 embed-certs-566991 crio[714]: time="2024-12-04 21:30:12.780226912Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9dcfd3f1-63f9-4bf1-b581-c580f439402f name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:30:12 embed-certs-566991 crio[714]: time="2024-12-04 21:30:12.780296084Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9dcfd3f1-63f9-4bf1-b581-c580f439402f name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:30:12 embed-certs-566991 crio[714]: time="2024-12-04 21:30:12.780516729Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317,PodSandboxId:c7004039fe1db8e9729ca1177cf50c49c546abce639e2c5af26210d69eec7e2c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733347031700213364,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8acdb07-16e7-457f-81b8-85416b849890,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcdf86fdcbf9bd8a144fff19857f6fb26d62fe7eb52809e9b0fd81f8d41222e6,PodSandboxId:f0080f4d4bd91d17758afd4f1cd9ace3a8edf7607b4dbeca50c05bbbf7ea3e2a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733347010789616306,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a3be8d42-19bc-4bfc-be9a-bf74020438e1,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78,PodSandboxId:3b1932814b83214d86d7c57fd7aa32f8925c8f3a985df8ac7eb1a12f9eb241b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733347008602299697,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ct5xn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be113b96-b21f-4fd5-8cd9-11b149a0a838,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5,PodSandboxId:ee3eaf224a6f9cc1761003899c7ec1c7708f55a234325a51f5f6a725cf136038,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733347000937256830,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4fv72,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22b84591-6767-4414-9
869-9d89206a03f2,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4,PodSandboxId:c7004039fe1db8e9729ca1177cf50c49c546abce639e2c5af26210d69eec7e2c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733347000879361773,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8acdb07-16e7-457f-81b8-85416b849
890,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df,PodSandboxId:5a9f3a9d07f72918557f5db4e8b88fd5df0ca4509cc0ec45acd670fad1a8939e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733346997194682947,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-566991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73e62008d66cd7b9cafa6ab59f4f7953,},Annota
tions:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78,PodSandboxId:6cc002d291a622d58f9b0b04ea2fa8ff34c0131238099b1f091f64d47b9ef684,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733346997203377526,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-566991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d0a0ccd7666b4a3fcfee2085c0019a8,},Annotations:map[st
ring]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98,PodSandboxId:6ca74363b0202187d622a59232287afc7c3bf09a71fdee69bfd670bce87e1e41,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733346997182362942,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-566991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df778708fa62b24cccc5f735818ef924,},Annotations:map[string]string{io.kubernetes.container.hash:
cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9,PodSandboxId:ae0fe4aa2b20e871d3c330515322b68a258a5887520bc1d94daa5d338f934257,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733346997172630584,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-566991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a59cd4f639fa768deec1685c42bbf280,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9dcfd3f1-63f9-4bf1-b581-c580f439402f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	07fb0e487f540       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Running             storage-provisioner       2                   c7004039fe1db       storage-provisioner
	dcdf86fdcbf9b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   f0080f4d4bd91       busybox
	58b6a0437b843       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   3b1932814b832       coredns-7c65d6cfc9-ct5xn
	a59819135d6bf       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      13 minutes ago      Running             kube-proxy                1                   ee3eaf224a6f9       kube-proxy-4fv72
	05e1d1192577d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   c7004039fe1db       storage-provisioner
	8b9e2903e35bf       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      13 minutes ago      Running             kube-apiserver            1                   6cc002d291a62       kube-apiserver-embed-certs-566991
	e0c420ad52b6e       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      13 minutes ago      Running             kube-scheduler            1                   5a9f3a9d07f72       kube-scheduler-embed-certs-566991
	e010906440f03       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   6ca74363b0202       etcd-embed-certs-566991
	982e9c35dc47b       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      13 minutes ago      Running             kube-controller-manager   1                   ae0fe4aa2b20e       kube-controller-manager-embed-certs-566991
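	For reference, the container table above and the repeated ListContainers entries in the crio debug log are produced by the same CRI call. A minimal sketch of issuing that call directly against the CRI-O socket, assuming the standard k8s.io/cri-api client and the crio.sock path shown in the node's cri-socket annotation in the node description below (this sketch is illustrative and not part of the captured report):

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Assumption: CRI-O listens on its default socket, as reported in the
		// node's cri-socket annotation.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// An empty filter returns the full container list, matching the
		// "No filters were applied, returning full container list" debug lines.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%.13s  %-22s %s\n", c.Id, c.State, c.Metadata.Name)
		}
	}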
	
	
	==> coredns [58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:55688 - 28302 "HINFO IN 913395288040671664.5526945772694932664. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.021154818s
	
	
	==> describe nodes <==
	Name:               embed-certs-566991
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-566991
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59
	                    minikube.k8s.io/name=embed-certs-566991
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_04T21_08_12_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 21:08:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-566991
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 21:30:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 21:27:22 +0000   Wed, 04 Dec 2024 21:08:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 21:27:22 +0000   Wed, 04 Dec 2024 21:08:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 21:27:22 +0000   Wed, 04 Dec 2024 21:08:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 21:27:22 +0000   Wed, 04 Dec 2024 21:16:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.82
	  Hostname:    embed-certs-566991
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d3fee10e82bd47bb8bf10ff1e185214e
	  System UUID:                d3fee10e-82bd-47bb-8bf1-0ff1e185214e
	  Boot ID:                    cee9d6fe-73e3-42ae-a806-1d244602abe7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-7c65d6cfc9-ct5xn                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-embed-certs-566991                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-embed-certs-566991             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-embed-certs-566991    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-4fv72                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-embed-certs-566991             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-6867b74b74-9vlcd               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node embed-certs-566991 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node embed-certs-566991 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node embed-certs-566991 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node embed-certs-566991 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node embed-certs-566991 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     22m                kubelet          Node embed-certs-566991 status is now: NodeHasSufficientPID
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeReady                22m                kubelet          Node embed-certs-566991 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node embed-certs-566991 event: Registered Node embed-certs-566991 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-566991 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-566991 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-566991 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-566991 event: Registered Node embed-certs-566991 in Controller
	
	
	==> dmesg <==
	[Dec 4 21:16] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053228] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037566] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.793111] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.959770] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.549237] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.296770] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.059313] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064720] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.162206] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.154384] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[  +0.269166] systemd-fstab-generator[705]: Ignoring "noauto" option for root device
	[  +3.995734] systemd-fstab-generator[795]: Ignoring "noauto" option for root device
	[  +1.773508] systemd-fstab-generator[916]: Ignoring "noauto" option for root device
	[  +0.062035] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.522842] kauditd_printk_skb: 69 callbacks suppressed
	[  +1.937154] systemd-fstab-generator[1542]: Ignoring "noauto" option for root device
	[  +3.842225] kauditd_printk_skb: 80 callbacks suppressed
	[ +11.746064] kauditd_printk_skb: 31 callbacks suppressed
	
	
	==> etcd [e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98] <==
	{"level":"info","ts":"2024-12-04T21:16:38.550251Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-04T21:16:38.550807Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-04T21:16:38.550807Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-04T21:16:38.551504Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.82:2379"}
	{"level":"info","ts":"2024-12-04T21:16:38.552145Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-12-04T21:16:59.025483Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"614.072704ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-embed-certs-566991\" ","response":"range_response_count:1 size:6600"}
	{"level":"info","ts":"2024-12-04T21:16:59.026765Z","caller":"traceutil/trace.go:171","msg":"trace[2053843806] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-embed-certs-566991; range_end:; response_count:1; response_revision:656; }","duration":"615.343807ms","start":"2024-12-04T21:16:58.411343Z","end":"2024-12-04T21:16:59.026687Z","steps":["trace[2053843806] 'range keys from in-memory index tree'  (duration: 613.944662ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T21:16:59.026908Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-04T21:16:58.411322Z","time spent":"615.556844ms","remote":"127.0.0.1:45112","response type":"/etcdserverpb.KV/Range","request count":0,"request size":71,"response count":1,"response size":6624,"request content":"key:\"/registry/pods/kube-system/kube-controller-manager-embed-certs-566991\" "}
	{"level":"info","ts":"2024-12-04T21:17:00.159609Z","caller":"traceutil/trace.go:171","msg":"trace[2081464452] transaction","detail":"{read_only:false; response_revision:657; number_of_response:1; }","duration":"142.524244ms","start":"2024-12-04T21:17:00.017065Z","end":"2024-12-04T21:17:00.159589Z","steps":["trace[2081464452] 'process raft request'  (duration: 142.32323ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-04T21:17:00.159962Z","caller":"traceutil/trace.go:171","msg":"trace[215694060] linearizableReadLoop","detail":"{readStateIndex:702; appliedIndex:702; }","duration":"111.549577ms","start":"2024-12-04T21:17:00.048397Z","end":"2024-12-04T21:17:00.159947Z","steps":["trace[215694060] 'read index received'  (duration: 111.54364ms)","trace[215694060] 'applied index is now lower than readState.Index'  (duration: 4.84µs)"],"step_count":2}
	{"level":"warn","ts":"2024-12-04T21:17:00.160221Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.805967ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-9vlcd\" ","response":"range_response_count:1 size:4384"}
	{"level":"info","ts":"2024-12-04T21:17:00.160948Z","caller":"traceutil/trace.go:171","msg":"trace[1348377714] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-6867b74b74-9vlcd; range_end:; response_count:1; response_revision:657; }","duration":"112.539792ms","start":"2024-12-04T21:17:00.048394Z","end":"2024-12-04T21:17:00.160934Z","steps":["trace[1348377714] 'agreement among raft nodes before linearized reading'  (duration: 111.690027ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T21:17:00.548224Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"253.216842ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2606902021172045778 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-fulumh2dkgsqa5qzmmcdlb7tve\" mod_revision:637 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-fulumh2dkgsqa5qzmmcdlb7tve\" value_size:609 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-fulumh2dkgsqa5qzmmcdlb7tve\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-12-04T21:17:00.548670Z","caller":"traceutil/trace.go:171","msg":"trace[1682785017] transaction","detail":"{read_only:false; response_revision:658; number_of_response:1; }","duration":"361.072758ms","start":"2024-12-04T21:17:00.187584Z","end":"2024-12-04T21:17:00.548657Z","steps":["trace[1682785017] 'process raft request'  (duration: 106.717901ms)","trace[1682785017] 'compare'  (duration: 253.015295ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-04T21:17:00.548864Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-04T21:17:00.187523Z","time spent":"361.286741ms","remote":"127.0.0.1:45184","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":682,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-fulumh2dkgsqa5qzmmcdlb7tve\" mod_revision:637 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-fulumh2dkgsqa5qzmmcdlb7tve\" value_size:609 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-fulumh2dkgsqa5qzmmcdlb7tve\" > >"}
	{"level":"info","ts":"2024-12-04T21:17:00.852679Z","caller":"traceutil/trace.go:171","msg":"trace[862270397] transaction","detail":"{read_only:false; response_revision:659; number_of_response:1; }","duration":"297.705017ms","start":"2024-12-04T21:17:00.554957Z","end":"2024-12-04T21:17:00.852662Z","steps":["trace[862270397] 'process raft request'  (duration: 292.282651ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-04T21:17:00.992632Z","caller":"traceutil/trace.go:171","msg":"trace[772154060] linearizableReadLoop","detail":"{readStateIndex:705; appliedIndex:704; }","duration":"136.312907ms","start":"2024-12-04T21:17:00.856297Z","end":"2024-12-04T21:17:00.992610Z","steps":["trace[772154060] 'read index received'  (duration: 135.195544ms)","trace[772154060] 'applied index is now lower than readState.Index'  (duration: 1.116598ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-04T21:17:00.992996Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.676446ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-6867b74b74-9vlcd.180e15eb0613a706\" ","response":"range_response_count:1 size:942"}
	{"level":"info","ts":"2024-12-04T21:17:00.993085Z","caller":"traceutil/trace.go:171","msg":"trace[1628218802] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-6867b74b74-9vlcd.180e15eb0613a706; range_end:; response_count:1; response_revision:660; }","duration":"136.779752ms","start":"2024-12-04T21:17:00.856293Z","end":"2024-12-04T21:17:00.993073Z","steps":["trace[1628218802] 'agreement among raft nodes before linearized reading'  (duration: 136.527609ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-04T21:17:00.993243Z","caller":"traceutil/trace.go:171","msg":"trace[2110794990] transaction","detail":"{read_only:false; response_revision:660; number_of_response:1; }","duration":"434.77056ms","start":"2024-12-04T21:17:00.558462Z","end":"2024-12-04T21:17:00.993232Z","steps":["trace[2110794990] 'process raft request'  (duration: 433.108998ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T21:17:00.993372Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-04T21:17:00.558450Z","time spent":"434.861821ms","remote":"127.0.0.1:45112","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4325,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-6867b74b74-9vlcd\" mod_revision:624 > success:<request_put:<key:\"/registry/pods/kube-system/metrics-server-6867b74b74-9vlcd\" value_size:4259 >> failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-6867b74b74-9vlcd\" > >"}
	{"level":"info","ts":"2024-12-04T21:17:20.936585Z","caller":"traceutil/trace.go:171","msg":"trace[20950110] transaction","detail":"{read_only:false; response_revision:677; number_of_response:1; }","duration":"159.874149ms","start":"2024-12-04T21:17:20.776683Z","end":"2024-12-04T21:17:20.936557Z","steps":["trace[20950110] 'process raft request'  (duration: 158.858397ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-04T21:26:38.580794Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":900}
	{"level":"info","ts":"2024-12-04T21:26:38.591766Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":900,"took":"10.085105ms","hash":757347438,"current-db-size-bytes":2744320,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2744320,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-12-04T21:26:38.591896Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":757347438,"revision":900,"compact-revision":-1}
	
	
	==> kernel <==
	 21:30:13 up 14 min,  0 users,  load average: 0.05, 0.09, 0.08
	Linux embed-certs-566991 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78] <==
	E1204 21:26:40.763801       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1204 21:26:40.764064       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1204 21:26:40.764977       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1204 21:26:40.766048       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1204 21:27:40.765421       1 handler_proxy.go:99] no RequestInfo found in the context
	E1204 21:27:40.765668       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1204 21:27:40.766995       1 handler_proxy.go:99] no RequestInfo found in the context
	E1204 21:27:40.767146       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1204 21:27:40.767221       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1204 21:27:40.768327       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1204 21:29:40.768381       1 handler_proxy.go:99] no RequestInfo found in the context
	E1204 21:29:40.768691       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1204 21:29:40.768758       1 handler_proxy.go:99] no RequestInfo found in the context
	E1204 21:29:40.768855       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1204 21:29:40.770609       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1204 21:29:40.770642       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9] <==
	E1204 21:24:43.504055       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:24:43.981895       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:25:13.512325       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:25:13.989066       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:25:43.517947       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:25:43.996712       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:26:13.523040       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:26:14.005555       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:26:43.528715       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:26:44.016193       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:27:13.534681       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:27:14.027306       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1204 21:27:22.538874       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-566991"
	E1204 21:27:43.542168       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:27:44.035559       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1204 21:27:44.518583       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="278.908µs"
	I1204 21:27:58.514385       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="89.772µs"
	E1204 21:28:13.547636       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:28:14.043100       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:28:43.553268       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:28:44.050499       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:29:13.559584       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:29:14.057935       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:29:43.566322       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:29:44.066303       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1204 21:16:41.211860       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1204 21:16:41.224770       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.82"]
	E1204 21:16:41.224852       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1204 21:16:41.277478       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1204 21:16:41.277534       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1204 21:16:41.277569       1 server_linux.go:169] "Using iptables Proxier"
	I1204 21:16:41.279819       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1204 21:16:41.280073       1 server.go:483] "Version info" version="v1.31.2"
	I1204 21:16:41.280102       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 21:16:41.281852       1 config.go:199] "Starting service config controller"
	I1204 21:16:41.281905       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1204 21:16:41.281949       1 config.go:105] "Starting endpoint slice config controller"
	I1204 21:16:41.281988       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1204 21:16:41.282363       1 config.go:328] "Starting node config controller"
	I1204 21:16:41.282393       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1204 21:16:41.382102       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1204 21:16:41.382158       1 shared_informer.go:320] Caches are synced for service config
	I1204 21:16:41.382613       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df] <==
	I1204 21:16:37.937128       1 serving.go:386] Generated self-signed cert in-memory
	W1204 21:16:39.682309       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1204 21:16:39.682450       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1204 21:16:39.682462       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1204 21:16:39.682517       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1204 21:16:39.771034       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1204 21:16:39.771076       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 21:16:39.783396       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1204 21:16:39.783529       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1204 21:16:39.783571       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1204 21:16:39.783585       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1204 21:16:39.884317       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 04 21:29:04 embed-certs-566991 kubelet[923]: E1204 21:29:04.501229     923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9vlcd" podUID="1acb08f3-e403-458d-b3e2-e32c07da6afb"
	Dec 04 21:29:05 embed-certs-566991 kubelet[923]: E1204 21:29:05.667909     923 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347745667531629,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:29:05 embed-certs-566991 kubelet[923]: E1204 21:29:05.668189     923 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347745667531629,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:29:15 embed-certs-566991 kubelet[923]: E1204 21:29:15.670426     923 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347755670031128,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:29:15 embed-certs-566991 kubelet[923]: E1204 21:29:15.670871     923 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347755670031128,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:29:16 embed-certs-566991 kubelet[923]: E1204 21:29:16.501081     923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9vlcd" podUID="1acb08f3-e403-458d-b3e2-e32c07da6afb"
	Dec 04 21:29:25 embed-certs-566991 kubelet[923]: E1204 21:29:25.672235     923 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347765671767807,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:29:25 embed-certs-566991 kubelet[923]: E1204 21:29:25.672589     923 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347765671767807,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:29:31 embed-certs-566991 kubelet[923]: E1204 21:29:31.501007     923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9vlcd" podUID="1acb08f3-e403-458d-b3e2-e32c07da6afb"
	Dec 04 21:29:35 embed-certs-566991 kubelet[923]: E1204 21:29:35.517246     923 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 04 21:29:35 embed-certs-566991 kubelet[923]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 04 21:29:35 embed-certs-566991 kubelet[923]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 04 21:29:35 embed-certs-566991 kubelet[923]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 04 21:29:35 embed-certs-566991 kubelet[923]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 04 21:29:35 embed-certs-566991 kubelet[923]: E1204 21:29:35.674112     923 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347775673866401,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:29:35 embed-certs-566991 kubelet[923]: E1204 21:29:35.674133     923 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347775673866401,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:29:43 embed-certs-566991 kubelet[923]: E1204 21:29:43.500168     923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9vlcd" podUID="1acb08f3-e403-458d-b3e2-e32c07da6afb"
	Dec 04 21:29:45 embed-certs-566991 kubelet[923]: E1204 21:29:45.676195     923 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347785675937543,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:29:45 embed-certs-566991 kubelet[923]: E1204 21:29:45.676481     923 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347785675937543,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:29:54 embed-certs-566991 kubelet[923]: E1204 21:29:54.500689     923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9vlcd" podUID="1acb08f3-e403-458d-b3e2-e32c07da6afb"
	Dec 04 21:29:55 embed-certs-566991 kubelet[923]: E1204 21:29:55.681037     923 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347795678068569,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:29:55 embed-certs-566991 kubelet[923]: E1204 21:29:55.681063     923 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347795678068569,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:30:05 embed-certs-566991 kubelet[923]: E1204 21:30:05.682433     923 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347805682067634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:30:05 embed-certs-566991 kubelet[923]: E1204 21:30:05.682986     923 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347805682067634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:30:07 embed-certs-566991 kubelet[923]: E1204 21:30:07.501800     923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9vlcd" podUID="1acb08f3-e403-458d-b3e2-e32c07da6afb"
	
	
	==> storage-provisioner [05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4] <==
	I1204 21:16:41.007625       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1204 21:17:11.011487       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317] <==
	I1204 21:17:11.800491       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1204 21:17:11.811635       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1204 21:17:11.811784       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1204 21:17:29.216416       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1204 21:17:29.216801       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-566991_ab6a3e60-e8fb-47a5-a1a6-40b10be7c98d!
	I1204 21:17:29.217240       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9bda624b-bdab-4775-8dcf-34ac86d286a1", APIVersion:"v1", ResourceVersion:"683", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-566991_ab6a3e60-e8fb-47a5-a1a6-40b10be7c98d became leader
	I1204 21:17:29.320281       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-566991_ab6a3e60-e8fb-47a5-a1a6-40b10be7c98d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-566991 -n embed-certs-566991
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-566991 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-9vlcd
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-566991 describe pod metrics-server-6867b74b74-9vlcd
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-566991 describe pod metrics-server-6867b74b74-9vlcd: exit status 1 (65.343762ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-9vlcd" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-566991 describe pod metrics-server-6867b74b74-9vlcd: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.12s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1204 21:22:40.753705   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/auto-272234/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-439360 -n default-k8s-diff-port-439360
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-12-04 21:31:29.094735786 +0000 UTC m=+5938.194464209
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-439360 -n default-k8s-diff-port-439360
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-439360 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-439360 logs -n 25: (2.028207772s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-272234 sudo                                  | bridge-272234                | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo                                  | bridge-272234                | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo find                             | bridge-272234                | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo crio                             | bridge-272234                | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-272234                                       | bridge-272234                | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	| start   | -p embed-certs-566991                                  | embed-certs-566991           | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p pause-998149                                        | pause-998149                 | jenkins | v1.34.0 | 04 Dec 24 21:08 UTC | 04 Dec 24 21:08 UTC |
	| delete  | -p                                                     | disable-driver-mounts-455559 | jenkins | v1.34.0 | 04 Dec 24 21:08 UTC | 04 Dec 24 21:08 UTC |
	|         | disable-driver-mounts-455559                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-439360 | jenkins | v1.34.0 | 04 Dec 24 21:08 UTC | 04 Dec 24 21:10 UTC |
	|         | default-k8s-diff-port-439360                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-534766             | no-preload-534766            | jenkins | v1.34.0 | 04 Dec 24 21:08 UTC | 04 Dec 24 21:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-534766                                   | no-preload-534766            | jenkins | v1.34.0 | 04 Dec 24 21:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-566991            | embed-certs-566991           | jenkins | v1.34.0 | 04 Dec 24 21:09 UTC | 04 Dec 24 21:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-566991                                  | embed-certs-566991           | jenkins | v1.34.0 | 04 Dec 24 21:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-439360  | default-k8s-diff-port-439360 | jenkins | v1.34.0 | 04 Dec 24 21:10 UTC | 04 Dec 24 21:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-439360 | jenkins | v1.34.0 | 04 Dec 24 21:10 UTC |                     |
	|         | default-k8s-diff-port-439360                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-082859        | old-k8s-version-082859       | jenkins | v1.34.0 | 04 Dec 24 21:10 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-534766                  | no-preload-534766            | jenkins | v1.34.0 | 04 Dec 24 21:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-534766                                   | no-preload-534766            | jenkins | v1.34.0 | 04 Dec 24 21:11 UTC | 04 Dec 24 21:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-566991                 | embed-certs-566991           | jenkins | v1.34.0 | 04 Dec 24 21:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-566991                                  | embed-certs-566991           | jenkins | v1.34.0 | 04 Dec 24 21:11 UTC | 04 Dec 24 21:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-082859                              | old-k8s-version-082859       | jenkins | v1.34.0 | 04 Dec 24 21:12 UTC | 04 Dec 24 21:12 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-082859             | old-k8s-version-082859       | jenkins | v1.34.0 | 04 Dec 24 21:12 UTC | 04 Dec 24 21:12 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-082859                              | old-k8s-version-082859       | jenkins | v1.34.0 | 04 Dec 24 21:12 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-439360       | default-k8s-diff-port-439360 | jenkins | v1.34.0 | 04 Dec 24 21:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-439360 | jenkins | v1.34.0 | 04 Dec 24 21:13 UTC | 04 Dec 24 21:22 UTC |
	|         | default-k8s-diff-port-439360                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 21:13:02
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 21:13:02.655619   75746 out.go:345] Setting OutFile to fd 1 ...
	I1204 21:13:02.655710   75746 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 21:13:02.655718   75746 out.go:358] Setting ErrFile to fd 2...
	I1204 21:13:02.655723   75746 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 21:13:02.655904   75746 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19985-10581/.minikube/bin
	I1204 21:13:02.656414   75746 out.go:352] Setting JSON to false
	I1204 21:13:02.657264   75746 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6933,"bootTime":1733339850,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 21:13:02.657344   75746 start.go:139] virtualization: kvm guest
	I1204 21:13:02.659898   75746 out.go:177] * [default-k8s-diff-port-439360] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1204 21:13:02.661012   75746 notify.go:220] Checking for updates...
	I1204 21:13:02.661028   75746 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 21:13:02.662162   75746 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 21:13:02.663271   75746 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:13:02.664514   75746 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 21:13:02.665529   75746 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1204 21:13:02.666701   75746 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 21:13:02.668263   75746 config.go:182] Loaded profile config "default-k8s-diff-port-439360": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:13:02.668646   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:13:02.668709   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:13:02.683257   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37479
	I1204 21:13:02.683722   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:13:02.684324   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:13:02.684360   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:13:02.684680   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:13:02.684851   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:13:02.685048   75746 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 21:13:02.685299   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:13:02.685328   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:13:02.699267   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40025
	I1204 21:13:02.699662   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:13:02.700044   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:13:02.700063   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:13:02.700339   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:13:02.700502   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:13:02.730706   75746 out.go:177] * Using the kvm2 driver based on existing profile
	I1204 21:13:02.731942   75746 start.go:297] selected driver: kvm2
	I1204 21:13:02.731957   75746 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-439360 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-439360 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.171 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:13:02.732071   75746 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 21:13:02.732753   75746 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 21:13:02.732853   75746 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19985-10581/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1204 21:13:02.748280   75746 install.go:137] /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1204 21:13:02.748697   75746 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 21:13:02.748732   75746 cni.go:84] Creating CNI manager for ""
	I1204 21:13:02.748788   75746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:13:02.748838   75746 start.go:340] cluster config:
	{Name:default-k8s-diff-port-439360 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-439360 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.171 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:13:02.748971   75746 iso.go:125] acquiring lock: {Name:mk5fb0f3f6da76e6cd812291a551e1592ef2c232 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 21:13:02.751358   75746 out.go:177] * Starting "default-k8s-diff-port-439360" primary control-plane node in "default-k8s-diff-port-439360" cluster
	I1204 21:13:03.539616   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:02.752513   75746 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 21:13:02.752549   75746 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1204 21:13:02.752560   75746 cache.go:56] Caching tarball of preloaded images
	I1204 21:13:02.752626   75746 preload.go:172] Found /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 21:13:02.752637   75746 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 21:13:02.752726   75746 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/config.json ...
	I1204 21:13:02.752901   75746 start.go:360] acquireMachinesLock for default-k8s-diff-port-439360: {Name:mkf124e8b45170ae95981b24944344de6899c5b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 21:13:09.623601   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:12.691589   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:18.771784   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:21.843699   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:27.923631   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:30.995665   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:37.075628   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:40.147824   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:46.227603   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:49.299635   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:55.379675   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:58.451727   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:04.531657   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:07.603570   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:13.683599   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:16.755604   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:22.835628   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:25.907600   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:31.987633   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:35.059714   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:41.139700   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:44.211695   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:50.291687   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:53.363678   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:59.443630   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:02.515651   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:08.595690   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:11.667672   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:17.747590   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:20.819699   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:26.899677   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:29.971649   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:36.051731   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:39.123728   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:45.203625   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:48.275712   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:54.355623   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:57.427671   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:16:03.507649   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:16:06.579624   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:16:09.584575   75137 start.go:364] duration metric: took 4m27.4731498s to acquireMachinesLock for "embed-certs-566991"
	I1204 21:16:09.584639   75137 start.go:96] Skipping create...Using existing machine configuration
	I1204 21:16:09.584651   75137 fix.go:54] fixHost starting: 
	I1204 21:16:09.584970   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:09.585018   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:09.600429   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33355
	I1204 21:16:09.600893   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:09.601299   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:09.601322   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:09.601748   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:09.601944   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:09.602098   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetState
	I1204 21:16:09.603776   75137 fix.go:112] recreateIfNeeded on embed-certs-566991: state=Stopped err=<nil>
	I1204 21:16:09.603821   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	W1204 21:16:09.603991   75137 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 21:16:09.605822   75137 out.go:177] * Restarting existing kvm2 VM for "embed-certs-566991" ...
	I1204 21:16:09.606942   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Start
	I1204 21:16:09.607117   75137 main.go:141] libmachine: (embed-certs-566991) Ensuring networks are active...
	I1204 21:16:09.607926   75137 main.go:141] libmachine: (embed-certs-566991) Ensuring network default is active
	I1204 21:16:09.608276   75137 main.go:141] libmachine: (embed-certs-566991) Ensuring network mk-embed-certs-566991 is active
	I1204 21:16:09.608593   75137 main.go:141] libmachine: (embed-certs-566991) Getting domain xml...
	I1204 21:16:09.609171   75137 main.go:141] libmachine: (embed-certs-566991) Creating domain...
	I1204 21:16:10.794377   75137 main.go:141] libmachine: (embed-certs-566991) Waiting to get IP...
	I1204 21:16:10.795237   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:10.795646   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:10.795708   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:10.795615   76397 retry.go:31] will retry after 263.432891ms: waiting for machine to come up
	I1204 21:16:11.061505   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:11.062003   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:11.062025   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:11.061954   76397 retry.go:31] will retry after 341.684416ms: waiting for machine to come up
	I1204 21:16:11.405560   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:11.405994   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:11.406017   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:11.405951   76397 retry.go:31] will retry after 341.63707ms: waiting for machine to come up
	I1204 21:16:11.749439   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:11.749826   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:11.749850   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:11.749778   76397 retry.go:31] will retry after 490.222458ms: waiting for machine to come up
	I1204 21:16:09.581932   75012 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 21:16:09.581966   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetMachineName
	I1204 21:16:09.582325   75012 buildroot.go:166] provisioning hostname "no-preload-534766"
	I1204 21:16:09.582349   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetMachineName
	I1204 21:16:09.582554   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:16:09.584435   75012 machine.go:96] duration metric: took 4m37.423343939s to provisionDockerMachine
	I1204 21:16:09.584470   75012 fix.go:56] duration metric: took 4m37.445106567s for fixHost
	I1204 21:16:09.584480   75012 start.go:83] releasing machines lock for "no-preload-534766", held for 4m37.445131562s
	W1204 21:16:09.584500   75012 start.go:714] error starting host: provision: host is not running
	W1204 21:16:09.584581   75012 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1204 21:16:09.584594   75012 start.go:729] Will try again in 5 seconds ...
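For illustration only, the "will try again in 5 seconds" behaviour above amounts to a fixed-delay retry around host start; the sketch below is a simplified stand-in (the function names are hypothetical, not minikube's actual start.go code).

// Illustrative sketch: retry StartHost after a fixed delay, as logged above.
package main

import (
	"errors"
	"fmt"
	"time"
)

func startHostWithRetry(start func() error, attempts int, delay time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = start(); err == nil {
			return nil
		}
		fmt.Printf("StartHost failed, but will try again in %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

func main() {
	calls := 0
	err := startHostWithRetry(func() error {
		calls++
		if calls < 2 {
			return errors.New("provision: host is not running")
		}
		return nil
	}, 3, 5*time.Second)
	fmt.Println("result:", err)
}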
	I1204 21:16:12.241487   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:12.241955   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:12.241989   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:12.241914   76397 retry.go:31] will retry after 627.236105ms: waiting for machine to come up
	I1204 21:16:12.870753   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:12.871242   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:12.871274   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:12.871189   76397 retry.go:31] will retry after 948.655869ms: waiting for machine to come up
	I1204 21:16:13.821128   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:13.821501   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:13.821531   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:13.821464   76397 retry.go:31] will retry after 864.328477ms: waiting for machine to come up
	I1204 21:16:14.686831   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:14.687290   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:14.687327   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:14.687226   76397 retry.go:31] will retry after 1.040036387s: waiting for machine to come up
	I1204 21:16:15.729503   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:15.729908   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:15.729938   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:15.729856   76397 retry.go:31] will retry after 1.509456429s: waiting for machine to come up
	I1204 21:16:14.587018   75012 start.go:360] acquireMachinesLock for no-preload-534766: {Name:mkf124e8b45170ae95981b24944344de6899c5b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
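The lock spec logged above carries a poll delay (500ms) and an overall timeout (13m); a minimal sketch of that acquire-or-time-out pattern follows. This is not minikube's actual mutex implementation, and tryAcquire is a hypothetical helper.

// Illustrative sketch: poll a named lock with a retry delay until a timeout.
package main

import (
	"fmt"
	"time"
)

func acquireWithTimeout(name string, tryAcquire func(string) bool, delay, timeout time.Duration) error {
	start := time.Now()
	for {
		if tryAcquire(name) {
			fmt.Printf("took %s to acquire machines lock for %q\n", time.Since(start), name)
			return nil
		}
		if time.Since(start) > timeout {
			return fmt.Errorf("timed out after %s waiting for lock %q", timeout, name)
		}
		time.Sleep(delay)
	}
}

func main() {
	tries := 0
	err := acquireWithTimeout("no-preload-534766",
		func(string) bool { tries++; return tries > 2 },
		500*time.Millisecond, 13*time.Minute)
	fmt.Println("acquire result:", err)
}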
	I1204 21:16:17.240459   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:17.240912   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:17.240936   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:17.240859   76397 retry.go:31] will retry after 2.13583357s: waiting for machine to come up
	I1204 21:16:19.379267   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:19.379766   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:19.379792   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:19.379718   76397 retry.go:31] will retry after 2.09795045s: waiting for machine to come up
	I1204 21:16:21.478897   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:21.479356   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:21.479410   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:21.479302   76397 retry.go:31] will retry after 2.903986335s: waiting for machine to come up
	I1204 21:16:24.386386   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:24.386732   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:24.386760   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:24.386707   76397 retry.go:31] will retry after 2.772485684s: waiting for machine to come up
	I1204 21:16:28.395920   75464 start.go:364] duration metric: took 4m6.982305139s to acquireMachinesLock for "old-k8s-version-082859"
	I1204 21:16:28.395992   75464 start.go:96] Skipping create...Using existing machine configuration
	I1204 21:16:28.396003   75464 fix.go:54] fixHost starting: 
	I1204 21:16:28.396456   75464 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:28.396521   75464 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:28.413833   75464 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32779
	I1204 21:16:28.414263   75464 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:28.414753   75464 main.go:141] libmachine: Using API Version  1
	I1204 21:16:28.414777   75464 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:28.415165   75464 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:28.415427   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:28.415603   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetState
	I1204 21:16:28.417090   75464 fix.go:112] recreateIfNeeded on old-k8s-version-082859: state=Stopped err=<nil>
	I1204 21:16:28.417125   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	W1204 21:16:28.417326   75464 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 21:16:28.419402   75464 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-082859" ...
	I1204 21:16:27.162685   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.163095   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has current primary IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.163114   75137 main.go:141] libmachine: (embed-certs-566991) Found IP for machine: 192.168.39.82
	I1204 21:16:27.163126   75137 main.go:141] libmachine: (embed-certs-566991) Reserving static IP address...
	I1204 21:16:27.163613   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "embed-certs-566991", mac: "52:54:00:98:21:6f", ip: "192.168.39.82"} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.163640   75137 main.go:141] libmachine: (embed-certs-566991) Reserved static IP address: 192.168.39.82
	I1204 21:16:27.163652   75137 main.go:141] libmachine: (embed-certs-566991) DBG | skip adding static IP to network mk-embed-certs-566991 - found existing host DHCP lease matching {name: "embed-certs-566991", mac: "52:54:00:98:21:6f", ip: "192.168.39.82"}
	I1204 21:16:27.163663   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Getting to WaitForSSH function...
	I1204 21:16:27.163670   75137 main.go:141] libmachine: (embed-certs-566991) Waiting for SSH to be available...
	I1204 21:16:27.165700   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.166004   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.166040   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.166149   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Using SSH client type: external
	I1204 21:16:27.166173   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa (-rw-------)
	I1204 21:16:27.166209   75137 main.go:141] libmachine: (embed-certs-566991) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.82 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 21:16:27.166223   75137 main.go:141] libmachine: (embed-certs-566991) DBG | About to run SSH command:
	I1204 21:16:27.166232   75137 main.go:141] libmachine: (embed-certs-566991) DBG | exit 0
	I1204 21:16:27.287234   75137 main.go:141] libmachine: (embed-certs-566991) DBG | SSH cmd err, output: <nil>: 
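The WaitForSSH step above probes the guest by running `exit 0` over the external ssh client with the options shown; a minimal sketch of that readiness check is below (the sshAlive helper and the key path are placeholders, not libmachine's code).

// Illustrative sketch: SSH readiness probe via the external ssh client.
package main

import (
	"fmt"
	"os/exec"
)

func sshAlive(ip, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"docker@"+ip,
		"exit 0")
	return cmd.Run() == nil // exit status 0 means the guest accepts SSH
}

func main() {
	fmt.Println("ssh ready:", sshAlive("192.168.39.82", "/path/to/id_rsa"))
}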
	I1204 21:16:27.287599   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetConfigRaw
	I1204 21:16:27.288265   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetIP
	I1204 21:16:27.290959   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.291282   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.291308   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.291606   75137 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/config.json ...
	I1204 21:16:27.291794   75137 machine.go:93] provisionDockerMachine start ...
	I1204 21:16:27.291812   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:27.292046   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:27.294179   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.294494   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.294520   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.294637   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:27.294811   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.294971   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.295101   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:27.295267   75137 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:27.295461   75137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1204 21:16:27.295472   75137 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 21:16:27.395404   75137 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 21:16:27.395434   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetMachineName
	I1204 21:16:27.395738   75137 buildroot.go:166] provisioning hostname "embed-certs-566991"
	I1204 21:16:27.395764   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetMachineName
	I1204 21:16:27.395940   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:27.398637   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.398982   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.399008   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.399159   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:27.399332   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.399565   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.399702   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:27.399913   75137 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:27.400087   75137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1204 21:16:27.400099   75137 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-566991 && echo "embed-certs-566991" | sudo tee /etc/hostname
	I1204 21:16:27.513921   75137 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-566991
	
	I1204 21:16:27.513960   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:27.516595   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.516932   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.516955   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.517112   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:27.517313   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.517440   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.517554   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:27.517671   75137 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:27.517883   75137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1204 21:16:27.517900   75137 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-566991' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-566991/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-566991' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 21:16:27.627795   75137 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 21:16:27.627832   75137 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 21:16:27.627852   75137 buildroot.go:174] setting up certificates
	I1204 21:16:27.627861   75137 provision.go:84] configureAuth start
	I1204 21:16:27.627870   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetMachineName
	I1204 21:16:27.628196   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetIP
	I1204 21:16:27.630873   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.631211   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.631236   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.631447   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:27.633608   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.633935   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.633954   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.634104   75137 provision.go:143] copyHostCerts
	I1204 21:16:27.634160   75137 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 21:16:27.634171   75137 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 21:16:27.634238   75137 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 21:16:27.634328   75137 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 21:16:27.634337   75137 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 21:16:27.634359   75137 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 21:16:27.634416   75137 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 21:16:27.634427   75137 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 21:16:27.634457   75137 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 21:16:27.634525   75137 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.embed-certs-566991 san=[127.0.0.1 192.168.39.82 embed-certs-566991 localhost minikube]
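The server cert generated above carries the SANs 127.0.0.1, 192.168.39.82, embed-certs-566991, localhost and minikube. As a rough sketch of what such a certificate contains, the snippet below builds a self-signed certificate with those SANs; minikube actually signs with its CA key rather than self-signing, so this is only an approximation.

// Illustrative sketch: a self-signed server certificate with the logged SANs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-566991"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the CertExpiration logged for this profile
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"embed-certs-566991", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.82")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}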
	I1204 21:16:27.824445   75137 provision.go:177] copyRemoteCerts
	I1204 21:16:27.824535   75137 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 21:16:27.824576   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:27.827387   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.827703   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.827738   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.827937   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:27.828104   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.828282   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:27.828386   75137 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:16:27.908710   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 21:16:27.930611   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1204 21:16:27.951287   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 21:16:27.971650   75137 provision.go:87] duration metric: took 343.766934ms to configureAuth
	I1204 21:16:27.971684   75137 buildroot.go:189] setting minikube options for container-runtime
	I1204 21:16:27.971861   75137 config.go:182] Loaded profile config "embed-certs-566991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:16:27.971984   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:27.974579   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.974924   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.974964   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.975127   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:27.975316   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.975486   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.975617   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:27.975771   75137 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:27.975962   75137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1204 21:16:27.975985   75137 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 21:16:28.177596   75137 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 21:16:28.177627   75137 machine.go:96] duration metric: took 885.820166ms to provisionDockerMachine
	I1204 21:16:28.177643   75137 start.go:293] postStartSetup for "embed-certs-566991" (driver="kvm2")
	I1204 21:16:28.177657   75137 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 21:16:28.177681   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:28.177998   75137 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 21:16:28.178026   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:28.180461   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.180777   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:28.180809   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.180936   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:28.181122   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:28.181292   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:28.181430   75137 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:16:28.260618   75137 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 21:16:28.264349   75137 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 21:16:28.264371   75137 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 21:16:28.264448   75137 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 21:16:28.264543   75137 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 21:16:28.264657   75137 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 21:16:28.272916   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:16:28.294517   75137 start.go:296] duration metric: took 116.858398ms for postStartSetup
	I1204 21:16:28.294564   75137 fix.go:56] duration metric: took 18.709913535s for fixHost
	I1204 21:16:28.294589   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:28.297320   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.297628   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:28.297661   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.297869   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:28.298067   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:28.298219   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:28.298346   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:28.298544   75137 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:28.298705   75137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1204 21:16:28.298714   75137 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 21:16:28.395722   75137 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733346988.368807705
	
	I1204 21:16:28.395745   75137 fix.go:216] guest clock: 1733346988.368807705
	I1204 21:16:28.395755   75137 fix.go:229] Guest: 2024-12-04 21:16:28.368807705 +0000 UTC Remote: 2024-12-04 21:16:28.294570064 +0000 UTC m=+286.315482748 (delta=74.237641ms)
	I1204 21:16:28.395781   75137 fix.go:200] guest clock delta is within tolerance: 74.237641ms
	I1204 21:16:28.395788   75137 start.go:83] releasing machines lock for "embed-certs-566991", held for 18.811169167s
	I1204 21:16:28.395828   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:28.396146   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetIP
	I1204 21:16:28.398895   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.399273   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:28.399315   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.399472   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:28.399971   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:28.400138   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:28.400232   75137 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 21:16:28.400282   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:28.400303   75137 ssh_runner.go:195] Run: cat /version.json
	I1204 21:16:28.400325   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:28.402965   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.402990   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.403405   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:28.403434   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.403460   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:28.403475   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.403571   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:28.403643   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:28.403782   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:28.403872   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:28.403938   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:28.404022   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:28.404173   75137 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:16:28.404187   75137 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:16:28.498689   75137 ssh_runner.go:195] Run: systemctl --version
	I1204 21:16:28.503855   75137 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 21:16:28.639322   75137 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 21:16:28.645881   75137 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 21:16:28.645979   75137 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 21:16:28.662196   75137 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 21:16:28.662224   75137 start.go:495] detecting cgroup driver to use...
	I1204 21:16:28.662299   75137 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 21:16:28.679458   75137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 21:16:28.693004   75137 docker.go:217] disabling cri-docker service (if available) ...
	I1204 21:16:28.693078   75137 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 21:16:28.706303   75137 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 21:16:28.719763   75137 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 21:16:28.831131   75137 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 21:16:28.980878   75137 docker.go:233] disabling docker service ...
	I1204 21:16:28.980952   75137 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 21:16:28.995057   75137 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 21:16:29.007885   75137 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 21:16:29.140636   75137 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 21:16:29.281876   75137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 21:16:29.297602   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 21:16:29.314375   75137 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 21:16:29.314444   75137 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:29.324326   75137 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 21:16:29.324381   75137 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:29.333895   75137 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:29.343269   75137 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:29.352608   75137 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 21:16:29.363227   75137 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:29.372736   75137 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:29.389585   75137 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:29.399137   75137 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 21:16:29.407800   75137 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 21:16:29.407859   75137 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 21:16:29.421492   75137 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 21:16:29.431191   75137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:16:29.531043   75137 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 21:16:29.634995   75137 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 21:16:29.635092   75137 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
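After restarting CRI-O, the log waits up to 60s for the socket path to appear; a minimal polling sketch of that wait is below (the 500ms poll interval is an assumption, not taken from the source).

// Illustrative sketch: poll for a socket path until a deadline.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}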
	I1204 21:16:29.640185   75137 start.go:563] Will wait 60s for crictl version
	I1204 21:16:29.640249   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:16:29.644117   75137 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 21:16:29.683424   75137 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 21:16:29.683505   75137 ssh_runner.go:195] Run: crio --version
	I1204 21:16:29.709015   75137 ssh_runner.go:195] Run: crio --version
	I1204 21:16:29.737931   75137 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 21:16:28.420626   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .Start
	I1204 21:16:28.420792   75464 main.go:141] libmachine: (old-k8s-version-082859) Ensuring networks are active...
	I1204 21:16:28.421532   75464 main.go:141] libmachine: (old-k8s-version-082859) Ensuring network default is active
	I1204 21:16:28.421902   75464 main.go:141] libmachine: (old-k8s-version-082859) Ensuring network mk-old-k8s-version-082859 is active
	I1204 21:16:28.422289   75464 main.go:141] libmachine: (old-k8s-version-082859) Getting domain xml...
	I1204 21:16:28.422943   75464 main.go:141] libmachine: (old-k8s-version-082859) Creating domain...
	I1204 21:16:29.678419   75464 main.go:141] libmachine: (old-k8s-version-082859) Waiting to get IP...
	I1204 21:16:29.679445   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:29.679839   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:29.679884   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:29.679807   76539 retry.go:31] will retry after 289.179197ms: waiting for machine to come up
	I1204 21:16:29.971185   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:29.971736   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:29.971767   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:29.971681   76539 retry.go:31] will retry after 303.202104ms: waiting for machine to come up
	I1204 21:16:30.277151   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:30.277652   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:30.277681   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:30.277613   76539 retry.go:31] will retry after 410.628355ms: waiting for machine to come up
	I1204 21:16:30.690254   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:30.690792   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:30.690822   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:30.690750   76539 retry.go:31] will retry after 505.05844ms: waiting for machine to come up
	I1204 21:16:31.197454   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:31.197914   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:31.197943   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:31.197868   76539 retry.go:31] will retry after 592.512014ms: waiting for machine to come up
	I1204 21:16:29.739276   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetIP
	I1204 21:16:29.742209   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:29.742581   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:29.742611   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:29.742817   75137 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1204 21:16:29.746557   75137 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:16:29.757975   75137 kubeadm.go:883] updating cluster {Name:embed-certs-566991 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-566991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.82 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 21:16:29.758110   75137 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 21:16:29.758153   75137 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:16:29.790957   75137 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1204 21:16:29.791029   75137 ssh_runner.go:195] Run: which lz4
	I1204 21:16:29.794873   75137 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1204 21:16:29.798613   75137 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1204 21:16:29.798642   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1204 21:16:31.060492   75137 crio.go:462] duration metric: took 1.265651412s to copy over tarball
	I1204 21:16:31.060599   75137 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1204 21:16:31.791677   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:31.792193   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:31.792218   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:31.792126   76539 retry.go:31] will retry after 898.531247ms: waiting for machine to come up
	I1204 21:16:32.692886   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:32.693288   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:32.693309   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:32.693246   76539 retry.go:31] will retry after 832.069841ms: waiting for machine to come up
	I1204 21:16:33.526732   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:33.527291   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:33.527324   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:33.527254   76539 retry.go:31] will retry after 962.847408ms: waiting for machine to come up
	I1204 21:16:34.491553   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:34.492032   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:34.492062   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:34.491983   76539 retry.go:31] will retry after 1.207785601s: waiting for machine to come up
	I1204 21:16:35.701559   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:35.702070   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:35.702096   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:35.702031   76539 retry.go:31] will retry after 1.685825115s: waiting for machine to come up
	I1204 21:16:33.200389   75137 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.139761453s)
	I1204 21:16:33.200414   75137 crio.go:469] duration metric: took 2.139886465s to extract the tarball
	I1204 21:16:33.200421   75137 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1204 21:16:33.235706   75137 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:16:33.275780   75137 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 21:16:33.275803   75137 cache_images.go:84] Images are preloaded, skipping loading
	I1204 21:16:33.275811   75137 kubeadm.go:934] updating node { 192.168.39.82 8443 v1.31.2 crio true true} ...
	I1204 21:16:33.275916   75137 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-566991 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.82
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-566991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 21:16:33.276001   75137 ssh_runner.go:195] Run: crio config
	I1204 21:16:33.330445   75137 cni.go:84] Creating CNI manager for ""
	I1204 21:16:33.330470   75137 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:16:33.330479   75137 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 21:16:33.330502   75137 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.82 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-566991 NodeName:embed-certs-566991 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.82"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.82 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 21:16:33.330663   75137 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.82
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-566991"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.82"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.82"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1204 21:16:33.330730   75137 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 21:16:33.340505   75137 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 21:16:33.340586   75137 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 21:16:33.349589   75137 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1204 21:16:33.365156   75137 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 21:16:33.380757   75137 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1204 21:16:33.396851   75137 ssh_runner.go:195] Run: grep 192.168.39.82	control-plane.minikube.internal$ /etc/hosts
	I1204 21:16:33.400473   75137 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.82	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:16:33.411670   75137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:16:33.543788   75137 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:16:33.564105   75137 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991 for IP: 192.168.39.82
	I1204 21:16:33.564138   75137 certs.go:194] generating shared ca certs ...
	I1204 21:16:33.564158   75137 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:16:33.564343   75137 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 21:16:33.564425   75137 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 21:16:33.564443   75137 certs.go:256] generating profile certs ...
	I1204 21:16:33.564570   75137 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/client.key
	I1204 21:16:33.564668   75137 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/apiserver.key.ba71006c
	I1204 21:16:33.564724   75137 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/proxy-client.key
	I1204 21:16:33.564892   75137 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 21:16:33.564945   75137 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 21:16:33.564972   75137 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 21:16:33.565019   75137 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 21:16:33.565052   75137 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 21:16:33.565087   75137 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 21:16:33.565145   75137 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:16:33.566045   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 21:16:33.608433   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 21:16:33.635211   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 21:16:33.672472   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 21:16:33.701021   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1204 21:16:33.731665   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 21:16:33.756414   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 21:16:33.778799   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 21:16:33.801308   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 21:16:33.822986   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 21:16:33.844820   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 21:16:33.866558   75137 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 21:16:33.881830   75137 ssh_runner.go:195] Run: openssl version
	I1204 21:16:33.887334   75137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 21:16:33.897261   75137 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:16:33.901411   75137 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:16:33.901479   75137 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:16:33.906997   75137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 21:16:33.916799   75137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 21:16:33.926687   75137 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 21:16:33.930807   75137 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 21:16:33.930859   75137 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 21:16:33.943622   75137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 21:16:33.958682   75137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 21:16:33.972391   75137 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 21:16:33.977777   75137 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 21:16:33.977822   75137 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 21:16:33.984628   75137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 21:16:33.994531   75137 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 21:16:33.998695   75137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 21:16:34.004299   75137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 21:16:34.009688   75137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 21:16:34.015197   75137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 21:16:34.020625   75137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 21:16:34.025987   75137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1204 21:16:34.031435   75137 kubeadm.go:392] StartCluster: {Name:embed-certs-566991 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-566991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.82 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:16:34.031517   75137 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 21:16:34.031567   75137 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:16:34.067450   75137 cri.go:89] found id: ""
	I1204 21:16:34.067550   75137 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 21:16:34.077454   75137 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1204 21:16:34.077486   75137 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1204 21:16:34.077536   75137 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1204 21:16:34.086795   75137 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1204 21:16:34.087776   75137 kubeconfig.go:125] found "embed-certs-566991" server: "https://192.168.39.82:8443"
	I1204 21:16:34.089769   75137 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1204 21:16:34.098751   75137 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.82
	I1204 21:16:34.098784   75137 kubeadm.go:1160] stopping kube-system containers ...
	I1204 21:16:34.098798   75137 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1204 21:16:34.098853   75137 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:16:34.138445   75137 cri.go:89] found id: ""
	I1204 21:16:34.138523   75137 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1204 21:16:34.155890   75137 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:16:34.165568   75137 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:16:34.165596   75137 kubeadm.go:157] found existing configuration files:
	
	I1204 21:16:34.165647   75137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:16:34.174688   75137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:16:34.174758   75137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:16:34.183835   75137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:16:34.192637   75137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:16:34.192690   75137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:16:34.201663   75137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:16:34.210254   75137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:16:34.210297   75137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:16:34.219235   75137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:16:34.227890   75137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:16:34.227972   75137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:16:34.236954   75137 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:16:34.246061   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:16:34.352189   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:16:35.133652   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:16:35.320296   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:16:35.384361   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:16:35.458221   75137 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:16:35.458352   75137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:16:35.959480   75137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:16:36.459120   75137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:16:36.959170   75137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:16:37.458423   75137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:16:37.488815   75137 api_server.go:72] duration metric: took 2.030596307s to wait for apiserver process to appear ...
	I1204 21:16:37.488850   75137 api_server.go:88] waiting for apiserver healthz status ...
	I1204 21:16:37.488875   75137 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I1204 21:16:37.489349   75137 api_server.go:269] stopped: https://192.168.39.82:8443/healthz: Get "https://192.168.39.82:8443/healthz": dial tcp 192.168.39.82:8443: connect: connection refused
	I1204 21:16:37.990012   75137 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I1204 21:16:39.696011   75137 api_server.go:279] https://192.168.39.82:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1204 21:16:39.696060   75137 api_server.go:103] status: https://192.168.39.82:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1204 21:16:39.696077   75137 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I1204 21:16:39.705288   75137 api_server.go:279] https://192.168.39.82:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1204 21:16:39.705322   75137 api_server.go:103] status: https://192.168.39.82:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1204 21:16:39.989707   75137 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I1204 21:16:39.993934   75137 api_server.go:279] https://192.168.39.82:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:16:39.993959   75137 api_server.go:103] status: https://192.168.39.82:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:16:40.489545   75137 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I1204 21:16:40.494002   75137 api_server.go:279] https://192.168.39.82:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:16:40.494033   75137 api_server.go:103] status: https://192.168.39.82:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:16:40.989641   75137 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I1204 21:16:40.998171   75137 api_server.go:279] https://192.168.39.82:8443/healthz returned 200:
	ok
	I1204 21:16:41.006208   75137 api_server.go:141] control plane version: v1.31.2
	I1204 21:16:41.006238   75137 api_server.go:131] duration metric: took 3.517379108s to wait for apiserver health ...
	I1204 21:16:41.006250   75137 cni.go:84] Creating CNI manager for ""
	I1204 21:16:41.006259   75137 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:16:41.008031   75137 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 21:16:37.390104   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:37.390474   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:37.390499   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:37.390433   76539 retry.go:31] will retry after 1.755395869s: waiting for machine to come up
	I1204 21:16:39.148189   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:39.148723   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:39.148754   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:39.148694   76539 retry.go:31] will retry after 2.645343215s: waiting for machine to come up
	I1204 21:16:41.009338   75137 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 21:16:41.026475   75137 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1204 21:16:41.051888   75137 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 21:16:41.064813   75137 system_pods.go:59] 8 kube-system pods found
	I1204 21:16:41.064859   75137 system_pods.go:61] "coredns-7c65d6cfc9-ct5xn" [be113b96-b21f-4fd5-8cd9-11b149a0a838] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1204 21:16:41.064870   75137 system_pods.go:61] "etcd-embed-certs-566991" [23603883-2c42-48ff-95f5-d58f04bab630] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1204 21:16:41.064880   75137 system_pods.go:61] "kube-apiserver-embed-certs-566991" [880279d0-9c57-44b1-b223-cea07fc8552e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1204 21:16:41.064887   75137 system_pods.go:61] "kube-controller-manager-embed-certs-566991" [1512be05-cbf1-48ca-a0a5-db1e320040e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1204 21:16:41.064893   75137 system_pods.go:61] "kube-proxy-4fv72" [22b84591-6767-4414-9869-9d89206a03f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1204 21:16:41.064898   75137 system_pods.go:61] "kube-scheduler-embed-certs-566991" [1eca2a77-0f2a-4d94-992e-22acf8f54649] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1204 21:16:41.064910   75137 system_pods.go:61] "metrics-server-6867b74b74-9vlcd" [1acb08f3-e403-458d-b3e2-e32c07da6afb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:16:41.064922   75137 system_pods.go:61] "storage-provisioner" [f8acdb07-16e7-457f-81b8-85416b849890] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1204 21:16:41.064930   75137 system_pods.go:74] duration metric: took 13.019489ms to wait for pod list to return data ...
	I1204 21:16:41.064944   75137 node_conditions.go:102] verifying NodePressure condition ...
	I1204 21:16:41.068574   75137 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 21:16:41.068607   75137 node_conditions.go:123] node cpu capacity is 2
	I1204 21:16:41.068623   75137 node_conditions.go:105] duration metric: took 3.673752ms to run NodePressure ...
	I1204 21:16:41.068644   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:16:41.356054   75137 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1204 21:16:41.359997   75137 kubeadm.go:739] kubelet initialised
	I1204 21:16:41.360018   75137 kubeadm.go:740] duration metric: took 3.942716ms waiting for restarted kubelet to initialise ...
	I1204 21:16:41.360026   75137 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:16:41.365945   75137 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:41.370858   75137 pod_ready.go:98] node "embed-certs-566991" hosting pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.370886   75137 pod_ready.go:82] duration metric: took 4.912525ms for pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace to be "Ready" ...
	E1204 21:16:41.370904   75137 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-566991" hosting pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.370913   75137 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:41.376666   75137 pod_ready.go:98] node "embed-certs-566991" hosting pod "etcd-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.376689   75137 pod_ready.go:82] duration metric: took 5.763328ms for pod "etcd-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	E1204 21:16:41.376698   75137 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-566991" hosting pod "etcd-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.376705   75137 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:41.381261   75137 pod_ready.go:98] node "embed-certs-566991" hosting pod "kube-apiserver-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.381285   75137 pod_ready.go:82] duration metric: took 4.57138ms for pod "kube-apiserver-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	E1204 21:16:41.381296   75137 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-566991" hosting pod "kube-apiserver-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.381305   75137 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:41.455155   75137 pod_ready.go:98] node "embed-certs-566991" hosting pod "kube-controller-manager-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.455195   75137 pod_ready.go:82] duration metric: took 73.873767ms for pod "kube-controller-manager-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	E1204 21:16:41.455208   75137 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-566991" hosting pod "kube-controller-manager-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.455217   75137 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4fv72" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:41.854723   75137 pod_ready.go:98] node "embed-certs-566991" hosting pod "kube-proxy-4fv72" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.854759   75137 pod_ready.go:82] duration metric: took 399.531662ms for pod "kube-proxy-4fv72" in "kube-system" namespace to be "Ready" ...
	E1204 21:16:41.854773   75137 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-566991" hosting pod "kube-proxy-4fv72" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.854782   75137 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:42.255217   75137 pod_ready.go:98] node "embed-certs-566991" hosting pod "kube-scheduler-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:42.255242   75137 pod_ready.go:82] duration metric: took 400.451937ms for pod "kube-scheduler-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	E1204 21:16:42.255254   75137 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-566991" hosting pod "kube-scheduler-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:42.255263   75137 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:42.655193   75137 pod_ready.go:98] node "embed-certs-566991" hosting pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:42.655222   75137 pod_ready.go:82] duration metric: took 399.948182ms for pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace to be "Ready" ...
	E1204 21:16:42.655234   75137 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-566991" hosting pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:42.655244   75137 pod_ready.go:39] duration metric: took 1.295209634s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:16:42.655263   75137 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 21:16:42.666489   75137 ops.go:34] apiserver oom_adj: -16
	I1204 21:16:42.666504   75137 kubeadm.go:597] duration metric: took 8.589012522s to restartPrimaryControlPlane
	I1204 21:16:42.666512   75137 kubeadm.go:394] duration metric: took 8.635083145s to StartCluster
	I1204 21:16:42.666526   75137 settings.go:142] acquiring lock: {Name:mk51df5708ef0b8fe125ead566b8d3e857234e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:16:42.666587   75137 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:16:42.668175   75137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/kubeconfig: {Name:mk338cb7deb77a607d0c199d94a556bdfd19bef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:16:42.668388   75137 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.82 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 21:16:42.668451   75137 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 21:16:42.668548   75137 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-566991"
	I1204 21:16:42.668569   75137 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-566991"
	W1204 21:16:42.668576   75137 addons.go:243] addon storage-provisioner should already be in state true
	I1204 21:16:42.668605   75137 host.go:66] Checking if "embed-certs-566991" exists ...
	I1204 21:16:42.668611   75137 addons.go:69] Setting default-storageclass=true in profile "embed-certs-566991"
	I1204 21:16:42.668628   75137 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-566991"
	I1204 21:16:42.668661   75137 config.go:182] Loaded profile config "embed-certs-566991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:16:42.668675   75137 addons.go:69] Setting metrics-server=true in profile "embed-certs-566991"
	I1204 21:16:42.668719   75137 addons.go:234] Setting addon metrics-server=true in "embed-certs-566991"
	W1204 21:16:42.668738   75137 addons.go:243] addon metrics-server should already be in state true
	I1204 21:16:42.668796   75137 host.go:66] Checking if "embed-certs-566991" exists ...
	I1204 21:16:42.669037   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:42.669094   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:42.669037   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:42.669158   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:42.669169   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:42.669210   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:42.671592   75137 out.go:177] * Verifying Kubernetes components...
	I1204 21:16:42.673134   75137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:16:42.684920   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43467
	I1204 21:16:42.684939   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35079
	I1204 21:16:42.685084   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46109
	I1204 21:16:42.685298   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:42.685386   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:42.685791   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:42.685810   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:42.685905   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:42.685926   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:42.686119   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:42.686297   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:42.686401   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetState
	I1204 21:16:42.686833   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:42.686880   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:42.687004   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:42.687527   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:42.687545   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:42.687890   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:42.688475   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:42.688522   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:42.689348   75137 addons.go:234] Setting addon default-storageclass=true in "embed-certs-566991"
	W1204 21:16:42.689365   75137 addons.go:243] addon default-storageclass should already be in state true
	I1204 21:16:42.689385   75137 host.go:66] Checking if "embed-certs-566991" exists ...
	I1204 21:16:42.689647   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:42.689682   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:42.702175   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33089
	I1204 21:16:42.702672   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:42.703170   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:42.703188   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:42.703226   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38195
	I1204 21:16:42.703537   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:42.703674   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:42.703716   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetState
	I1204 21:16:42.704271   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:42.704295   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:42.704612   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:42.705178   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:42.705218   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:42.705552   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:42.707473   75137 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1204 21:16:42.707479   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33249
	I1204 21:16:42.707808   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:42.708177   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:42.708192   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:42.708551   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:42.708692   75137 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1204 21:16:42.708703   75137 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1204 21:16:42.708713   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetState
	I1204 21:16:42.708714   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:42.710474   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:42.711964   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:42.712040   75137 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:16:42.712386   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:42.712409   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:42.712558   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:42.712726   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:42.712867   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:42.713010   75137 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:16:42.713257   75137 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:16:42.713268   75137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 21:16:42.713279   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:42.715855   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:42.716296   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:42.716325   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:42.716472   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:42.716632   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:42.716744   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:42.716860   75137 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:16:42.727365   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40443
	I1204 21:16:42.727830   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:42.728302   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:42.728330   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:42.728651   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:42.728838   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetState
	I1204 21:16:42.730408   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:42.730603   75137 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 21:16:42.730617   75137 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 21:16:42.730630   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:42.733179   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:42.733523   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:42.733550   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:42.733695   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:42.733846   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:42.733991   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:42.734105   75137 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:16:42.871601   75137 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:16:42.889651   75137 node_ready.go:35] waiting up to 6m0s for node "embed-certs-566991" to be "Ready" ...
	I1204 21:16:43.016150   75137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:16:43.017983   75137 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1204 21:16:43.018006   75137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1204 21:16:43.048666   75137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 21:16:43.061060   75137 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1204 21:16:43.061089   75137 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1204 21:16:43.105294   75137 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 21:16:43.105320   75137 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1204 21:16:43.175330   75137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 21:16:44.324823   75137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.276121269s)
	I1204 21:16:44.324881   75137 main.go:141] libmachine: Making call to close driver server
	I1204 21:16:44.324889   75137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.308706273s)
	I1204 21:16:44.324893   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:16:44.324908   75137 main.go:141] libmachine: Making call to close driver server
	I1204 21:16:44.324922   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:16:44.325213   75137 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:16:44.325264   75137 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:16:44.325289   75137 main.go:141] libmachine: Making call to close driver server
	I1204 21:16:44.325272   75137 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:16:44.325297   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:16:44.325304   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:16:44.325302   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:16:44.325381   75137 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:16:44.325409   75137 main.go:141] libmachine: Making call to close driver server
	I1204 21:16:44.325417   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:16:44.325539   75137 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:16:44.325552   75137 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:16:44.325574   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:16:44.325751   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:16:44.325792   75137 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:16:44.325813   75137 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:16:44.331866   75137 main.go:141] libmachine: Making call to close driver server
	I1204 21:16:44.331881   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:16:44.332102   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:16:44.332139   75137 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:16:44.332149   75137 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:16:44.398251   75137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.222883924s)
	I1204 21:16:44.398300   75137 main.go:141] libmachine: Making call to close driver server
	I1204 21:16:44.398312   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:16:44.398563   75137 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:16:44.398583   75137 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:16:44.398590   75137 main.go:141] libmachine: Making call to close driver server
	I1204 21:16:44.398597   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:16:44.398606   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:16:44.398855   75137 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:16:44.398878   75137 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:16:44.398888   75137 addons.go:475] Verifying addon metrics-server=true in "embed-certs-566991"
	I1204 21:16:44.398889   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:16:44.400887   75137 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1204 21:16:41.796452   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:41.796909   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:41.796943   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:41.796881   76539 retry.go:31] will retry after 2.938505727s: waiting for machine to come up
	I1204 21:16:44.737247   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:44.737772   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:44.737796   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:44.737726   76539 retry.go:31] will retry after 5.554286056s: waiting for machine to come up
	I1204 21:16:44.402265   75137 addons.go:510] duration metric: took 1.733822331s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
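	The phase above ends with minikube reporting "Verifying addon metrics-server=true" and the enabled-addons summary. As an editor-added, hedged illustration (the v1beta1.metrics.k8s.io APIService and the kube-system/metrics-server Deployment are the standard metrics-server object names, not values printed by this log), the same check can be reproduced by hand against the profile:
	
	kubectl --context embed-certs-566991 get apiservice v1beta1.metrics.k8s.io
	kubectl --context embed-certs-566991 -n kube-system rollout status deployment/metrics-server --timeout=120s
	kubectl --context embed-certs-566991 top nodes    # succeeds once the first metrics scrape has landed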
	I1204 21:16:44.894002   75137 node_ready.go:53] node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:50.293115   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.293594   75464 main.go:141] libmachine: (old-k8s-version-082859) Found IP for machine: 192.168.72.180
	I1204 21:16:50.293638   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has current primary IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.293651   75464 main.go:141] libmachine: (old-k8s-version-082859) Reserving static IP address...
	I1204 21:16:50.294066   75464 main.go:141] libmachine: (old-k8s-version-082859) Reserved static IP address: 192.168.72.180
	I1204 21:16:50.294102   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "old-k8s-version-082859", mac: "52:54:00:30:6e:ae", ip: "192.168.72.180"} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.294118   75464 main.go:141] libmachine: (old-k8s-version-082859) Waiting for SSH to be available...
	I1204 21:16:50.294148   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | skip adding static IP to network mk-old-k8s-version-082859 - found existing host DHCP lease matching {name: "old-k8s-version-082859", mac: "52:54:00:30:6e:ae", ip: "192.168.72.180"}
	I1204 21:16:50.294164   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | Getting to WaitForSSH function...
	I1204 21:16:50.296406   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.296738   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.296767   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.296893   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | Using SSH client type: external
	I1204 21:16:50.296917   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa (-rw-------)
	I1204 21:16:50.296949   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.180 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 21:16:50.296966   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | About to run SSH command:
	I1204 21:16:50.296978   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | exit 0
	I1204 21:16:50.419468   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | SSH cmd err, output: <nil>: 
	I1204 21:16:50.419834   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetConfigRaw
	I1204 21:16:50.420486   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetIP
	I1204 21:16:50.422797   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.423098   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.423123   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.423319   75464 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/config.json ...
	I1204 21:16:50.423555   75464 machine.go:93] provisionDockerMachine start ...
	I1204 21:16:50.423579   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:50.423793   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:50.426050   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.426372   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.426402   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.426520   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:50.426706   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.426886   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.427011   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:50.427208   75464 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:50.427439   75464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1204 21:16:50.427453   75464 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 21:16:50.527818   75464 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 21:16:50.527853   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetMachineName
	I1204 21:16:50.528150   75464 buildroot.go:166] provisioning hostname "old-k8s-version-082859"
	I1204 21:16:50.528188   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetMachineName
	I1204 21:16:50.528423   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:50.531470   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.531920   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.531949   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.532195   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:50.532400   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.532575   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.532733   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:50.532911   75464 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:50.533125   75464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1204 21:16:50.533138   75464 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-082859 && echo "old-k8s-version-082859" | sudo tee /etc/hostname
	I1204 21:16:50.653111   75464 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-082859
	
	I1204 21:16:50.653146   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:50.656340   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.656681   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.656715   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.656946   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:50.657161   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.657338   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.657493   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:50.657649   75464 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:50.657859   75464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1204 21:16:50.657879   75464 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-082859' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-082859/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-082859' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 21:16:50.772193   75464 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 21:16:50.772236   75464 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 21:16:50.772265   75464 buildroot.go:174] setting up certificates
	I1204 21:16:50.772282   75464 provision.go:84] configureAuth start
	I1204 21:16:50.772299   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetMachineName
	I1204 21:16:50.772611   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetIP
	I1204 21:16:50.775486   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.775889   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.775917   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.776053   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:50.778293   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.778611   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.778640   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.778859   75464 provision.go:143] copyHostCerts
	I1204 21:16:50.778920   75464 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 21:16:50.778934   75464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 21:16:50.778991   75464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 21:16:50.779093   75464 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 21:16:50.779106   75464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 21:16:50.779134   75464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 21:16:50.779279   75464 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 21:16:50.779291   75464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 21:16:50.779317   75464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 21:16:50.779411   75464 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-082859 san=[127.0.0.1 192.168.72.180 localhost minikube old-k8s-version-082859]
	I1204 21:16:50.991857   75464 provision.go:177] copyRemoteCerts
	I1204 21:16:50.991917   75464 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 21:16:50.991939   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:50.994612   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.994999   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.995028   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.995178   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:50.995427   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.995587   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:50.995731   75464 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa Username:docker}
	I1204 21:16:51.074162   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 21:16:51.097649   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1204 21:16:51.120589   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1204 21:16:51.143303   75464 provision.go:87] duration metric: took 371.008346ms to configureAuth
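	configureAuth above generated a server certificate with the SANs listed earlier (127.0.0.1 192.168.72.180 localhost minikube old-k8s-version-082859) and copied it to /etc/docker/server.pem on the guest. A hedged, editor-added spot check (assumes openssl is available in the Buildroot guest image; this command is illustrative and was not run as part of this test):
	
	minikube ssh -p old-k8s-version-082859 -- sudo openssl x509 -in /etc/docker/server.pem -noout -ext subjectAltName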
	I1204 21:16:51.143324   75464 buildroot.go:189] setting minikube options for container-runtime
	I1204 21:16:51.143500   75464 config.go:182] Loaded profile config "old-k8s-version-082859": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1204 21:16:51.143561   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:51.146357   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.146676   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:51.146715   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.146867   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:51.147061   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.147275   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.147480   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:51.147672   75464 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:51.147851   75464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1204 21:16:51.147872   75464 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 21:16:51.587574   75746 start.go:364] duration metric: took 3m48.834641003s to acquireMachinesLock for "default-k8s-diff-port-439360"
	I1204 21:16:51.587653   75746 start.go:96] Skipping create...Using existing machine configuration
	I1204 21:16:51.587665   75746 fix.go:54] fixHost starting: 
	I1204 21:16:51.588066   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:51.588117   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:51.604628   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41655
	I1204 21:16:51.605057   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:51.605553   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:16:51.605580   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:51.605940   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:51.606149   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:16:51.606327   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetState
	I1204 21:16:51.608008   75746 fix.go:112] recreateIfNeeded on default-k8s-diff-port-439360: state=Stopped err=<nil>
	I1204 21:16:51.608043   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	W1204 21:16:51.608211   75746 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 21:16:51.609867   75746 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-439360" ...
	I1204 21:16:47.393499   75137 node_ready.go:53] node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:49.893470   75137 node_ready.go:53] node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:50.393615   75137 node_ready.go:49] node "embed-certs-566991" has status "Ready":"True"
	I1204 21:16:50.393638   75137 node_ready.go:38] duration metric: took 7.503954553s for node "embed-certs-566991" to be "Ready" ...
	I1204 21:16:50.393648   75137 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:16:50.398881   75137 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:51.611005   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Start
	I1204 21:16:51.611185   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Ensuring networks are active...
	I1204 21:16:51.612110   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Ensuring network default is active
	I1204 21:16:51.612529   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Ensuring network mk-default-k8s-diff-port-439360 is active
	I1204 21:16:51.612978   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Getting domain xml...
	I1204 21:16:51.613795   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Creating domain...
	I1204 21:16:51.367959   75464 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 21:16:51.367992   75464 machine.go:96] duration metric: took 944.422035ms to provisionDockerMachine
	I1204 21:16:51.368004   75464 start.go:293] postStartSetup for "old-k8s-version-082859" (driver="kvm2")
	I1204 21:16:51.368014   75464 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 21:16:51.368030   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:51.368382   75464 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 21:16:51.368431   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:51.371253   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.371631   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:51.371667   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.371831   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:51.372033   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.372201   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:51.372338   75464 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa Username:docker}
	I1204 21:16:51.449712   75464 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 21:16:51.453668   75464 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 21:16:51.453694   75464 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 21:16:51.453771   75464 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 21:16:51.453867   75464 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 21:16:51.453995   75464 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 21:16:51.463766   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:16:51.486114   75464 start.go:296] duration metric: took 118.097017ms for postStartSetup
	I1204 21:16:51.486162   75464 fix.go:56] duration metric: took 23.090160362s for fixHost
	I1204 21:16:51.486190   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:51.488901   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.489286   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:51.489317   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.489450   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:51.489662   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.489835   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.489975   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:51.490137   75464 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:51.490373   75464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1204 21:16:51.490386   75464 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 21:16:51.587355   75464 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733347011.543416414
	
	I1204 21:16:51.587402   75464 fix.go:216] guest clock: 1733347011.543416414
	I1204 21:16:51.587413   75464 fix.go:229] Guest: 2024-12-04 21:16:51.543416414 +0000 UTC Remote: 2024-12-04 21:16:51.486170924 +0000 UTC m=+270.217910239 (delta=57.24549ms)
	I1204 21:16:51.587442   75464 fix.go:200] guest clock delta is within tolerance: 57.24549ms
	I1204 21:16:51.587450   75464 start.go:83] releasing machines lock for "old-k8s-version-082859", held for 23.191479372s
	I1204 21:16:51.587484   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:51.587753   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetIP
	I1204 21:16:51.590521   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.590901   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:51.590933   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.591076   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:51.591556   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:51.591757   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:51.591857   75464 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 21:16:51.591897   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:51.592007   75464 ssh_runner.go:195] Run: cat /version.json
	I1204 21:16:51.592024   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:51.594840   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.595093   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.595267   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:51.595303   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.595349   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:51.595425   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.595529   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:51.595614   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:51.595714   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.595851   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:51.595872   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.596038   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:51.596091   75464 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa Username:docker}
	I1204 21:16:51.596192   75464 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa Username:docker}
	I1204 21:16:51.695215   75464 ssh_runner.go:195] Run: systemctl --version
	I1204 21:16:51.700624   75464 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 21:16:51.849457   75464 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 21:16:51.856420   75464 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 21:16:51.856506   75464 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 21:16:51.876202   75464 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 21:16:51.876230   75464 start.go:495] detecting cgroup driver to use...
	I1204 21:16:51.876311   75464 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 21:16:51.894549   75464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 21:16:51.911154   75464 docker.go:217] disabling cri-docker service (if available) ...
	I1204 21:16:51.911218   75464 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 21:16:51.924220   75464 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 21:16:51.936675   75464 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 21:16:52.058517   75464 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 21:16:52.224124   75464 docker.go:233] disabling docker service ...
	I1204 21:16:52.224202   75464 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 21:16:52.239294   75464 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 21:16:52.253779   75464 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 21:16:52.384577   75464 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 21:16:52.515024   75464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 21:16:52.529456   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 21:16:52.551978   75464 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1204 21:16:52.552043   75464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:52.563083   75464 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 21:16:52.563165   75464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:52.573409   75464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:52.583614   75464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:52.594313   75464 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 21:16:52.604389   75464 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 21:16:52.613326   75464 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 21:16:52.613402   75464 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 21:16:52.627764   75464 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 21:16:52.637330   75464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:16:52.755111   75464 ssh_runner.go:195] Run: sudo systemctl restart crio
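	Condensing the CRI-O preparation the log just walked through into one plain shell sequence (an editor-added summary of the commands shown above, assuming the stock /etc/crio/crio.conf.d/02-crio.conf drop-in shipped in the minikube guest image):
	
	# point crictl at CRI-O's socket
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# pause image and cgroup driver expected for Kubernetes v1.20.0 with kubeadm
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	# bridge traffic must be visible to iptables for the CNI to work
	sudo modprobe br_netfilter
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio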
	I1204 21:16:52.844027   75464 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 21:16:52.844093   75464 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 21:16:52.848602   75464 start.go:563] Will wait 60s for crictl version
	I1204 21:16:52.848676   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:52.852127   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 21:16:52.892934   75464 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 21:16:52.893076   75464 ssh_runner.go:195] Run: crio --version
	I1204 21:16:52.925376   75464 ssh_runner.go:195] Run: crio --version
	I1204 21:16:52.954480   75464 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1204 21:16:52.955897   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetIP
	I1204 21:16:52.958964   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:52.959353   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:52.959404   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:52.959641   75464 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1204 21:16:52.963601   75464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:16:52.975417   75464 kubeadm.go:883] updating cluster {Name:old-k8s-version-082859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-082859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 21:16:52.975578   75464 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1204 21:16:52.975644   75464 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:16:53.022050   75464 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1204 21:16:53.022128   75464 ssh_runner.go:195] Run: which lz4
	I1204 21:16:53.025986   75464 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1204 21:16:53.029928   75464 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1204 21:16:53.029962   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1204 21:16:54.579699   75464 crio.go:462] duration metric: took 1.553735037s to copy over tarball
	I1204 21:16:54.579783   75464 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1204 21:16:52.406305   75137 pod_ready.go:103] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"False"
	I1204 21:16:54.905969   75137 pod_ready.go:103] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"False"
	I1204 21:16:56.907170   75137 pod_ready.go:103] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"False"
	I1204 21:16:52.907033   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting to get IP...
	I1204 21:16:52.908195   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:52.908629   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:52.908717   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:52.908619   76731 retry.go:31] will retry after 296.289488ms: waiting for machine to come up
	I1204 21:16:53.207388   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:53.207971   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:53.208003   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:53.207935   76731 retry.go:31] will retry after 336.470328ms: waiting for machine to come up
	I1204 21:16:53.546821   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:53.547399   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:53.547439   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:53.547320   76731 retry.go:31] will retry after 368.42782ms: waiting for machine to come up
	I1204 21:16:53.917796   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:53.918528   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:53.918556   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:53.918431   76731 retry.go:31] will retry after 436.479409ms: waiting for machine to come up
	I1204 21:16:54.357126   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:54.357698   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:54.357732   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:54.357643   76731 retry.go:31] will retry after 752.80332ms: waiting for machine to come up
	I1204 21:16:55.112409   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:55.112880   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:55.112907   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:55.112827   76731 retry.go:31] will retry after 649.088241ms: waiting for machine to come up
	I1204 21:16:55.763391   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:55.763912   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:55.763956   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:55.763859   76731 retry.go:31] will retry after 1.037502744s: waiting for machine to come up
	I1204 21:16:56.803681   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:56.804080   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:56.804114   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:56.804035   76731 retry.go:31] will retry after 1.021780396s: waiting for machine to come up
	I1204 21:16:57.410381   75464 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.830568445s)
	I1204 21:16:57.410444   75464 crio.go:469] duration metric: took 2.830692434s to extract the tarball
	I1204 21:16:57.410455   75464 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1204 21:16:57.452008   75464 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:16:57.484771   75464 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1204 21:16:57.484800   75464 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1204 21:16:57.484880   75464 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:16:57.484917   75464 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:57.484929   75464 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:57.484945   75464 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:57.484995   75464 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1204 21:16:57.484922   75464 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:57.485007   75464 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:57.485039   75464 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1204 21:16:57.486618   75464 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1204 21:16:57.486824   75464 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:57.486847   75464 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:57.486892   75464 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:16:57.486905   75464 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:57.486828   75464 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:57.486944   75464 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:57.486829   75464 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1204 21:16:57.655649   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:57.656853   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:57.667236   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:57.689357   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:57.698439   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1204 21:16:57.726269   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1204 21:16:57.727235   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:57.747271   75464 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1204 21:16:57.747329   75464 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:57.747332   75464 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1204 21:16:57.747364   75464 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:57.747500   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.747402   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.757217   75464 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1204 21:16:57.757260   75464 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:57.757319   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.800711   75464 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1204 21:16:57.800752   75464 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:57.800803   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.814692   75464 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1204 21:16:57.814738   75464 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1204 21:16:57.814789   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.829660   75464 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1204 21:16:57.829698   75464 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:57.829706   75464 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1204 21:16:57.829738   75464 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1204 21:16:57.829752   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.829764   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:57.829773   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.829821   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:57.829877   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:57.829909   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:57.829955   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1204 21:16:57.929510   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:57.929559   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:57.929579   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:57.929618   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1204 21:16:57.940211   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:57.940309   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:57.940359   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1204 21:16:58.051710   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1204 21:16:58.067494   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:58.067504   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:58.067573   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:58.083777   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1204 21:16:58.083833   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:58.083891   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:58.165786   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1204 21:16:58.229739   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1204 21:16:58.229803   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1204 21:16:58.229904   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:58.229951   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1204 21:16:58.230001   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1204 21:16:58.230045   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1204 21:16:58.261333   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1204 21:16:58.271293   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1204 21:16:58.405498   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:16:58.549255   75464 cache_images.go:92] duration metric: took 1.064434163s to LoadCachedImages
	W1204 21:16:58.549354   75464 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I1204 21:16:58.549372   75464 kubeadm.go:934] updating node { 192.168.72.180 8443 v1.20.0 crio true true} ...
	I1204 21:16:58.549512   75464 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-082859 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-082859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 21:16:58.549591   75464 ssh_runner.go:195] Run: crio config
	I1204 21:16:58.610182   75464 cni.go:84] Creating CNI manager for ""
	I1204 21:16:58.610209   75464 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:16:58.610221   75464 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 21:16:58.610246   75464 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.180 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-082859 NodeName:old-k8s-version-082859 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1204 21:16:58.610432   75464 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.180
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-082859"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.180
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.180"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1204 21:16:58.610512   75464 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1204 21:16:58.620337   75464 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 21:16:58.620421   75464 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 21:16:58.629244   75464 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1204 21:16:58.654214   75464 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 21:16:58.671268   75464 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1204 21:16:58.688068   75464 ssh_runner.go:195] Run: grep 192.168.72.180	control-plane.minikube.internal$ /etc/hosts
	I1204 21:16:58.691513   75464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.180	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:16:58.703609   75464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:16:58.831984   75464 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:16:58.850324   75464 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859 for IP: 192.168.72.180
	I1204 21:16:58.850354   75464 certs.go:194] generating shared ca certs ...
	I1204 21:16:58.850382   75464 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:16:58.850592   75464 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 21:16:58.850658   75464 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 21:16:58.850677   75464 certs.go:256] generating profile certs ...
	I1204 21:16:58.850811   75464 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/client.key
	I1204 21:16:58.850892   75464 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/apiserver.key.8d7b2cb2
	I1204 21:16:58.850958   75464 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/proxy-client.key
	I1204 21:16:58.851169   75464 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 21:16:58.851232   75464 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 21:16:58.851249   75464 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 21:16:58.851294   75464 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 21:16:58.851343   75464 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 21:16:58.851420   75464 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 21:16:58.851508   75464 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:16:58.852607   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 21:16:58.880792   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 21:16:58.913556   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 21:16:58.943549   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 21:16:58.981463   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1204 21:16:59.012983   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 21:16:59.042980   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 21:16:59.077664   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 21:16:59.105764   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 21:16:59.129236   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 21:16:59.153845   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 21:16:59.177201   75464 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 21:16:59.193861   75464 ssh_runner.go:195] Run: openssl version
	I1204 21:16:59.199898   75464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 21:16:59.211323   75464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:16:59.215867   75464 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:16:59.215922   75464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:16:59.221792   75464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 21:16:59.232621   75464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 21:16:59.243171   75464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 21:16:59.247786   75464 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 21:16:59.247847   75464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 21:16:59.253293   75464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 21:16:59.264011   75464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 21:16:59.274696   75464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 21:16:59.279083   75464 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 21:16:59.279142   75464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 21:16:59.284885   75464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 21:16:59.295857   75464 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 21:16:59.300285   75464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 21:16:59.306222   75464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 21:16:59.312113   75464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 21:16:59.318289   75464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 21:16:59.323933   75464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 21:16:59.329593   75464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1204 21:16:59.336271   75464 kubeadm.go:392] StartCluster: {Name:old-k8s-version-082859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-082859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:16:59.336388   75464 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 21:16:59.336445   75464 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:16:59.377102   75464 cri.go:89] found id: ""
	I1204 21:16:59.377186   75464 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 21:16:59.387322   75464 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1204 21:16:59.387348   75464 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1204 21:16:59.387426   75464 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1204 21:16:59.397012   75464 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1204 21:16:59.398490   75464 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-082859" does not appear in /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:16:59.399594   75464 kubeconfig.go:62] /home/jenkins/minikube-integration/19985-10581/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-082859" cluster setting kubeconfig missing "old-k8s-version-082859" context setting]
	I1204 21:16:59.401105   75464 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/kubeconfig: {Name:mk338cb7deb77a607d0c199d94a556bdfd19bef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:16:59.519931   75464 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1204 21:16:59.529805   75464 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.180
	I1204 21:16:59.529848   75464 kubeadm.go:1160] stopping kube-system containers ...
	I1204 21:16:59.529862   75464 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1204 21:16:59.529917   75464 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:16:59.564385   75464 cri.go:89] found id: ""
	I1204 21:16:59.564455   75464 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1204 21:16:59.580273   75464 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:16:59.590510   75464 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:16:59.590536   75464 kubeadm.go:157] found existing configuration files:
	
	I1204 21:16:59.590591   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:16:59.599597   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:16:59.599665   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:16:59.609075   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:16:59.618209   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:16:59.618281   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:16:59.627558   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:16:59.636062   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:16:59.636117   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:16:59.645337   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:16:59.653985   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:16:59.654027   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:16:59.662796   75464 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:16:59.671564   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:16:59.805252   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:00.525460   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:00.762769   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:00.873276   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:00.988761   75464 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:17:00.988887   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:16:58.405630   75137 pod_ready.go:93] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"True"
	I1204 21:16:58.405654   75137 pod_ready.go:82] duration metric: took 8.006745651s for pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:58.405669   75137 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:58.411605   75137 pod_ready.go:93] pod "etcd-embed-certs-566991" in "kube-system" namespace has status "Ready":"True"
	I1204 21:16:58.411634   75137 pod_ready.go:82] duration metric: took 5.952577ms for pod "etcd-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:58.411646   75137 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:58.421660   75137 pod_ready.go:93] pod "kube-apiserver-embed-certs-566991" in "kube-system" namespace has status "Ready":"True"
	I1204 21:16:58.421691   75137 pod_ready.go:82] duration metric: took 10.035417ms for pod "kube-apiserver-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:58.421708   75137 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:59.044823   75137 pod_ready.go:93] pod "kube-controller-manager-embed-certs-566991" in "kube-system" namespace has status "Ready":"True"
	I1204 21:16:59.044853   75137 pod_ready.go:82] duration metric: took 623.135154ms for pod "kube-controller-manager-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:59.044867   75137 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4fv72" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:59.051742   75137 pod_ready.go:93] pod "kube-proxy-4fv72" in "kube-system" namespace has status "Ready":"True"
	I1204 21:16:59.051768   75137 pod_ready.go:82] duration metric: took 6.892711ms for pod "kube-proxy-4fv72" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:59.051782   75137 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:59.058398   75137 pod_ready.go:93] pod "kube-scheduler-embed-certs-566991" in "kube-system" namespace has status "Ready":"True"
	I1204 21:16:59.058429   75137 pod_ready.go:82] duration metric: took 6.638291ms for pod "kube-scheduler-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:59.058444   75137 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:01.066575   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:16:57.826965   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:57.827542   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:57.827566   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:57.827491   76731 retry.go:31] will retry after 1.453756282s: waiting for machine to come up
	I1204 21:16:59.282497   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:59.283001   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:59.283025   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:59.282950   76731 retry.go:31] will retry after 1.921010852s: waiting for machine to come up
	I1204 21:17:01.205877   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:01.206359   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:17:01.206398   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:17:01.206301   76731 retry.go:31] will retry after 2.279555962s: waiting for machine to come up
	I1204 21:17:01.489204   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:01.989039   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:02.489053   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:02.988923   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:03.489839   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:03.989130   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:04.489603   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:04.989625   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:05.489951   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:05.989787   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:03.066938   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:05.565106   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:03.488557   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:03.488993   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:17:03.489064   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:17:03.488956   76731 retry.go:31] will retry after 2.80928606s: waiting for machine to come up
	I1204 21:17:06.300625   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:06.301069   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:17:06.301096   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:17:06.301025   76731 retry.go:31] will retry after 4.272897585s: waiting for machine to come up
	I1204 21:17:06.489826   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:06.989767   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:07.489954   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:07.989772   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:08.488905   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:08.989834   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:09.489780   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:09.989021   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:10.489348   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:10.989123   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:08.065690   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:10.566216   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:12.055921   75012 start.go:364] duration metric: took 57.468802465s to acquireMachinesLock for "no-preload-534766"
	I1204 21:17:12.055984   75012 start.go:96] Skipping create...Using existing machine configuration
	I1204 21:17:12.055996   75012 fix.go:54] fixHost starting: 
	I1204 21:17:12.056471   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:17:12.056520   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:17:12.074414   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46455
	I1204 21:17:12.074839   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:17:12.075295   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:17:12.075318   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:17:12.075670   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:17:12.075864   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:17:12.076055   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetState
	I1204 21:17:12.077496   75012 fix.go:112] recreateIfNeeded on no-preload-534766: state=Stopped err=<nil>
	I1204 21:17:12.077518   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	W1204 21:17:12.077683   75012 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 21:17:12.079503   75012 out.go:177] * Restarting existing kvm2 VM for "no-preload-534766" ...
	I1204 21:17:10.578907   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.579430   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Found IP for machine: 192.168.50.171
	I1204 21:17:10.579465   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Reserving static IP address...
	I1204 21:17:10.579482   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has current primary IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.579876   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-439360", mac: "52:54:00:ec:46:31", ip: "192.168.50.171"} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:10.579899   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | skip adding static IP to network mk-default-k8s-diff-port-439360 - found existing host DHCP lease matching {name: "default-k8s-diff-port-439360", mac: "52:54:00:ec:46:31", ip: "192.168.50.171"}
	I1204 21:17:10.579913   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Reserved static IP address: 192.168.50.171
	I1204 21:17:10.579923   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for SSH to be available...
	I1204 21:17:10.579933   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Getting to WaitForSSH function...
	I1204 21:17:10.582141   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.582536   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:10.582564   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.582763   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Using SSH client type: external
	I1204 21:17:10.582808   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa (-rw-------)
	I1204 21:17:10.582840   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.171 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 21:17:10.582851   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | About to run SSH command:
	I1204 21:17:10.582859   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | exit 0
	I1204 21:17:10.707352   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | SSH cmd err, output: <nil>: 
	I1204 21:17:10.707801   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetConfigRaw
	I1204 21:17:10.708495   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetIP
	I1204 21:17:10.710799   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.711127   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:10.711159   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.711348   75746 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/config.json ...
	I1204 21:17:10.711562   75746 machine.go:93] provisionDockerMachine start ...
	I1204 21:17:10.711579   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:17:10.711817   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:10.713971   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.714317   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:10.714344   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.714495   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:10.714683   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:10.714811   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:10.714964   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:10.715109   75746 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:10.715298   75746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.171 22 <nil> <nil>}
	I1204 21:17:10.715311   75746 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 21:17:10.823410   75746 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 21:17:10.823443   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetMachineName
	I1204 21:17:10.823718   75746 buildroot.go:166] provisioning hostname "default-k8s-diff-port-439360"
	I1204 21:17:10.823741   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetMachineName
	I1204 21:17:10.823955   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:10.826607   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.826953   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:10.826977   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.827140   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:10.827331   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:10.827533   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:10.827676   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:10.827852   75746 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:10.828068   75746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.171 22 <nil> <nil>}
	I1204 21:17:10.828084   75746 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-439360 && echo "default-k8s-diff-port-439360" | sudo tee /etc/hostname
	I1204 21:17:10.948599   75746 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-439360
	
	I1204 21:17:10.948633   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:10.951336   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.951719   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:10.951765   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.951905   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:10.952108   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:10.952276   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:10.952423   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:10.952570   75746 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:10.952753   75746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.171 22 <nil> <nil>}
	I1204 21:17:10.952777   75746 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-439360' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-439360/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-439360' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 21:17:11.072543   75746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 21:17:11.072580   75746 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 21:17:11.072611   75746 buildroot.go:174] setting up certificates
	I1204 21:17:11.072620   75746 provision.go:84] configureAuth start
	I1204 21:17:11.072629   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetMachineName
	I1204 21:17:11.072933   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetIP
	I1204 21:17:11.075443   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.075822   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:11.075868   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.075965   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:11.077957   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.078286   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:11.078319   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.078449   75746 provision.go:143] copyHostCerts
	I1204 21:17:11.078506   75746 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 21:17:11.078517   75746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 21:17:11.078571   75746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 21:17:11.078671   75746 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 21:17:11.078681   75746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 21:17:11.078702   75746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 21:17:11.078752   75746 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 21:17:11.078759   75746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 21:17:11.078776   75746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 21:17:11.078819   75746 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-439360 san=[127.0.0.1 192.168.50.171 default-k8s-diff-port-439360 localhost minikube]
	I1204 21:17:11.404256   75746 provision.go:177] copyRemoteCerts
	I1204 21:17:11.404320   75746 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 21:17:11.404348   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:11.406963   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.407316   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:11.407343   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.407542   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:11.407706   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:11.407881   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:11.407991   75746 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa Username:docker}
	I1204 21:17:11.493691   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 21:17:11.519867   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1204 21:17:11.542295   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1204 21:17:11.564775   75746 provision.go:87] duration metric: took 492.141737ms to configureAuth
	I1204 21:17:11.564801   75746 buildroot.go:189] setting minikube options for container-runtime
	I1204 21:17:11.564975   75746 config.go:182] Loaded profile config "default-k8s-diff-port-439360": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:17:11.565063   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:11.567990   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.568364   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:11.568394   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.568556   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:11.568780   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:11.568951   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:11.569102   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:11.569277   75746 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:11.569476   75746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.171 22 <nil> <nil>}
	I1204 21:17:11.569494   75746 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 21:17:11.809413   75746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 21:17:11.809462   75746 machine.go:96] duration metric: took 1.097886094s to provisionDockerMachine
	I1204 21:17:11.809482   75746 start.go:293] postStartSetup for "default-k8s-diff-port-439360" (driver="kvm2")
	I1204 21:17:11.809493   75746 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 21:17:11.809510   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:17:11.809913   75746 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 21:17:11.809954   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:11.812724   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.813137   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:11.813183   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.813276   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:11.813481   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:11.813659   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:11.813807   75746 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa Username:docker}
	I1204 21:17:11.901984   75746 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 21:17:11.906206   75746 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 21:17:11.906243   75746 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 21:17:11.906323   75746 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 21:17:11.906421   75746 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 21:17:11.906550   75746 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 21:17:11.915692   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:17:11.938378   75746 start.go:296] duration metric: took 128.880842ms for postStartSetup
	I1204 21:17:11.938425   75746 fix.go:56] duration metric: took 20.350760099s for fixHost
	I1204 21:17:11.938449   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:11.941283   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.941662   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:11.941683   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.941814   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:11.942015   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:11.942207   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:11.942314   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:11.942446   75746 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:11.942630   75746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.171 22 <nil> <nil>}
	I1204 21:17:11.942643   75746 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 21:17:12.055721   75746 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733347032.018698016
	
	I1204 21:17:12.055741   75746 fix.go:216] guest clock: 1733347032.018698016
	I1204 21:17:12.055761   75746 fix.go:229] Guest: 2024-12-04 21:17:12.018698016 +0000 UTC Remote: 2024-12-04 21:17:11.938429419 +0000 UTC m=+249.319395751 (delta=80.268597ms)
	I1204 21:17:12.055787   75746 fix.go:200] guest clock delta is within tolerance: 80.268597ms
	I1204 21:17:12.055794   75746 start.go:83] releasing machines lock for "default-k8s-diff-port-439360", held for 20.468177017s
	I1204 21:17:12.055827   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:17:12.056125   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetIP
	I1204 21:17:12.058787   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:12.059284   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:12.059312   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:12.059488   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:17:12.060013   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:17:12.060202   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:17:12.060290   75746 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 21:17:12.060342   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:12.060462   75746 ssh_runner.go:195] Run: cat /version.json
	I1204 21:17:12.060489   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:12.063286   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:12.063423   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:12.063682   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:12.063746   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:12.063837   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:12.063938   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:12.064005   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:12.064065   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:12.064231   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:12.064305   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:12.064403   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:12.064563   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:12.064588   75746 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa Username:docker}
	I1204 21:17:12.064695   75746 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa Username:docker}
	I1204 21:17:12.144087   75746 ssh_runner.go:195] Run: systemctl --version
	I1204 21:17:12.168976   75746 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 21:17:12.317913   75746 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 21:17:12.324234   75746 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 21:17:12.324327   75746 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 21:17:12.344571   75746 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 21:17:12.344601   75746 start.go:495] detecting cgroup driver to use...
	I1204 21:17:12.344674   75746 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 21:17:12.361232   75746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 21:17:12.375069   75746 docker.go:217] disabling cri-docker service (if available) ...
	I1204 21:17:12.375139   75746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 21:17:12.388561   75746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 21:17:12.404338   75746 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 21:17:12.527885   75746 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 21:17:12.716924   75746 docker.go:233] disabling docker service ...
	I1204 21:17:12.717011   75746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 21:17:12.735556   75746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 21:17:12.751951   75746 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 21:17:12.872456   75746 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 21:17:12.997321   75746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 21:17:13.012576   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 21:17:13.032524   75746 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 21:17:13.032590   75746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:13.042551   75746 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 21:17:13.042612   75746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:13.052819   75746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:13.063234   75746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:13.074023   75746 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 21:17:13.084457   75746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:13.094614   75746 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:13.112649   75746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:13.122898   75746 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 21:17:13.132312   75746 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 21:17:13.132357   75746 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 21:17:13.145174   75746 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 21:17:13.154748   75746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:17:13.280272   75746 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 21:17:13.375481   75746 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 21:17:13.375579   75746 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 21:17:13.380388   75746 start.go:563] Will wait 60s for crictl version
	I1204 21:17:13.380450   75746 ssh_runner.go:195] Run: which crictl
	I1204 21:17:13.384263   75746 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 21:17:13.426552   75746 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 21:17:13.426644   75746 ssh_runner.go:195] Run: crio --version
	I1204 21:17:13.464906   75746 ssh_runner.go:195] Run: crio --version
	I1204 21:17:13.493254   75746 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 21:17:11.488961   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:11.989692   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:12.489695   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:12.989533   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:13.489139   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:13.989580   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:14.488981   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:14.989089   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:15.489662   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:15.989301   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:13.069008   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:15.565897   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:12.080766   75012 main.go:141] libmachine: (no-preload-534766) Calling .Start
	I1204 21:17:12.080951   75012 main.go:141] libmachine: (no-preload-534766) Ensuring networks are active...
	I1204 21:17:12.081751   75012 main.go:141] libmachine: (no-preload-534766) Ensuring network default is active
	I1204 21:17:12.082112   75012 main.go:141] libmachine: (no-preload-534766) Ensuring network mk-no-preload-534766 is active
	I1204 21:17:12.082532   75012 main.go:141] libmachine: (no-preload-534766) Getting domain xml...
	I1204 21:17:12.083134   75012 main.go:141] libmachine: (no-preload-534766) Creating domain...
	I1204 21:17:13.416717   75012 main.go:141] libmachine: (no-preload-534766) Waiting to get IP...
	I1204 21:17:13.417831   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:13.418295   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:13.418381   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:13.418275   76934 retry.go:31] will retry after 213.310094ms: waiting for machine to come up
	I1204 21:17:13.632755   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:13.633250   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:13.633283   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:13.633181   76934 retry.go:31] will retry after 325.003683ms: waiting for machine to come up
	I1204 21:17:13.959863   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:13.960467   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:13.960503   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:13.960377   76934 retry.go:31] will retry after 392.851447ms: waiting for machine to come up
	I1204 21:17:14.355246   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:14.355720   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:14.355748   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:14.355681   76934 retry.go:31] will retry after 378.518603ms: waiting for machine to come up
	I1204 21:17:14.736283   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:14.737039   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:14.737105   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:14.737017   76934 retry.go:31] will retry after 536.132786ms: waiting for machine to come up
	I1204 21:17:15.274405   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:15.274929   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:15.274962   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:15.274891   76934 retry.go:31] will retry after 606.890197ms: waiting for machine to come up
	I1204 21:17:15.884088   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:15.884700   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:15.884745   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:15.884632   76934 retry.go:31] will retry after 1.088992333s: waiting for machine to come up
	I1204 21:17:16.975049   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:16.975514   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:16.975545   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:16.975458   76934 retry.go:31] will retry after 925.830658ms: waiting for machine to come up
	I1204 21:17:13.494527   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetIP
	I1204 21:17:13.498111   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:13.498524   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:13.498560   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:13.498792   75746 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1204 21:17:13.503083   75746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:17:13.518900   75746 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-439360 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-439360 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.171 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 21:17:13.519043   75746 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 21:17:13.519134   75746 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:17:13.562529   75746 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1204 21:17:13.562643   75746 ssh_runner.go:195] Run: which lz4
	I1204 21:17:13.566970   75746 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1204 21:17:13.571398   75746 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1204 21:17:13.571447   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1204 21:17:14.863136   75746 crio.go:462] duration metric: took 1.296192361s to copy over tarball
	I1204 21:17:14.863225   75746 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1204 21:17:17.017949   75746 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.154693143s)
	I1204 21:17:17.017978   75746 crio.go:469] duration metric: took 2.154810491s to extract the tarball
	I1204 21:17:17.017988   75746 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1204 21:17:17.053935   75746 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:17:17.099773   75746 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 21:17:17.099800   75746 cache_images.go:84] Images are preloaded, skipping loading
	I1204 21:17:17.099809   75746 kubeadm.go:934] updating node { 192.168.50.171 8444 v1.31.2 crio true true} ...
	I1204 21:17:17.099909   75746 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-439360 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.171
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-439360 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 21:17:17.099973   75746 ssh_runner.go:195] Run: crio config
	I1204 21:17:17.145449   75746 cni.go:84] Creating CNI manager for ""
	I1204 21:17:17.145481   75746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:17:17.145493   75746 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 21:17:17.145525   75746 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.171 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-439360 NodeName:default-k8s-diff-port-439360 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.171"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.171 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 21:17:17.145689   75746 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.171
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-439360"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.171"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.171"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1204 21:17:17.145761   75746 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 21:17:17.156960   75746 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 21:17:17.157034   75746 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 21:17:17.169101   75746 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1204 21:17:17.186548   75746 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 21:17:17.203582   75746 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I1204 21:17:17.220406   75746 ssh_runner.go:195] Run: grep 192.168.50.171	control-plane.minikube.internal$ /etc/hosts
	I1204 21:17:17.224281   75746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.171	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:17:17.237759   75746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:17:17.368925   75746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:17:17.389017   75746 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360 for IP: 192.168.50.171
	I1204 21:17:17.389042   75746 certs.go:194] generating shared ca certs ...
	I1204 21:17:17.389062   75746 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:17:17.389231   75746 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 21:17:17.389302   75746 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 21:17:17.389314   75746 certs.go:256] generating profile certs ...
	I1204 21:17:17.389411   75746 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/client.key
	I1204 21:17:17.389507   75746 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/apiserver.key.b9e485ac
	I1204 21:17:17.389583   75746 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/proxy-client.key
	I1204 21:17:17.389747   75746 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 21:17:17.389784   75746 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 21:17:17.389793   75746 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 21:17:17.389820   75746 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 21:17:17.389842   75746 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 21:17:17.389862   75746 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 21:17:17.389899   75746 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:17:17.390549   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 21:17:17.427087   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 21:17:17.456331   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 21:17:17.481876   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 21:17:17.511173   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1204 21:17:17.535825   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1204 21:17:17.559475   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 21:17:17.585825   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 21:17:17.611495   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 21:17:17.634425   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 21:17:16.489912   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:16.989712   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:17.489508   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:17.989874   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:18.489589   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:18.989133   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:19.489001   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:19.989088   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:20.489170   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:20.989135   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:17.566756   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:20.064248   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:17.903583   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:17.904083   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:17.904130   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:17.904041   76934 retry.go:31] will retry after 1.281115457s: waiting for machine to come up
	I1204 21:17:19.187069   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:19.187625   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:19.187648   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:19.187594   76934 retry.go:31] will retry after 2.116897616s: waiting for machine to come up
	I1204 21:17:21.307136   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:21.307702   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:21.307738   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:21.307639   76934 retry.go:31] will retry after 1.769079667s: waiting for machine to come up
	I1204 21:17:17.658253   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 21:17:17.680554   75746 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 21:17:17.696563   75746 ssh_runner.go:195] Run: openssl version
	I1204 21:17:17.701997   75746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 21:17:17.711909   75746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 21:17:17.716111   75746 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 21:17:17.716163   75746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 21:17:17.721829   75746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 21:17:17.732808   75746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 21:17:17.742766   75746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:17:17.746881   75746 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:17:17.746939   75746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:17:17.752221   75746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 21:17:17.761915   75746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 21:17:17.771473   75746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 21:17:17.775476   75746 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 21:17:17.775527   75746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 21:17:17.780671   75746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 21:17:17.790179   75746 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 21:17:17.794246   75746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 21:17:17.799753   75746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 21:17:17.805228   75746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 21:17:17.810634   75746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 21:17:17.815912   75746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 21:17:17.821125   75746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1204 21:17:17.826717   75746 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-439360 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-439360 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.171 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:17:17.826802   75746 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 21:17:17.826852   75746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:17:17.863070   75746 cri.go:89] found id: ""
	I1204 21:17:17.863157   75746 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 21:17:17.872649   75746 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1204 21:17:17.872668   75746 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1204 21:17:17.872706   75746 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1204 21:17:17.881981   75746 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1204 21:17:17.883029   75746 kubeconfig.go:125] found "default-k8s-diff-port-439360" server: "https://192.168.50.171:8444"
	I1204 21:17:17.885369   75746 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1204 21:17:17.894730   75746 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.171
	I1204 21:17:17.894765   75746 kubeadm.go:1160] stopping kube-system containers ...
	I1204 21:17:17.894780   75746 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1204 21:17:17.894845   75746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:17:17.942493   75746 cri.go:89] found id: ""
	I1204 21:17:17.942588   75746 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1204 21:17:17.959606   75746 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:17:17.968768   75746 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:17:17.968793   75746 kubeadm.go:157] found existing configuration files:
	
	I1204 21:17:17.968850   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1204 21:17:17.977375   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:17:17.977437   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:17:17.986188   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1204 21:17:17.995409   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:17:17.995464   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:17:18.004396   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1204 21:17:18.012964   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:17:18.013033   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:17:18.021927   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1204 21:17:18.030158   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:17:18.030212   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
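Note: the grep/rm pairs above implement a stale-kubeconfig cleanup: each conf file under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, otherwise it is removed so the following "kubeadm init phase kubeconfig" regenerates it. A shell sketch of the same pattern (not minikube's actual Go code, illustration only):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8444" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done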
	I1204 21:17:18.038704   75746 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:17:18.047518   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:18.157472   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:18.779212   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:18.992111   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:19.080195   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:19.185206   75746 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:17:19.185296   75746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:19.686192   75746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:20.186010   75746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:20.685422   75746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:21.185548   75746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:21.221082   75746 api_server.go:72] duration metric: took 2.035875276s to wait for apiserver process to appear ...
	I1204 21:17:21.221111   75746 api_server.go:88] waiting for apiserver healthz status ...
	I1204 21:17:21.221130   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:17:21.221582   75746 api_server.go:269] stopped: https://192.168.50.171:8444/healthz: Get "https://192.168.50.171:8444/healthz": dial tcp 192.168.50.171:8444: connect: connection refused
	I1204 21:17:21.722031   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:17:24.428658   75746 api_server.go:279] https://192.168.50.171:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1204 21:17:24.428710   75746 api_server.go:103] status: https://192.168.50.171:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1204 21:17:24.428730   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:17:24.469367   75746 api_server.go:279] https://192.168.50.171:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1204 21:17:24.469398   75746 api_server.go:103] status: https://192.168.50.171:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1204 21:17:24.721854   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:17:24.728276   75746 api_server.go:279] https://192.168.50.171:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:17:24.728306   75746 api_server.go:103] status: https://192.168.50.171:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:17:25.221658   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:17:25.226223   75746 api_server.go:279] https://192.168.50.171:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:17:25.226274   75746 api_server.go:103] status: https://192.168.50.171:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:17:25.722014   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:17:25.727726   75746 api_server.go:279] https://192.168.50.171:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:17:25.727764   75746 api_server.go:103] status: https://192.168.50.171:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:17:26.221331   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:17:26.226659   75746 api_server.go:279] https://192.168.50.171:8444/healthz returned 200:
	ok
	I1204 21:17:26.234549   75746 api_server.go:141] control plane version: v1.31.2
	I1204 21:17:26.234585   75746 api_server.go:131] duration metric: took 5.013466041s to wait for apiserver health ...
	I1204 21:17:26.234596   75746 cni.go:84] Creating CNI manager for ""
	I1204 21:17:26.234605   75746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:17:26.236522   75746 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
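Note: the /healthz polling above can be reproduced by hand against the same endpoint; anonymous requests may be rejected with 403 until the RBAC bootstrap roles exist, and whenever any check fails the endpoint returns the per-check "[+]"/"[-]" list seen in the 500 responses. Illustrative probe (-k because the apiserver presents minikube's self-signed CA):

    curl -sk https://192.168.50.171:8444/healthz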
	I1204 21:17:21.489414   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:21.989078   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:22.488990   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:22.989053   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:23.489867   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:23.989164   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:24.489512   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:24.989912   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:25.489849   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:25.988925   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:22.066101   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:24.067073   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:26.565954   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:23.077909   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:23.078294   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:23.078332   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:23.078234   76934 retry.go:31] will retry after 2.199950593s: waiting for machine to come up
	I1204 21:17:25.280397   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:25.280766   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:25.280794   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:25.280713   76934 retry.go:31] will retry after 3.443879968s: waiting for machine to come up
	I1204 21:17:26.237773   75746 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 21:17:26.260416   75746 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1204 21:17:26.287032   75746 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 21:17:26.301607   75746 system_pods.go:59] 8 kube-system pods found
	I1204 21:17:26.301658   75746 system_pods.go:61] "coredns-7c65d6cfc9-8bn89" [ff71708b-97a0-44fd-8cc4-26a36e93919a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1204 21:17:26.301671   75746 system_pods.go:61] "etcd-default-k8s-diff-port-439360" [38ae5f77-f57b-4024-a2ba-1e83e08c303b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1204 21:17:26.301682   75746 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-439360" [47616d96-a85b-47d8-a944-1da01cf7bef6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1204 21:17:26.301693   75746 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-439360" [766c13c3-3bcb-4775-80cf-608e9b207a10] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1204 21:17:26.301703   75746 system_pods.go:61] "kube-proxy-tn2xl" [8485df8b-b984-45c1-8efc-3e910028071a] Running
	I1204 21:17:26.301713   75746 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-439360" [654e74eb-878c-4680-8b68-13bb788a781e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1204 21:17:26.301725   75746 system_pods.go:61] "metrics-server-6867b74b74-lbx5p" [ca850081-0045-4637-b4ac-262ad00ba6d2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:17:26.301731   75746 system_pods.go:61] "storage-provisioner" [b2c9285c-35f2-43b4-8468-17ecef9fe8fc] Running
	I1204 21:17:26.301742   75746 system_pods.go:74] duration metric: took 14.680372ms to wait for pod list to return data ...
	I1204 21:17:26.301756   75746 node_conditions.go:102] verifying NodePressure condition ...
	I1204 21:17:26.305647   75746 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 21:17:26.305680   75746 node_conditions.go:123] node cpu capacity is 2
	I1204 21:17:26.305695   75746 node_conditions.go:105] duration metric: took 3.930691ms to run NodePressure ...
	I1204 21:17:26.305716   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:26.563972   75746 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1204 21:17:26.573253   75746 kubeadm.go:739] kubelet initialised
	I1204 21:17:26.573273   75746 kubeadm.go:740] duration metric: took 9.267719ms waiting for restarted kubelet to initialise ...
	I1204 21:17:26.573281   75746 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:17:26.577507   75746 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-8bn89" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:26.489765   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:26.989037   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:27.489507   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:27.989848   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:28.489237   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:28.989067   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:29.488963   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:29.989855   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:30.489905   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:30.989109   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:29.065212   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:31.065889   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:28.726031   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:28.726400   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:28.726452   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:28.726364   76934 retry.go:31] will retry after 3.566067517s: waiting for machine to come up
	I1204 21:17:28.585182   75746 pod_ready.go:103] pod "coredns-7c65d6cfc9-8bn89" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:31.084886   75746 pod_ready.go:103] pod "coredns-7c65d6cfc9-8bn89" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:32.294584   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.295040   75012 main.go:141] libmachine: (no-preload-534766) Found IP for machine: 192.168.61.174
	I1204 21:17:32.295074   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has current primary IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.295086   75012 main.go:141] libmachine: (no-preload-534766) Reserving static IP address...
	I1204 21:17:32.295538   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "no-preload-534766", mac: "52:54:00:85:f1:d6", ip: "192.168.61.174"} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.295572   75012 main.go:141] libmachine: (no-preload-534766) Reserved static IP address: 192.168.61.174
	I1204 21:17:32.295590   75012 main.go:141] libmachine: (no-preload-534766) DBG | skip adding static IP to network mk-no-preload-534766 - found existing host DHCP lease matching {name: "no-preload-534766", mac: "52:54:00:85:f1:d6", ip: "192.168.61.174"}
	I1204 21:17:32.295607   75012 main.go:141] libmachine: (no-preload-534766) DBG | Getting to WaitForSSH function...
	I1204 21:17:32.295621   75012 main.go:141] libmachine: (no-preload-534766) Waiting for SSH to be available...
	I1204 21:17:32.297607   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.298000   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.298039   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.298174   75012 main.go:141] libmachine: (no-preload-534766) DBG | Using SSH client type: external
	I1204 21:17:32.298220   75012 main.go:141] libmachine: (no-preload-534766) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa (-rw-------)
	I1204 21:17:32.298259   75012 main.go:141] libmachine: (no-preload-534766) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 21:17:32.298278   75012 main.go:141] libmachine: (no-preload-534766) DBG | About to run SSH command:
	I1204 21:17:32.298286   75012 main.go:141] libmachine: (no-preload-534766) DBG | exit 0
	I1204 21:17:32.423157   75012 main.go:141] libmachine: (no-preload-534766) DBG | SSH cmd err, output: <nil>: 
	I1204 21:17:32.423564   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetConfigRaw
	I1204 21:17:32.424162   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetIP
	I1204 21:17:32.426685   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.427056   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.427078   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.427325   75012 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/config.json ...
	I1204 21:17:32.427589   75012 machine.go:93] provisionDockerMachine start ...
	I1204 21:17:32.427610   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:17:32.427837   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:32.430261   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.430551   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.430580   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.430724   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:32.430893   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:32.431039   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:32.431148   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:32.431327   75012 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:32.431548   75012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I1204 21:17:32.431564   75012 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 21:17:32.539672   75012 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 21:17:32.539721   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetMachineName
	I1204 21:17:32.539983   75012 buildroot.go:166] provisioning hostname "no-preload-534766"
	I1204 21:17:32.540014   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetMachineName
	I1204 21:17:32.540234   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:32.543046   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.543438   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.543488   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.543664   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:32.543853   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:32.544035   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:32.544158   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:32.544331   75012 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:32.544547   75012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I1204 21:17:32.544567   75012 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-534766 && echo "no-preload-534766" | sudo tee /etc/hostname
	I1204 21:17:32.665569   75012 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-534766
	
	I1204 21:17:32.665609   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:32.668482   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.668881   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.668908   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.669081   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:32.669297   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:32.669479   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:32.669634   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:32.669788   75012 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:32.669945   75012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I1204 21:17:32.669961   75012 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-534766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-534766/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-534766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 21:17:32.789462   75012 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 21:17:32.789510   75012 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 21:17:32.789535   75012 buildroot.go:174] setting up certificates
	I1204 21:17:32.789551   75012 provision.go:84] configureAuth start
	I1204 21:17:32.789568   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetMachineName
	I1204 21:17:32.789878   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetIP
	I1204 21:17:32.792564   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.792886   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.792919   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.793108   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:32.795197   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.795534   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.795569   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.795751   75012 provision.go:143] copyHostCerts
	I1204 21:17:32.795821   75012 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 21:17:32.795835   75012 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 21:17:32.795931   75012 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 21:17:32.796102   75012 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 21:17:32.796118   75012 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 21:17:32.796182   75012 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 21:17:32.796269   75012 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 21:17:32.796278   75012 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 21:17:32.796300   75012 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 21:17:32.796361   75012 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.no-preload-534766 san=[127.0.0.1 192.168.61.174 localhost minikube no-preload-534766]
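Note: the server cert generated here has to carry every name and address the machine can be reached by (the "san=[...]" set above). One way to confirm the SANs that actually ended up in such a cert (path taken from the log, illustrative only):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem \
      | grep -A1 "Subject Alternative Name"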
	I1204 21:17:32.933050   75012 provision.go:177] copyRemoteCerts
	I1204 21:17:32.933117   75012 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 21:17:32.933146   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:32.936027   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.936384   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.936415   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.936604   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:32.936796   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:32.936952   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:32.937127   75012 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:17:33.022226   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 21:17:33.045693   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1204 21:17:33.069396   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 21:17:33.094926   75012 provision.go:87] duration metric: took 305.358907ms to configureAuth
	I1204 21:17:33.094960   75012 buildroot.go:189] setting minikube options for container-runtime
	I1204 21:17:33.095150   75012 config.go:182] Loaded profile config "no-preload-534766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:17:33.095239   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:33.098446   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.098990   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:33.099019   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.099254   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:33.099504   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:33.099655   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:33.099789   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:33.099921   75012 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:33.100074   75012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I1204 21:17:33.100091   75012 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 21:17:33.323107   75012 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 21:17:33.323144   75012 machine.go:96] duration metric: took 895.535234ms to provisionDockerMachine
	I1204 21:17:33.323159   75012 start.go:293] postStartSetup for "no-preload-534766" (driver="kvm2")
	I1204 21:17:33.323169   75012 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 21:17:33.323185   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:17:33.323531   75012 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 21:17:33.323564   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:33.326678   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.327086   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:33.327119   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.327429   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:33.327661   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:33.327827   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:33.327994   75012 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:17:33.411005   75012 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 21:17:33.415701   75012 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 21:17:33.415730   75012 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 21:17:33.415806   75012 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 21:17:33.415879   75012 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 21:17:33.415968   75012 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 21:17:33.425560   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:17:33.450288   75012 start.go:296] duration metric: took 127.116826ms for postStartSetup
	I1204 21:17:33.450330   75012 fix.go:56] duration metric: took 21.394334199s for fixHost
	I1204 21:17:33.450351   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:33.453067   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.453416   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:33.453457   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.453641   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:33.453860   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:33.454049   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:33.454228   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:33.454423   75012 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:33.454621   75012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I1204 21:17:33.454634   75012 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 21:17:33.568277   75012 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733347053.524303417
	
	I1204 21:17:33.568303   75012 fix.go:216] guest clock: 1733347053.524303417
	I1204 21:17:33.568314   75012 fix.go:229] Guest: 2024-12-04 21:17:33.524303417 +0000 UTC Remote: 2024-12-04 21:17:33.450335419 +0000 UTC m=+361.455227272 (delta=73.967998ms)
	I1204 21:17:33.568360   75012 fix.go:200] guest clock delta is within tolerance: 73.967998ms
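Note: the delta reported here is simply the guest's "date +%s.%N" reading minus the host-side timestamp recorded for the same moment: 33.524303417 s - 33.450335419 s = 0.073967998 s, roughly 74 ms, which is inside the tolerance, so the guest clock is left untouched.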
	I1204 21:17:33.568372   75012 start.go:83] releasing machines lock for "no-preload-534766", held for 21.512415434s
	I1204 21:17:33.568406   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:17:33.568691   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetIP
	I1204 21:17:33.571152   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.571565   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:33.571594   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.571744   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:17:33.572271   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:17:33.572456   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:17:33.572549   75012 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 21:17:33.572593   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:33.572689   75012 ssh_runner.go:195] Run: cat /version.json
	I1204 21:17:33.572717   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:33.575346   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.575691   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.575743   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:33.575773   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.575888   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:33.576065   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:33.576144   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:33.576173   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.576219   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:33.576323   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:33.576391   75012 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:17:33.576501   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:33.576650   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:33.576791   75012 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:17:33.683451   75012 ssh_runner.go:195] Run: systemctl --version
	I1204 21:17:33.689041   75012 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 21:17:33.833862   75012 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 21:17:33.839637   75012 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 21:17:33.839717   75012 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 21:17:33.858207   75012 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 21:17:33.858232   75012 start.go:495] detecting cgroup driver to use...
	I1204 21:17:33.858306   75012 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 21:17:33.876794   75012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 21:17:33.891207   75012 docker.go:217] disabling cri-docker service (if available) ...
	I1204 21:17:33.891280   75012 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 21:17:33.906769   75012 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 21:17:33.926433   75012 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 21:17:34.050681   75012 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 21:17:34.229329   75012 docker.go:233] disabling docker service ...
	I1204 21:17:34.229403   75012 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 21:17:34.243833   75012 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 21:17:34.256619   75012 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 21:17:34.387148   75012 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 21:17:34.522221   75012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 21:17:34.535505   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 21:17:34.553348   75012 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 21:17:34.553423   75012 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:34.564532   75012 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 21:17:34.564595   75012 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:34.574752   75012 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:34.584434   75012 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:34.594161   75012 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 21:17:34.604306   75012 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:34.615504   75012 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:34.633185   75012 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:34.643936   75012 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 21:17:34.653047   75012 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 21:17:34.653122   75012 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 21:17:34.666172   75012 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
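	The lines above show the netfilter probe failing ("cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables") and minikube falling back to modprobe br_netfilter before enabling IPv4 forwarding. A rough local sketch of that fallback, assuming the sysctl and modprobe binaries are on PATH (minikube runs the equivalent commands over SSH on the guest):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// ensureBridgeNetfilter mirrors the fallback in the log: if the
	// bridge-nf-call-iptables sysctl is missing, load br_netfilter,
	// then make sure IPv4 forwarding is enabled.
	func ensureBridgeNetfilter() error {
		if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
			// sysctl key absent: the br_netfilter module is not loaded yet.
			if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
				return fmt.Errorf("modprobe br_netfilter: %w", err)
			}
		}
		return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644)
	}

	func main() {
		if err := ensureBridgeNetfilter(); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}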
	I1204 21:17:34.675093   75012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:17:34.805178   75012 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 21:17:34.889962   75012 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 21:17:34.890037   75012 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 21:17:34.894648   75012 start.go:563] Will wait 60s for crictl version
	I1204 21:17:34.894699   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:34.898103   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 21:17:34.937886   75012 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 21:17:34.937962   75012 ssh_runner.go:195] Run: crio --version
	I1204 21:17:34.964363   75012 ssh_runner.go:195] Run: crio --version
	I1204 21:17:34.993490   75012 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 21:17:31.489534   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:31.989033   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:32.489372   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:32.989005   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:33.489869   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:33.989236   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:34.489170   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:34.989059   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:35.489909   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:35.989870   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:33.066070   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:35.066291   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:34.994846   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetIP
	I1204 21:17:34.998235   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:34.998720   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:34.998753   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:34.999035   75012 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1204 21:17:35.003082   75012 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
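	The /etc/hosts update above is a grep -v / echo / cp pipeline: drop any stale host.minikube.internal entry, then append the current one. A small sketch of the same idea in Go, writing to an illustrative local path rather than the guest's /etc/hosts:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// upsertHostEntry removes any existing line for the given hostname and
	// appends a fresh "IP<TAB>name" entry, matching the pipeline in the log.
	func upsertHostEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil && !os.IsNotExist(err) {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if line != "" && !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		_ = upsertHostEntry("hosts", "192.168.61.1", "host.minikube.internal")
	}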
	I1204 21:17:35.015163   75012 kubeadm.go:883] updating cluster {Name:no-preload-534766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-534766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.174 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 21:17:35.015286   75012 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 21:17:35.015331   75012 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:17:35.049054   75012 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1204 21:17:35.049081   75012 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1204 21:17:35.049156   75012 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:17:35.049214   75012 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1204 21:17:35.049239   75012 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1204 21:17:35.049291   75012 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:17:35.049172   75012 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:17:35.049217   75012 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:17:35.049159   75012 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:17:35.049220   75012 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:17:35.050579   75012 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:17:35.050648   75012 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1204 21:17:35.050659   75012 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:17:35.050667   75012 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:17:35.050676   75012 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1204 21:17:35.050741   75012 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:17:35.050757   75012 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:17:35.050874   75012 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:17:35.203766   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:17:35.211645   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1204 21:17:35.220184   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:17:35.223055   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:17:35.227332   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:17:35.232234   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1204 21:17:35.242447   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:17:35.298624   75012 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1204 21:17:35.298688   75012 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:17:35.298744   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:35.319397   75012 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1204 21:17:35.319447   75012 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1204 21:17:35.319501   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:35.390893   75012 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1204 21:17:35.390915   75012 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1204 21:17:35.390947   75012 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:17:35.390948   75012 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:17:35.390956   75012 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1204 21:17:35.390979   75012 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:17:35.390999   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:35.391022   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:35.390999   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:35.484125   75012 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1204 21:17:35.484169   75012 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:17:35.484201   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:17:35.484217   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:35.484271   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1204 21:17:35.484305   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:17:35.484330   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:17:35.484396   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:17:35.591277   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:17:35.591397   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:17:35.591450   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:17:35.595733   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1204 21:17:35.595762   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:17:35.595916   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:17:35.723710   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:17:35.723734   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:17:35.723780   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:17:35.723829   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1204 21:17:35.723876   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:17:35.726724   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:17:35.825238   75012 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1204 21:17:35.825353   75012 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1204 21:17:35.852024   75012 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1204 21:17:35.852035   75012 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1204 21:17:35.852146   75012 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1204 21:17:35.852173   75012 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1204 21:17:35.853696   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:17:35.853769   75012 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1204 21:17:35.853821   75012 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1204 21:17:35.853832   75012 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1204 21:17:35.853856   75012 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1204 21:17:35.853865   75012 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1204 21:17:35.853776   75012 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1204 21:17:35.853945   75012 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1204 21:17:35.857231   75012 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1204 21:17:35.858662   75012 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1204 21:17:36.032100   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:17:33.087169   75746 pod_ready.go:93] pod "coredns-7c65d6cfc9-8bn89" in "kube-system" namespace has status "Ready":"True"
	I1204 21:17:33.087197   75746 pod_ready.go:82] duration metric: took 6.509664084s for pod "coredns-7c65d6cfc9-8bn89" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:33.087211   75746 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:33.093283   75746 pod_ready.go:93] pod "etcd-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:17:33.093303   75746 pod_ready.go:82] duration metric: took 6.085079ms for pod "etcd-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:33.093312   75746 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:33.600666   75746 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:17:33.600693   75746 pod_ready.go:82] duration metric: took 507.373672ms for pod "kube-apiserver-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:33.600709   75746 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:35.607575   75746 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:37.608228   75746 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"False"
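	The pod_ready.go lines in this block poll until each pod's Ready condition turns True. A stripped-down sketch of that check; the real code works on client-go Pod objects, so the struct below is only an illustrative stand-in:

	package main

	import "fmt"

	// podCondition is a minimal stand-in for a corev1 pod condition,
	// enough to show what "has status Ready:True" means in the log.
	type podCondition struct {
		Type   string
		Status string
	}

	func podIsReady(conds []podCondition) bool {
		for _, c := range conds {
			if c.Type == "Ready" && c.Status == "True" {
				return true
			}
		}
		return false
	}

	func main() {
		conds := []podCondition{{Type: "PodScheduled", Status: "True"}, {Type: "Ready", Status: "False"}}
		fmt.Println(podIsReady(conds)) // false, so the poller logs Ready:"False" and retries
	}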
	I1204 21:17:36.489267   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:36.988973   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:37.489585   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:37.989309   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:38.489371   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:38.989360   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:39.489789   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:39.988900   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:40.489286   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:40.989034   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:37.564796   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:39.566599   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:38.344308   75012 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.490341001s)
	I1204 21:17:38.344349   75012 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1204 21:17:38.344365   75012 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.490487312s)
	I1204 21:17:38.344390   75012 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1204 21:17:38.344412   75012 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1204 21:17:38.344420   75012 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.490542246s)
	I1204 21:17:38.344448   75012 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1204 21:17:38.344455   75012 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1204 21:17:38.344374   75012 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2: (2.490653029s)
	I1204 21:17:38.344496   75012 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1204 21:17:38.344525   75012 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.312392686s)
	I1204 21:17:38.344565   75012 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1204 21:17:38.344602   75012 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:17:38.344638   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:38.344575   75012 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1204 21:17:38.350960   75012 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1204 21:17:40.219155   75012 ssh_runner.go:235] Completed: which crictl: (1.874490212s)
	I1204 21:17:40.219189   75012 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.874713743s)
	I1204 21:17:40.219214   75012 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1204 21:17:40.219246   75012 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1204 21:17:40.219318   75012 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1204 21:17:40.219273   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:17:40.254321   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:17:41.684466   75012 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.465119385s)
	I1204 21:17:41.684505   75012 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1204 21:17:41.684528   75012 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1204 21:17:41.684528   75012 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.430174579s)
	I1204 21:17:41.684583   75012 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1204 21:17:41.684591   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:17:41.722891   75012 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1204 21:17:41.723015   75012 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1204 21:17:39.608290   75746 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:40.107708   75746 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:17:40.107734   75746 pod_ready.go:82] duration metric: took 6.507016831s for pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:40.107748   75746 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-tn2xl" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:40.112808   75746 pod_ready.go:93] pod "kube-proxy-tn2xl" in "kube-system" namespace has status "Ready":"True"
	I1204 21:17:40.112828   75746 pod_ready.go:82] duration metric: took 5.070603ms for pod "kube-proxy-tn2xl" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:40.112839   75746 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:40.117288   75746 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:17:40.117310   75746 pod_ready.go:82] duration metric: took 4.462772ms for pod "kube-scheduler-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:40.117322   75746 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:42.124203   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:41.489491   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:41.989889   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:42.489098   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:42.988954   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:43.489592   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:43.989849   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:44.489924   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:44.989734   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:45.489097   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:45.988947   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:42.065722   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:44.564691   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:46.565747   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:45.306832   75012 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.583796373s)
	I1204 21:17:45.306872   75012 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1204 21:17:45.306945   75012 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.622338759s)
	I1204 21:17:45.306971   75012 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1204 21:17:45.307000   75012 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1204 21:17:45.307064   75012 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1204 21:17:44.624419   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:47.123760   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:46.489924   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:46.989100   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:47.489931   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:47.988925   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:48.489244   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:48.989937   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:49.489048   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:49.989699   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:50.489518   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:50.989032   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:49.065268   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:51.565541   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:47.163771   75012 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.856684542s)
	I1204 21:17:47.163798   75012 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1204 21:17:47.163823   75012 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1204 21:17:47.163885   75012 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1204 21:17:49.222699   75012 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.058784634s)
	I1204 21:17:49.222741   75012 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1204 21:17:49.222773   75012 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1204 21:17:49.222826   75012 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1204 21:17:49.870242   75012 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1204 21:17:49.870292   75012 cache_images.go:123] Successfully loaded all cached images
	I1204 21:17:49.870302   75012 cache_images.go:92] duration metric: took 14.821207564s to LoadCachedImages
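	The preceding block is the per-image cache flow: inspect the image in the runtime, mark it "needs transfer" and remove it if the hash does not match, then podman load the cached tarball. A simplified sketch of one iteration, with an assumed tarball path; this is not minikube's actual cache_images implementation:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// loadCachedImage loads an image tarball only when the runtime does not
	// already report the image, mirroring the inspect/load pairs in the log.
	func loadCachedImage(image, tarball string) error {
		if exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run() == nil {
			return nil // already present, nothing to transfer
		}
		if out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput(); err != nil {
			return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
		}
		return nil
	}

	func main() {
		_ = loadCachedImage("registry.k8s.io/kube-apiserver:v1.31.2",
			"/var/lib/minikube/images/kube-apiserver_v1.31.2")
	}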
	I1204 21:17:49.870320   75012 kubeadm.go:934] updating node { 192.168.61.174 8443 v1.31.2 crio true true} ...
	I1204 21:17:49.870483   75012 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-534766 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-534766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 21:17:49.870571   75012 ssh_runner.go:195] Run: crio config
	I1204 21:17:49.925276   75012 cni.go:84] Creating CNI manager for ""
	I1204 21:17:49.925298   75012 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:17:49.925308   75012 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 21:17:49.925326   75012 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.174 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-534766 NodeName:no-preload-534766 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 21:17:49.925440   75012 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-534766"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.174"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.174"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
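	The kubeadm config above is generated from the cluster values reported earlier (node IP 192.168.61.174, port 8443, CRI-O socket). A minimal text/template sketch that renders just the InitConfiguration fragment; the template itself is illustrative, not minikube's own:

	package main

	import (
		"os"
		"text/template"
	)

	// initCfg is a hypothetical template covering only the top of the
	// generated kubeadm.yaml shown in the log.
	const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.NodeIP}}
	  bindPort: {{.Port}}
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "{{.Name}}"
	`

	func main() {
		t := template.Must(template.New("init").Parse(initCfg))
		_ = t.Execute(os.Stdout, struct {
			NodeIP string
			Port   int
			Name   string
		}{"192.168.61.174", 8443, "no-preload-534766"})
	}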
	
	I1204 21:17:49.925505   75012 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 21:17:49.934691   75012 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 21:17:49.934766   75012 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 21:17:49.942998   75012 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1204 21:17:49.958605   75012 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 21:17:49.973770   75012 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1204 21:17:49.989037   75012 ssh_runner.go:195] Run: grep 192.168.61.174	control-plane.minikube.internal$ /etc/hosts
	I1204 21:17:49.992788   75012 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:17:50.004011   75012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:17:50.118056   75012 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:17:50.136689   75012 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766 for IP: 192.168.61.174
	I1204 21:17:50.136717   75012 certs.go:194] generating shared ca certs ...
	I1204 21:17:50.136739   75012 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:17:50.136937   75012 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 21:17:50.136992   75012 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 21:17:50.137007   75012 certs.go:256] generating profile certs ...
	I1204 21:17:50.137129   75012 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/client.key
	I1204 21:17:50.137230   75012 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/apiserver.key.dbe51058
	I1204 21:17:50.137275   75012 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/proxy-client.key
	I1204 21:17:50.137393   75012 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 21:17:50.137422   75012 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 21:17:50.137433   75012 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 21:17:50.137463   75012 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 21:17:50.137484   75012 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 21:17:50.137505   75012 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 21:17:50.137548   75012 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:17:50.138146   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 21:17:50.168457   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 21:17:50.203050   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 21:17:50.227957   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 21:17:50.255463   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1204 21:17:50.283905   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1204 21:17:50.306300   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 21:17:50.328965   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 21:17:50.352366   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 21:17:50.373857   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 21:17:50.396406   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 21:17:50.417969   75012 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 21:17:50.433588   75012 ssh_runner.go:195] Run: openssl version
	I1204 21:17:50.438874   75012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 21:17:50.448896   75012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 21:17:50.453227   75012 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 21:17:50.453301   75012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 21:17:50.458793   75012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 21:17:50.468569   75012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 21:17:50.478055   75012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:17:50.482258   75012 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:17:50.482310   75012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:17:50.487402   75012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 21:17:50.500597   75012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 21:17:50.511367   75012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 21:17:50.516355   75012 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 21:17:50.516415   75012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 21:17:50.522233   75012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 21:17:50.532163   75012 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 21:17:50.536644   75012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 21:17:50.542343   75012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 21:17:50.547915   75012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 21:17:50.553464   75012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 21:17:50.559223   75012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 21:17:50.566119   75012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
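	Each openssl x509 ... -checkend 86400 call above asks whether a certificate expires within the next 24 hours. The same check in Go, sketched against an assumed local certificate path:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// certExpiresWithin reports whether the PEM certificate at path
	// expires within duration d, equivalent to `openssl x509 -checkend`.
	func certExpiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		expiring, err := certExpiresWithin("apiserver-kubelet-client.crt", 24*time.Hour)
		fmt.Println(expiring, err)
	}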
	I1204 21:17:50.571988   75012 kubeadm.go:392] StartCluster: {Name:no-preload-534766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-534766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.174 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:17:50.572068   75012 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 21:17:50.572135   75012 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:17:50.608793   75012 cri.go:89] found id: ""
	I1204 21:17:50.608879   75012 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 21:17:50.620108   75012 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1204 21:17:50.620133   75012 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1204 21:17:50.620210   75012 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1204 21:17:50.629506   75012 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1204 21:17:50.630887   75012 kubeconfig.go:125] found "no-preload-534766" server: "https://192.168.61.174:8443"
	I1204 21:17:50.633122   75012 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1204 21:17:50.642414   75012 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.174
	I1204 21:17:50.642453   75012 kubeadm.go:1160] stopping kube-system containers ...
	I1204 21:17:50.642468   75012 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1204 21:17:50.642533   75012 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:17:50.681325   75012 cri.go:89] found id: ""
	I1204 21:17:50.681393   75012 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1204 21:17:50.699577   75012 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:17:50.709090   75012 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:17:50.709108   75012 kubeadm.go:157] found existing configuration files:
	
	I1204 21:17:50.709152   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:17:50.717901   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:17:50.717983   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:17:50.727175   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:17:50.735929   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:17:50.736002   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:17:50.744954   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:17:50.753257   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:17:50.753306   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:17:50.762163   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:17:50.770113   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:17:50.770163   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:17:50.778937   75012 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:17:50.787853   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:50.902775   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:51.481273   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:51.689126   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:51.770117   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:51.859903   75012 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:17:51.859993   75012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:49.623769   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:51.624431   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:51.489287   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:51.989952   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:52.489428   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:52.988991   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:53.489424   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:53.989785   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:54.488957   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:54.989777   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:55.489738   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:55.989144   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:52.360655   75012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:52.860583   75012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:52.877280   75012 api_server.go:72] duration metric: took 1.017376864s to wait for apiserver process to appear ...
	I1204 21:17:52.877337   75012 api_server.go:88] waiting for apiserver healthz status ...
	I1204 21:17:52.877365   75012 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I1204 21:17:55.649083   75012 api_server.go:279] https://192.168.61.174:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:17:55.649115   75012 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:17:55.649144   75012 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I1204 21:17:55.655316   75012 api_server.go:279] https://192.168.61.174:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:17:55.655347   75012 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:17:55.877569   75012 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I1204 21:17:55.882206   75012 api_server.go:279] https://192.168.61.174:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:17:55.882235   75012 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:17:56.377778   75012 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I1204 21:17:56.385077   75012 api_server.go:279] https://192.168.61.174:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:17:56.385106   75012 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:17:56.877526   75012 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I1204 21:17:56.882072   75012 api_server.go:279] https://192.168.61.174:8443/healthz returned 200:
	ok
	I1204 21:17:56.890468   75012 api_server.go:141] control plane version: v1.31.2
	I1204 21:17:56.890494   75012 api_server.go:131] duration metric: took 4.013149625s to wait for apiserver health ...
	I1204 21:17:56.890503   75012 cni.go:84] Creating CNI manager for ""
	I1204 21:17:56.890509   75012 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:17:56.892501   75012 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 21:17:53.565824   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:56.064759   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:56.893859   75012 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 21:17:56.903947   75012 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1204 21:17:56.946638   75012 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 21:17:56.965137   75012 system_pods.go:59] 8 kube-system pods found
	I1204 21:17:56.965182   75012 system_pods.go:61] "coredns-7c65d6cfc9-kz2h6" [cf1cadfd-b230-48e0-8b3a-e082fed911a8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1204 21:17:56.965192   75012 system_pods.go:61] "etcd-no-preload-534766" [4150ee73-7ae8-40c0-a259-87375d6e809c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1204 21:17:56.965206   75012 system_pods.go:61] "kube-apiserver-no-preload-534766" [28c85f04-e634-48d2-a996-a1cb3ffb18cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1204 21:17:56.965215   75012 system_pods.go:61] "kube-controller-manager-no-preload-534766" [237872b9-1c2a-4c3e-b26a-d2581d08c936] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1204 21:17:56.965223   75012 system_pods.go:61] "kube-proxy-zb946" [871adaff-d1f6-4f8a-a7db-ec3f861bd9e3] Running
	I1204 21:17:56.965232   75012 system_pods.go:61] "kube-scheduler-no-preload-534766" [b00444c4-8f8e-4c76-a74f-9a57c91cb10d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1204 21:17:56.965240   75012 system_pods.go:61] "metrics-server-6867b74b74-wl8gw" [d7942614-93b1-4707-b471-a0dd38c96c54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:17:56.965246   75012 system_pods.go:61] "storage-provisioner" [062f6e56-6b2d-4ac4-acfd-881ff5171396] Running
	I1204 21:17:56.965254   75012 system_pods.go:74] duration metric: took 18.584748ms to wait for pod list to return data ...
	I1204 21:17:56.965269   75012 node_conditions.go:102] verifying NodePressure condition ...
	I1204 21:17:56.969187   75012 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 21:17:56.969221   75012 node_conditions.go:123] node cpu capacity is 2
	I1204 21:17:56.969232   75012 node_conditions.go:105] duration metric: took 3.958803ms to run NodePressure ...
	I1204 21:17:56.969248   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:53.625414   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:56.123857   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:56.489461   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:56.988952   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:57.489626   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:57.989474   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:58.489775   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:58.989218   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:59.489030   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:59.989163   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:00.489738   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:00.989048   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:00.989130   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:01.025049   75464 cri.go:89] found id: ""
	I1204 21:18:01.025100   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.025112   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:01.025124   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:01.025188   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:01.056420   75464 cri.go:89] found id: ""
	I1204 21:18:01.056444   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.056451   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:01.056456   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:01.056512   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:01.090847   75464 cri.go:89] found id: ""
	I1204 21:18:01.090872   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.090882   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:01.090889   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:01.090948   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:01.125984   75464 cri.go:89] found id: ""
	I1204 21:18:01.126013   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.126022   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:01.126030   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:01.126088   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:01.160828   75464 cri.go:89] found id: ""
	I1204 21:18:01.160856   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.160866   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:01.160873   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:01.160930   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:01.192601   75464 cri.go:89] found id: ""
	I1204 21:18:01.192629   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.192641   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:01.192649   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:01.192712   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:01.223093   75464 cri.go:89] found id: ""
	I1204 21:18:01.223119   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.223129   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:01.223136   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:01.223199   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:01.252668   75464 cri.go:89] found id: ""
	I1204 21:18:01.252692   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.252702   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:01.252713   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:01.252733   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 21:17:58.064895   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:00.065648   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:57.242821   75012 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1204 21:17:57.246805   75012 kubeadm.go:739] kubelet initialised
	I1204 21:17:57.246823   75012 kubeadm.go:740] duration metric: took 3.979496ms waiting for restarted kubelet to initialise ...
	I1204 21:17:57.246831   75012 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:17:57.250966   75012 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:57.254870   75012 pod_ready.go:98] node "no-preload-534766" hosting pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.254889   75012 pod_ready.go:82] duration metric: took 3.903445ms for pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace to be "Ready" ...
	E1204 21:17:57.254897   75012 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-534766" hosting pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.254903   75012 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:57.258465   75012 pod_ready.go:98] node "no-preload-534766" hosting pod "etcd-no-preload-534766" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.258484   75012 pod_ready.go:82] duration metric: took 3.574981ms for pod "etcd-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	E1204 21:17:57.258497   75012 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-534766" hosting pod "etcd-no-preload-534766" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.258503   75012 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:57.261881   75012 pod_ready.go:98] node "no-preload-534766" hosting pod "kube-apiserver-no-preload-534766" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.261896   75012 pod_ready.go:82] duration metric: took 3.388572ms for pod "kube-apiserver-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	E1204 21:17:57.261903   75012 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-534766" hosting pod "kube-apiserver-no-preload-534766" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.261908   75012 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:57.349579   75012 pod_ready.go:98] node "no-preload-534766" hosting pod "kube-controller-manager-no-preload-534766" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.349603   75012 pod_ready.go:82] duration metric: took 87.687706ms for pod "kube-controller-manager-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	E1204 21:17:57.349611   75012 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-534766" hosting pod "kube-controller-manager-no-preload-534766" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.349617   75012 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-zb946" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:57.751064   75012 pod_ready.go:93] pod "kube-proxy-zb946" in "kube-system" namespace has status "Ready":"True"
	I1204 21:17:57.751088   75012 pod_ready.go:82] duration metric: took 401.46314ms for pod "kube-proxy-zb946" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:57.751099   75012 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:59.756578   75012 pod_ready.go:103] pod "kube-scheduler-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:01.759056   75012 pod_ready.go:103] pod "kube-scheduler-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:58.125703   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:00.622314   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:02.624045   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	W1204 21:18:01.365301   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:01.365334   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:01.365348   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:01.440474   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:01.440503   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:01.475783   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:01.475815   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:01.525762   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:01.525791   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:04.038867   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:04.050789   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:04.050856   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:04.083319   75464 cri.go:89] found id: ""
	I1204 21:18:04.083345   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.083354   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:04.083360   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:04.083442   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:04.119555   75464 cri.go:89] found id: ""
	I1204 21:18:04.119584   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.119595   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:04.119602   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:04.119661   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:04.152499   75464 cri.go:89] found id: ""
	I1204 21:18:04.152529   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.152538   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:04.152544   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:04.152592   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:04.184678   75464 cri.go:89] found id: ""
	I1204 21:18:04.184705   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.184716   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:04.184724   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:04.184784   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:04.220006   75464 cri.go:89] found id: ""
	I1204 21:18:04.220038   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.220050   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:04.220058   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:04.220121   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:04.254841   75464 cri.go:89] found id: ""
	I1204 21:18:04.254871   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.254880   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:04.254887   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:04.254954   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:04.289126   75464 cri.go:89] found id: ""
	I1204 21:18:04.289163   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.289175   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:04.289189   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:04.289255   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:04.323036   75464 cri.go:89] found id: ""
	I1204 21:18:04.323067   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.323077   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:04.323089   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:04.323103   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:04.371548   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:04.371585   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:04.384651   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:04.384681   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:04.452247   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:04.452273   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:04.452288   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:04.527924   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:04.527965   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:02.564676   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:04.566721   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:04.260269   75012 pod_ready.go:103] pod "kube-scheduler-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:06.757334   75012 pod_ready.go:103] pod "kube-scheduler-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:05.123833   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:07.124130   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:07.100780   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:07.113549   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:07.113617   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:07.150930   75464 cri.go:89] found id: ""
	I1204 21:18:07.150964   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.150976   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:07.150984   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:07.151046   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:07.185223   75464 cri.go:89] found id: ""
	I1204 21:18:07.185254   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.185264   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:07.185271   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:07.185332   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:07.222423   75464 cri.go:89] found id: ""
	I1204 21:18:07.222449   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.222458   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:07.222463   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:07.222526   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:07.258926   75464 cri.go:89] found id: ""
	I1204 21:18:07.258952   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.258960   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:07.258966   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:07.259022   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:07.292424   75464 cri.go:89] found id: ""
	I1204 21:18:07.292467   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.292478   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:07.292505   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:07.292566   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:07.323354   75464 cri.go:89] found id: ""
	I1204 21:18:07.323397   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.323409   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:07.323416   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:07.323462   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:07.352085   75464 cri.go:89] found id: ""
	I1204 21:18:07.352106   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.352114   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:07.352121   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:07.352177   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:07.383335   75464 cri.go:89] found id: ""
	I1204 21:18:07.383364   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.383386   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:07.383397   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:07.383410   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:07.469409   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:07.469440   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:07.508442   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:07.508468   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:07.555103   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:07.555133   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:07.568938   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:07.568965   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:07.632515   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:10.133153   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:10.146482   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:10.146542   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:10.178660   75464 cri.go:89] found id: ""
	I1204 21:18:10.178694   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.178706   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:10.178714   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:10.178768   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:10.207815   75464 cri.go:89] found id: ""
	I1204 21:18:10.207836   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.207843   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:10.207849   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:10.207893   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:10.246253   75464 cri.go:89] found id: ""
	I1204 21:18:10.246283   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.246300   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:10.246307   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:10.246371   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:10.296820   75464 cri.go:89] found id: ""
	I1204 21:18:10.296862   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.296873   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:10.296881   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:10.296941   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:10.341855   75464 cri.go:89] found id: ""
	I1204 21:18:10.341885   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.341896   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:10.341904   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:10.341977   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:10.370283   75464 cri.go:89] found id: ""
	I1204 21:18:10.370311   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.370319   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:10.370324   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:10.370382   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:10.401149   75464 cri.go:89] found id: ""
	I1204 21:18:10.401177   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.401187   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:10.401195   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:10.401249   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:10.436026   75464 cri.go:89] found id: ""
	I1204 21:18:10.436058   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.436068   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:10.436082   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:10.436096   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:10.488499   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:10.488534   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:10.502316   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:10.502345   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:10.577694   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:10.577727   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:10.577754   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:10.657801   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:10.657835   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:07.064613   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:09.564473   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:09.257032   75012 pod_ready.go:103] pod "kube-scheduler-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:11.758214   75012 pod_ready.go:93] pod "kube-scheduler-no-preload-534766" in "kube-system" namespace has status "Ready":"True"
	I1204 21:18:11.758241   75012 pod_ready.go:82] duration metric: took 14.007134999s for pod "kube-scheduler-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:18:11.758255   75012 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace to be "Ready" ...
	I1204 21:18:09.623451   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:11.624433   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:13.195044   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:13.208486   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:13.208540   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:13.250608   75464 cri.go:89] found id: ""
	I1204 21:18:13.250632   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.250643   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:13.250650   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:13.250710   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:13.280897   75464 cri.go:89] found id: ""
	I1204 21:18:13.280922   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.280933   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:13.280940   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:13.281047   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:13.311664   75464 cri.go:89] found id: ""
	I1204 21:18:13.311686   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.311696   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:13.311702   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:13.311759   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:13.341158   75464 cri.go:89] found id: ""
	I1204 21:18:13.341187   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.341199   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:13.341206   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:13.341261   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:13.371887   75464 cri.go:89] found id: ""
	I1204 21:18:13.371908   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.371915   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:13.371922   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:13.371968   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:13.403036   75464 cri.go:89] found id: ""
	I1204 21:18:13.403064   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.403072   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:13.403077   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:13.403123   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:13.440657   75464 cri.go:89] found id: ""
	I1204 21:18:13.440682   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.440689   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:13.440694   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:13.440738   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:13.478384   75464 cri.go:89] found id: ""
	I1204 21:18:13.478413   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.478421   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:13.478430   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:13.478442   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:13.533364   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:13.533405   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:13.546299   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:13.546338   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:13.617067   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:13.617092   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:13.617108   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:13.697323   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:13.697355   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:16.235494   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:16.248551   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:16.248615   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:16.286875   75464 cri.go:89] found id: ""
	I1204 21:18:16.286904   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.286915   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:16.286922   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:16.286986   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:12.064198   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:14.565965   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:13.764062   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:15.764749   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:14.122381   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:16.123985   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:16.325441   75464 cri.go:89] found id: ""
	I1204 21:18:16.325469   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.325481   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:16.325486   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:16.325544   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:16.361896   75464 cri.go:89] found id: ""
	I1204 21:18:16.361919   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.361926   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:16.361932   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:16.361994   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:16.394290   75464 cri.go:89] found id: ""
	I1204 21:18:16.394315   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.394322   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:16.394328   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:16.394377   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:16.429685   75464 cri.go:89] found id: ""
	I1204 21:18:16.429713   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.429724   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:16.429731   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:16.429807   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:16.459942   75464 cri.go:89] found id: ""
	I1204 21:18:16.459982   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.459993   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:16.460000   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:16.460065   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:16.488957   75464 cri.go:89] found id: ""
	I1204 21:18:16.488982   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.488992   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:16.489005   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:16.489060   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:16.518311   75464 cri.go:89] found id: ""
	I1204 21:18:16.518346   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.518357   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:16.518369   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:16.518382   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:16.569753   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:16.569784   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:16.583689   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:16.583721   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:16.650086   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:16.650107   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:16.650120   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:16.732000   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:16.732046   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:19.270288   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:19.283231   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:19.283322   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:19.320680   75464 cri.go:89] found id: ""
	I1204 21:18:19.320712   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.320724   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:19.320732   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:19.320799   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:19.358318   75464 cri.go:89] found id: ""
	I1204 21:18:19.358352   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.358363   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:19.358370   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:19.358431   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:19.391181   75464 cri.go:89] found id: ""
	I1204 21:18:19.391208   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.391218   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:19.391224   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:19.391285   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:19.422319   75464 cri.go:89] found id: ""
	I1204 21:18:19.422345   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.422355   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:19.422362   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:19.422422   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:19.452909   75464 cri.go:89] found id: ""
	I1204 21:18:19.452941   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.452952   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:19.452960   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:19.453017   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:19.483548   75464 cri.go:89] found id: ""
	I1204 21:18:19.483582   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.483592   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:19.483600   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:19.483666   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:19.518776   75464 cri.go:89] found id: ""
	I1204 21:18:19.518810   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.518821   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:19.518828   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:19.518889   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:19.552455   75464 cri.go:89] found id: ""
	I1204 21:18:19.552487   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.552500   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:19.552513   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:19.552527   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:19.567348   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:19.567397   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:19.640782   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:19.640803   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:19.640815   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:19.721369   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:19.721400   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:19.765558   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:19.765590   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:17.065011   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:19.065236   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:21.565950   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:17.764887   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:19.766264   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:18.125223   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:20.623183   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:22.623901   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:22.315311   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:22.327974   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:22.328053   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:22.361960   75464 cri.go:89] found id: ""
	I1204 21:18:22.361984   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.361995   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:22.362002   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:22.362056   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:22.393481   75464 cri.go:89] found id: ""
	I1204 21:18:22.393506   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.393514   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:22.393520   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:22.393570   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:22.424233   75464 cri.go:89] found id: ""
	I1204 21:18:22.424261   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.424273   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:22.424280   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:22.424335   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:22.454307   75464 cri.go:89] found id: ""
	I1204 21:18:22.454335   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.454346   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:22.454354   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:22.454405   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:22.485880   75464 cri.go:89] found id: ""
	I1204 21:18:22.485905   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.485913   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:22.485918   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:22.485971   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:22.522382   75464 cri.go:89] found id: ""
	I1204 21:18:22.522408   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.522416   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:22.522421   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:22.522475   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:22.555179   75464 cri.go:89] found id: ""
	I1204 21:18:22.555202   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.555210   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:22.555215   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:22.555266   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:22.588587   75464 cri.go:89] found id: ""
	I1204 21:18:22.588608   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.588615   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:22.588622   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:22.588632   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:22.640369   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:22.640393   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:22.652322   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:22.652342   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:22.716150   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:22.716175   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:22.716195   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:22.792723   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:22.792749   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:25.329963   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:25.342514   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:25.342563   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:25.374518   75464 cri.go:89] found id: ""
	I1204 21:18:25.374543   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.374555   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:25.374562   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:25.374620   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:25.405479   75464 cri.go:89] found id: ""
	I1204 21:18:25.405520   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.405531   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:25.405538   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:25.405601   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:25.436844   75464 cri.go:89] found id: ""
	I1204 21:18:25.436867   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.436877   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:25.436884   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:25.436943   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:25.468887   75464 cri.go:89] found id: ""
	I1204 21:18:25.468910   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.468917   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:25.468923   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:25.468977   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:25.504326   75464 cri.go:89] found id: ""
	I1204 21:18:25.504348   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.504355   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:25.504361   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:25.504410   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:25.542531   75464 cri.go:89] found id: ""
	I1204 21:18:25.542552   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.542560   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:25.542566   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:25.542626   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:25.576293   75464 cri.go:89] found id: ""
	I1204 21:18:25.576316   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.576330   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:25.576338   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:25.576389   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:25.609662   75464 cri.go:89] found id: ""
	I1204 21:18:25.609692   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.609700   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:25.609708   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:25.609724   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:25.665411   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:25.665446   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:25.680149   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:25.680183   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:25.751100   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:25.751123   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:25.751140   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:25.838913   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:25.838952   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:24.065487   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:26.565568   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:22.264581   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:24.268000   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:26.764294   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:25.123981   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:27.125094   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:28.379209   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:28.392708   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:28.392771   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:28.426519   75464 cri.go:89] found id: ""
	I1204 21:18:28.426547   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.426555   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:28.426561   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:28.426608   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:28.459648   75464 cri.go:89] found id: ""
	I1204 21:18:28.459678   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.459689   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:28.459696   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:28.459757   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:28.489982   75464 cri.go:89] found id: ""
	I1204 21:18:28.490010   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.490021   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:28.490029   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:28.490101   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:28.525203   75464 cri.go:89] found id: ""
	I1204 21:18:28.525228   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.525235   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:28.525240   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:28.525285   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:28.554808   75464 cri.go:89] found id: ""
	I1204 21:18:28.554836   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.554845   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:28.554850   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:28.554911   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:28.586406   75464 cri.go:89] found id: ""
	I1204 21:18:28.586427   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.586434   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:28.586441   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:28.586484   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:28.622419   75464 cri.go:89] found id: ""
	I1204 21:18:28.622444   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.622455   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:28.622462   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:28.622520   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:28.651604   75464 cri.go:89] found id: ""
	I1204 21:18:28.651625   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.651632   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:28.651639   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:28.651654   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:28.714430   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:28.714458   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:28.714473   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:28.791444   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:28.791472   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:28.827808   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:28.827831   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:28.875308   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:28.875336   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:28.566277   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:30.566465   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:28.765108   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:30.765282   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:29.624139   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:31.624944   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:31.388578   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:31.401539   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:31.401598   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:31.443462   75464 cri.go:89] found id: ""
	I1204 21:18:31.443496   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.443504   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:31.443509   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:31.443557   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:31.482522   75464 cri.go:89] found id: ""
	I1204 21:18:31.482548   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.482559   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:31.482568   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:31.482623   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:31.520579   75464 cri.go:89] found id: ""
	I1204 21:18:31.520609   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.520618   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:31.520624   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:31.520684   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:31.559637   75464 cri.go:89] found id: ""
	I1204 21:18:31.559683   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.559692   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:31.559699   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:31.559761   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:31.592633   75464 cri.go:89] found id: ""
	I1204 21:18:31.592665   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.592677   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:31.592685   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:31.592748   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:31.627002   75464 cri.go:89] found id: ""
	I1204 21:18:31.627022   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.627029   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:31.627035   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:31.627083   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:31.663333   75464 cri.go:89] found id: ""
	I1204 21:18:31.663380   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.663392   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:31.663400   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:31.663465   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:31.697813   75464 cri.go:89] found id: ""
	I1204 21:18:31.697848   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.697860   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:31.697869   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:31.697882   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:31.747666   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:31.747701   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:31.761371   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:31.761402   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:31.831098   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:31.831123   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:31.831143   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:31.912161   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:31.912199   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:34.450322   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:34.463442   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:34.463503   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:34.497333   75464 cri.go:89] found id: ""
	I1204 21:18:34.497363   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.497371   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:34.497377   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:34.497449   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:34.531057   75464 cri.go:89] found id: ""
	I1204 21:18:34.531093   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.531105   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:34.531113   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:34.531180   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:34.566899   75464 cri.go:89] found id: ""
	I1204 21:18:34.566926   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.566934   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:34.566940   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:34.566989   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:34.600393   75464 cri.go:89] found id: ""
	I1204 21:18:34.600422   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.600430   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:34.600436   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:34.600503   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:34.636027   75464 cri.go:89] found id: ""
	I1204 21:18:34.636060   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.636072   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:34.636082   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:34.636159   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:34.670624   75464 cri.go:89] found id: ""
	I1204 21:18:34.670650   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.670658   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:34.670666   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:34.670727   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:34.702209   75464 cri.go:89] found id: ""
	I1204 21:18:34.702241   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.702253   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:34.702261   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:34.702330   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:34.733135   75464 cri.go:89] found id: ""
	I1204 21:18:34.733156   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.733174   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:34.733191   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:34.733207   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:34.768969   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:34.768993   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:34.816493   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:34.816531   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:34.829450   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:34.829476   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:34.897968   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:34.898000   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:34.898018   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:32.566614   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:35.064944   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:33.264871   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:35.265285   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:33.625223   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:36.123006   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:37.477937   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:37.491778   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:37.491856   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:37.529962   75464 cri.go:89] found id: ""
	I1204 21:18:37.529995   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.530005   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:37.530013   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:37.530081   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:37.564769   75464 cri.go:89] found id: ""
	I1204 21:18:37.564794   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.564805   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:37.564813   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:37.564879   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:37.601680   75464 cri.go:89] found id: ""
	I1204 21:18:37.601708   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.601720   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:37.601726   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:37.601796   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:37.637221   75464 cri.go:89] found id: ""
	I1204 21:18:37.637247   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.637255   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:37.637261   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:37.637326   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:37.673103   75464 cri.go:89] found id: ""
	I1204 21:18:37.673127   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.673135   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:37.673140   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:37.673200   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:37.710108   75464 cri.go:89] found id: ""
	I1204 21:18:37.710134   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.710147   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:37.710154   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:37.710216   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:37.741506   75464 cri.go:89] found id: ""
	I1204 21:18:37.741530   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.741538   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:37.741544   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:37.741596   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:37.775320   75464 cri.go:89] found id: ""
	I1204 21:18:37.775343   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.775350   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:37.775358   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:37.775389   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:37.839591   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:37.839610   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:37.839633   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:37.915174   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:37.915216   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:37.958900   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:37.958930   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:38.010383   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:38.010418   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:40.525306   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:40.537648   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:40.537706   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:40.573932   75464 cri.go:89] found id: ""
	I1204 21:18:40.573962   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.573973   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:40.573980   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:40.574041   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:40.603917   75464 cri.go:89] found id: ""
	I1204 21:18:40.603943   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.603952   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:40.603961   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:40.604018   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:40.636601   75464 cri.go:89] found id: ""
	I1204 21:18:40.636630   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.636641   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:40.636649   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:40.636710   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:40.673040   75464 cri.go:89] found id: ""
	I1204 21:18:40.673073   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.673085   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:40.673093   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:40.673158   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:40.705330   75464 cri.go:89] found id: ""
	I1204 21:18:40.705357   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.705364   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:40.705371   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:40.705434   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:40.738099   75464 cri.go:89] found id: ""
	I1204 21:18:40.738123   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.738130   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:40.738137   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:40.738184   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:40.770558   75464 cri.go:89] found id: ""
	I1204 21:18:40.770583   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.770590   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:40.770596   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:40.770656   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:40.803461   75464 cri.go:89] found id: ""
	I1204 21:18:40.803489   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.803501   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:40.803512   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:40.803529   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:40.852684   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:40.852726   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:40.865768   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:40.865795   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:40.932542   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:40.932569   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:40.932587   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:41.013378   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:41.013419   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:37.065100   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:39.565212   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:41.566163   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:37.765520   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:39.768005   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:38.623095   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:40.623359   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:43.552845   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:43.567081   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:43.567149   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:43.600562   75464 cri.go:89] found id: ""
	I1204 21:18:43.600595   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.600605   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:43.600618   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:43.600683   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:43.638922   75464 cri.go:89] found id: ""
	I1204 21:18:43.638955   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.638965   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:43.638972   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:43.639037   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:43.674473   75464 cri.go:89] found id: ""
	I1204 21:18:43.674501   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.674509   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:43.674516   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:43.674569   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:43.721312   75464 cri.go:89] found id: ""
	I1204 21:18:43.721339   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.721350   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:43.721357   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:43.721420   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:43.760113   75464 cri.go:89] found id: ""
	I1204 21:18:43.760150   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.760161   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:43.760169   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:43.760233   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:43.794383   75464 cri.go:89] found id: ""
	I1204 21:18:43.794410   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.794418   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:43.794423   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:43.794475   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:43.826611   75464 cri.go:89] found id: ""
	I1204 21:18:43.826646   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.826657   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:43.826666   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:43.826728   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:43.859459   75464 cri.go:89] found id: ""
	I1204 21:18:43.859489   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.859496   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:43.859505   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:43.859518   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:43.871740   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:43.871762   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:43.940838   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:43.940862   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:43.940874   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:44.018931   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:44.018967   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:44.054754   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:44.054786   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:44.066258   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:46.565764   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:42.264400   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:44.765338   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:43.124128   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:45.624394   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:46.614407   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:46.627953   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:46.628009   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:46.662223   75464 cri.go:89] found id: ""
	I1204 21:18:46.662254   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.662263   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:46.662268   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:46.662333   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:46.695931   75464 cri.go:89] found id: ""
	I1204 21:18:46.695955   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.695963   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:46.695969   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:46.696014   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:46.728731   75464 cri.go:89] found id: ""
	I1204 21:18:46.728761   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.728773   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:46.728780   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:46.728841   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:46.762466   75464 cri.go:89] found id: ""
	I1204 21:18:46.762491   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.762499   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:46.762544   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:46.762613   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:46.797253   75464 cri.go:89] found id: ""
	I1204 21:18:46.797279   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.797288   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:46.797295   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:46.797357   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:46.833757   75464 cri.go:89] found id: ""
	I1204 21:18:46.833783   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.833790   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:46.833797   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:46.833845   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:46.865105   75464 cri.go:89] found id: ""
	I1204 21:18:46.865135   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.865147   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:46.865154   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:46.865212   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:46.896358   75464 cri.go:89] found id: ""
	I1204 21:18:46.896385   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.896397   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:46.896408   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:46.896426   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:46.932507   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:46.932536   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:46.985490   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:46.985517   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:46.999509   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:46.999538   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:47.075096   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:47.075119   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:47.075133   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:49.654450   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:49.667708   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:49.667761   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:49.699864   75464 cri.go:89] found id: ""
	I1204 21:18:49.699885   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.699894   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:49.699902   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:49.699954   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:49.732972   75464 cri.go:89] found id: ""
	I1204 21:18:49.732996   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.733004   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:49.733009   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:49.733055   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:49.765103   75464 cri.go:89] found id: ""
	I1204 21:18:49.765124   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.765135   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:49.765142   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:49.765208   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:49.796309   75464 cri.go:89] found id: ""
	I1204 21:18:49.796330   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.796337   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:49.796343   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:49.796401   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:49.826818   75464 cri.go:89] found id: ""
	I1204 21:18:49.826844   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.826855   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:49.826863   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:49.826921   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:49.879437   75464 cri.go:89] found id: ""
	I1204 21:18:49.879463   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.879471   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:49.879477   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:49.879525   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:49.910837   75464 cri.go:89] found id: ""
	I1204 21:18:49.910862   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.910872   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:49.910878   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:49.910937   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:49.941894   75464 cri.go:89] found id: ""
	I1204 21:18:49.941918   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.941927   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:49.941937   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:49.941950   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:49.994300   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:49.994339   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:50.008171   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:50.008207   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:50.083770   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:50.083799   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:50.083815   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:50.161338   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:50.161371   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:49.064407   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:51.066565   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:47.264889   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:49.764731   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:48.123660   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:50.125339   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:52.624437   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:52.699023   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:52.711524   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:52.711599   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:52.744668   75464 cri.go:89] found id: ""
	I1204 21:18:52.744703   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.744715   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:52.744724   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:52.744794   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:52.780504   75464 cri.go:89] found id: ""
	I1204 21:18:52.780529   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.780537   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:52.780546   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:52.780596   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:52.811678   75464 cri.go:89] found id: ""
	I1204 21:18:52.811704   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.811721   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:52.811749   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:52.811815   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:52.849178   75464 cri.go:89] found id: ""
	I1204 21:18:52.849205   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.849216   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:52.849223   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:52.849285   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:52.881715   75464 cri.go:89] found id: ""
	I1204 21:18:52.881740   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.881748   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:52.881753   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:52.881801   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:52.912463   75464 cri.go:89] found id: ""
	I1204 21:18:52.912484   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.912493   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:52.912498   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:52.912541   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:52.941846   75464 cri.go:89] found id: ""
	I1204 21:18:52.941867   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.941874   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:52.941879   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:52.941933   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:52.972043   75464 cri.go:89] found id: ""
	I1204 21:18:52.972067   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.972075   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:52.972083   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:52.972092   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:53.022049   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:53.022078   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:53.034971   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:53.034998   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:53.105058   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:53.105080   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:53.105092   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:53.185050   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:53.185086   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:55.724189   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:55.737378   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:55.737439   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:55.772286   75464 cri.go:89] found id: ""
	I1204 21:18:55.772311   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.772319   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:55.772324   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:55.772375   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:55.805040   75464 cri.go:89] found id: ""
	I1204 21:18:55.805061   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.805070   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:55.805075   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:55.805124   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:55.836500   75464 cri.go:89] found id: ""
	I1204 21:18:55.836528   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.836539   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:55.836553   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:55.836624   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:55.869715   75464 cri.go:89] found id: ""
	I1204 21:18:55.869740   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.869749   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:55.869754   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:55.869810   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:55.901596   75464 cri.go:89] found id: ""
	I1204 21:18:55.901623   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.901634   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:55.901641   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:55.901705   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:55.931865   75464 cri.go:89] found id: ""
	I1204 21:18:55.931890   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.931900   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:55.931907   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:55.931971   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:55.962990   75464 cri.go:89] found id: ""
	I1204 21:18:55.963016   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.963025   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:55.963030   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:55.963081   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:55.992110   75464 cri.go:89] found id: ""
	I1204 21:18:55.992132   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.992141   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:55.992149   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:55.992159   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:56.027234   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:56.027271   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:56.080250   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:56.080300   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:56.095943   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:56.095972   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:56.166704   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:56.166732   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:56.166744   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:53.565002   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:55.565734   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:52.264986   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:54.764517   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:54.624734   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:57.123337   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:58.745119   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:58.758304   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:58.758365   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:58.797221   75464 cri.go:89] found id: ""
	I1204 21:18:58.797245   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.797256   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:58.797264   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:58.797325   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:58.833333   75464 cri.go:89] found id: ""
	I1204 21:18:58.833358   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.833368   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:58.833374   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:58.833431   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:58.867765   75464 cri.go:89] found id: ""
	I1204 21:18:58.867790   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.867802   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:58.867810   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:58.867874   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:58.900290   75464 cri.go:89] found id: ""
	I1204 21:18:58.900326   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.900335   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:58.900386   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:58.900441   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:58.934627   75464 cri.go:89] found id: ""
	I1204 21:18:58.934660   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.934672   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:58.934679   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:58.934743   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:58.967410   75464 cri.go:89] found id: ""
	I1204 21:18:58.967442   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.967455   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:58.967463   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:58.967534   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:58.997635   75464 cri.go:89] found id: ""
	I1204 21:18:58.997665   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.997678   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:58.997685   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:58.997742   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:59.032135   75464 cri.go:89] found id: ""
	I1204 21:18:59.032162   75464 logs.go:282] 0 containers: []
	W1204 21:18:59.032181   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:59.032190   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:59.032214   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:59.101453   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:59.101477   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:59.101490   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:59.182218   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:59.182266   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:59.218062   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:59.218088   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:59.269536   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:59.269567   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:58.063715   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:00.565067   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:57.264306   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:59.266030   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:01.765163   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:59.124120   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:01.623069   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:01.784237   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:01.797810   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:01.797888   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:01.833235   75464 cri.go:89] found id: ""
	I1204 21:19:01.833267   75464 logs.go:282] 0 containers: []
	W1204 21:19:01.833279   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:01.833287   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:01.833345   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:01.866869   75464 cri.go:89] found id: ""
	I1204 21:19:01.866898   75464 logs.go:282] 0 containers: []
	W1204 21:19:01.866906   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:01.866912   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:01.866962   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:01.905512   75464 cri.go:89] found id: ""
	I1204 21:19:01.905539   75464 logs.go:282] 0 containers: []
	W1204 21:19:01.905547   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:01.905552   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:01.905608   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:01.940519   75464 cri.go:89] found id: ""
	I1204 21:19:01.940540   75464 logs.go:282] 0 containers: []
	W1204 21:19:01.940548   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:01.940554   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:01.940599   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:01.968900   75464 cri.go:89] found id: ""
	I1204 21:19:01.968922   75464 logs.go:282] 0 containers: []
	W1204 21:19:01.968931   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:01.968938   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:01.968986   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:02.011007   75464 cri.go:89] found id: ""
	I1204 21:19:02.011032   75464 logs.go:282] 0 containers: []
	W1204 21:19:02.011039   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:02.011045   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:02.011097   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:02.069395   75464 cri.go:89] found id: ""
	I1204 21:19:02.069422   75464 logs.go:282] 0 containers: []
	W1204 21:19:02.069432   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:02.069438   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:02.069483   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:02.116103   75464 cri.go:89] found id: ""
	I1204 21:19:02.116129   75464 logs.go:282] 0 containers: []
	W1204 21:19:02.116141   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:02.116151   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:02.116162   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:02.152582   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:02.152617   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:02.207765   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:02.207796   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:02.221923   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:02.221946   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:02.286568   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:02.286593   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:02.286608   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:04.861905   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:04.875045   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:04.875106   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:04.907565   75464 cri.go:89] found id: ""
	I1204 21:19:04.907591   75464 logs.go:282] 0 containers: []
	W1204 21:19:04.907601   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:04.907609   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:04.907667   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:04.937783   75464 cri.go:89] found id: ""
	I1204 21:19:04.937801   75464 logs.go:282] 0 containers: []
	W1204 21:19:04.937808   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:04.937813   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:04.937855   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:04.974668   75464 cri.go:89] found id: ""
	I1204 21:19:04.974695   75464 logs.go:282] 0 containers: []
	W1204 21:19:04.974703   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:04.974708   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:04.974764   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:05.008970   75464 cri.go:89] found id: ""
	I1204 21:19:05.008996   75464 logs.go:282] 0 containers: []
	W1204 21:19:05.009008   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:05.009016   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:05.009078   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:05.044719   75464 cri.go:89] found id: ""
	I1204 21:19:05.044748   75464 logs.go:282] 0 containers: []
	W1204 21:19:05.044757   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:05.044765   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:05.044834   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:05.082492   75464 cri.go:89] found id: ""
	I1204 21:19:05.082518   75464 logs.go:282] 0 containers: []
	W1204 21:19:05.082527   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:05.082533   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:05.082594   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:05.115540   75464 cri.go:89] found id: ""
	I1204 21:19:05.115569   75464 logs.go:282] 0 containers: []
	W1204 21:19:05.115578   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:05.115584   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:05.115643   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:05.150064   75464 cri.go:89] found id: ""
	I1204 21:19:05.150088   75464 logs.go:282] 0 containers: []
	W1204 21:19:05.150096   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:05.150104   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:05.150116   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:05.220591   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:05.220619   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:05.220635   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:05.298237   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:05.298269   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:05.337286   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:05.337312   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:05.394282   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:05.394313   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:03.064580   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:05.065897   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:04.263946   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:06.264605   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:03.624413   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:06.124113   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:07.907153   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:07.923906   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:07.923967   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:07.969672   75464 cri.go:89] found id: ""
	I1204 21:19:07.969698   75464 logs.go:282] 0 containers: []
	W1204 21:19:07.969706   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:07.969712   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:07.969761   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:08.019452   75464 cri.go:89] found id: ""
	I1204 21:19:08.019488   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.019496   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:08.019502   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:08.019551   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:08.064730   75464 cri.go:89] found id: ""
	I1204 21:19:08.064757   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.064766   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:08.064771   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:08.064822   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:08.097390   75464 cri.go:89] found id: ""
	I1204 21:19:08.097415   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.097424   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:08.097430   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:08.097481   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:08.134612   75464 cri.go:89] found id: ""
	I1204 21:19:08.134640   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.134649   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:08.134655   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:08.134706   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:08.167328   75464 cri.go:89] found id: ""
	I1204 21:19:08.167355   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.167363   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:08.167380   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:08.167447   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:08.196379   75464 cri.go:89] found id: ""
	I1204 21:19:08.196401   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.196411   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:08.196419   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:08.196475   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:08.227953   75464 cri.go:89] found id: ""
	I1204 21:19:08.227983   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.227994   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:08.228007   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:08.228021   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:08.304644   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:08.304672   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:08.340803   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:08.340835   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:08.392000   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:08.392034   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:08.405498   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:08.405533   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:08.472505   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:10.972755   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:10.986250   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:10.986316   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:11.020562   75464 cri.go:89] found id: ""
	I1204 21:19:11.020590   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.020601   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:11.020609   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:11.020671   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:11.052966   75464 cri.go:89] found id: ""
	I1204 21:19:11.052989   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.052999   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:11.053006   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:11.053062   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:11.085999   75464 cri.go:89] found id: ""
	I1204 21:19:11.086025   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.086032   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:11.086038   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:11.086085   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:11.125104   75464 cri.go:89] found id: ""
	I1204 21:19:11.125134   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.125145   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:11.125152   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:11.125207   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:11.161373   75464 cri.go:89] found id: ""
	I1204 21:19:11.161406   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.161418   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:11.161426   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:11.161487   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:11.192514   75464 cri.go:89] found id: ""
	I1204 21:19:11.192541   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.192552   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:11.192559   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:11.192617   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:11.225497   75464 cri.go:89] found id: ""
	I1204 21:19:11.225514   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.225522   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:11.225528   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:11.225573   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:11.258695   75464 cri.go:89] found id: ""
	I1204 21:19:11.258718   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.258730   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:11.258740   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:11.258753   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:11.292427   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:11.292456   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:07.565769   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:10.064738   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:08.264914   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:10.765337   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:08.125281   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:10.623449   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:11.346115   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:11.346143   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:11.360086   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:11.360110   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:11.430194   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:11.430216   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:11.430228   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:14.011320   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:14.024214   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:14.024281   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:14.060155   75464 cri.go:89] found id: ""
	I1204 21:19:14.060184   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.060196   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:14.060204   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:14.060269   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:14.095483   75464 cri.go:89] found id: ""
	I1204 21:19:14.095524   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.095536   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:14.095544   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:14.095621   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:14.130533   75464 cri.go:89] found id: ""
	I1204 21:19:14.130565   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.130573   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:14.130579   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:14.130650   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:14.167349   75464 cri.go:89] found id: ""
	I1204 21:19:14.167386   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.167397   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:14.167405   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:14.167477   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:14.200197   75464 cri.go:89] found id: ""
	I1204 21:19:14.200229   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.200240   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:14.200247   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:14.200315   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:14.233664   75464 cri.go:89] found id: ""
	I1204 21:19:14.233696   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.233707   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:14.233715   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:14.233779   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:14.268193   75464 cri.go:89] found id: ""
	I1204 21:19:14.268232   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.268243   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:14.268250   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:14.268311   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:14.305771   75464 cri.go:89] found id: ""
	I1204 21:19:14.305804   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.305813   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:14.305822   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:14.305834   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:14.361227   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:14.361274   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:14.375013   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:14.375046   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:14.444904   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:14.444945   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:14.444958   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:14.523934   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:14.523969   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:12.565614   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:14.565696   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:13.265412   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:15.763989   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:13.122823   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:15.124232   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:17.622977   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:17.063306   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:17.076624   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:17.076675   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:17.110681   75464 cri.go:89] found id: ""
	I1204 21:19:17.110721   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.110744   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:17.110756   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:17.110816   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:17.150695   75464 cri.go:89] found id: ""
	I1204 21:19:17.150716   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.150724   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:17.150730   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:17.150777   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:17.187712   75464 cri.go:89] found id: ""
	I1204 21:19:17.187745   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.187757   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:17.187765   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:17.187826   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:17.220349   75464 cri.go:89] found id: ""
	I1204 21:19:17.220377   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.220388   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:17.220396   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:17.220463   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:17.254691   75464 cri.go:89] found id: ""
	I1204 21:19:17.254724   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.254736   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:17.254746   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:17.254869   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:17.287163   75464 cri.go:89] found id: ""
	I1204 21:19:17.287191   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.287200   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:17.287206   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:17.287264   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:17.318924   75464 cri.go:89] found id: ""
	I1204 21:19:17.318949   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.318957   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:17.318963   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:17.319011   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:17.351074   75464 cri.go:89] found id: ""
	I1204 21:19:17.351106   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.351119   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:17.351128   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:17.351143   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:17.404999   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:17.405037   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:17.419781   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:17.419814   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:17.485638   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:17.485659   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:17.485670   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:17.568851   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:17.568885   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:20.107005   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:20.120184   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:20.120257   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:20.153375   75464 cri.go:89] found id: ""
	I1204 21:19:20.153404   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.153413   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:20.153419   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:20.153475   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:20.192102   75464 cri.go:89] found id: ""
	I1204 21:19:20.192129   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.192141   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:20.192148   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:20.192213   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:20.235702   75464 cri.go:89] found id: ""
	I1204 21:19:20.235730   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.235740   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:20.235747   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:20.235823   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:20.272357   75464 cri.go:89] found id: ""
	I1204 21:19:20.272385   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.272397   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:20.272406   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:20.272477   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:20.307784   75464 cri.go:89] found id: ""
	I1204 21:19:20.307809   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.307820   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:20.307827   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:20.307889   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:20.339469   75464 cri.go:89] found id: ""
	I1204 21:19:20.339504   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.339514   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:20.339522   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:20.339586   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:20.369973   75464 cri.go:89] found id: ""
	I1204 21:19:20.369996   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.370003   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:20.370010   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:20.370081   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:20.400569   75464 cri.go:89] found id: ""
	I1204 21:19:20.400589   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.400596   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:20.400604   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:20.400618   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:20.449274   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:20.449316   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:20.463556   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:20.463589   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:20.534760   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:20.534779   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:20.534791   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:20.613205   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:20.613234   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:17.064355   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:19.566643   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:17.764939   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:20.265576   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:19.624775   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:22.124297   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:23.149411   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:23.163040   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:23.163104   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:23.198689   75464 cri.go:89] found id: ""
	I1204 21:19:23.198721   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.198730   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:23.198736   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:23.198789   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:23.229754   75464 cri.go:89] found id: ""
	I1204 21:19:23.229783   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.229792   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:23.229797   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:23.229867   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:23.263366   75464 cri.go:89] found id: ""
	I1204 21:19:23.263406   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.263418   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:23.263425   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:23.263523   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:23.308773   75464 cri.go:89] found id: ""
	I1204 21:19:23.308797   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.308805   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:23.308811   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:23.308858   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:23.344573   75464 cri.go:89] found id: ""
	I1204 21:19:23.344600   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.344613   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:23.344620   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:23.344689   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:23.375218   75464 cri.go:89] found id: ""
	I1204 21:19:23.375244   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.375253   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:23.375259   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:23.375321   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:23.405878   75464 cri.go:89] found id: ""
	I1204 21:19:23.405913   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.405923   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:23.405929   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:23.405979   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:23.442547   75464 cri.go:89] found id: ""
	I1204 21:19:23.442572   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.442580   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:23.442588   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:23.442599   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:23.457476   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:23.457503   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:23.526060   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:23.526088   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:23.526153   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:23.606683   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:23.606729   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:23.648224   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:23.648266   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:26.203216   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:26.215838   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:26.215886   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:26.248425   75464 cri.go:89] found id: ""
	I1204 21:19:26.248461   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.248474   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:26.248490   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:26.248558   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:26.282982   75464 cri.go:89] found id: ""
	I1204 21:19:26.283011   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.283022   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:26.283030   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:26.283094   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:22.064831   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:24.565123   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:22.763526   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:24.764364   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:26.764973   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:24.624174   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:26.624220   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:26.316656   75464 cri.go:89] found id: ""
	I1204 21:19:26.316690   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.316702   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:26.316710   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:26.316778   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:26.352730   75464 cri.go:89] found id: ""
	I1204 21:19:26.352758   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.352766   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:26.352772   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:26.352819   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:26.385955   75464 cri.go:89] found id: ""
	I1204 21:19:26.385981   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.385991   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:26.386000   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:26.386065   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:26.418814   75464 cri.go:89] found id: ""
	I1204 21:19:26.418838   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.418846   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:26.418852   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:26.418900   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:26.455442   75464 cri.go:89] found id: ""
	I1204 21:19:26.455471   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.455483   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:26.455491   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:26.455561   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:26.498287   75464 cri.go:89] found id: ""
	I1204 21:19:26.498314   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.498322   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:26.498331   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:26.498345   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:26.512282   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:26.512312   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:26.576340   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:26.576366   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:26.576383   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:26.656234   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:26.656272   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:26.692676   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:26.692705   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:29.246548   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:29.261241   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:29.261310   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:29.297940   75464 cri.go:89] found id: ""
	I1204 21:19:29.297975   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.297987   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:29.297995   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:29.298060   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:29.330887   75464 cri.go:89] found id: ""
	I1204 21:19:29.330918   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.330930   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:29.330937   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:29.331001   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:29.364114   75464 cri.go:89] found id: ""
	I1204 21:19:29.364145   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.364152   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:29.364158   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:29.364214   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:29.397320   75464 cri.go:89] found id: ""
	I1204 21:19:29.397349   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.397357   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:29.397363   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:29.397410   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:29.430850   75464 cri.go:89] found id: ""
	I1204 21:19:29.430880   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.430892   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:29.430900   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:29.430965   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:29.464447   75464 cri.go:89] found id: ""
	I1204 21:19:29.464475   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.464484   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:29.464498   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:29.464564   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:29.497112   75464 cri.go:89] found id: ""
	I1204 21:19:29.497146   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.497158   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:29.497166   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:29.497229   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:29.533048   75464 cri.go:89] found id: ""
	I1204 21:19:29.533071   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.533080   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:29.533088   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:29.533099   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:29.584390   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:29.584424   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:29.598341   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:29.598369   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:29.663240   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:29.663264   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:29.663278   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:29.744146   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:29.744184   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:27.064827   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:29.065174   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:31.565105   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:28.765480   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:31.265234   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:29.123831   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:31.623570   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:32.282931   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:32.296622   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:32.296683   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:32.330253   75464 cri.go:89] found id: ""
	I1204 21:19:32.330285   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.330297   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:32.330305   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:32.330370   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:32.363547   75464 cri.go:89] found id: ""
	I1204 21:19:32.363575   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.363588   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:32.363596   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:32.363661   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:32.396745   75464 cri.go:89] found id: ""
	I1204 21:19:32.396770   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.396781   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:32.396790   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:32.396851   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:32.432533   75464 cri.go:89] found id: ""
	I1204 21:19:32.432559   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.432569   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:32.432577   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:32.432640   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:32.470292   75464 cri.go:89] found id: ""
	I1204 21:19:32.470317   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.470327   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:32.470335   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:32.470401   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:32.502791   75464 cri.go:89] found id: ""
	I1204 21:19:32.502817   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.502824   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:32.502835   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:32.502900   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:32.536220   75464 cri.go:89] found id: ""
	I1204 21:19:32.536246   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.536254   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:32.536286   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:32.536344   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:32.570072   75464 cri.go:89] found id: ""
	I1204 21:19:32.570094   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.570102   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:32.570110   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:32.570127   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:32.624916   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:32.624964   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:32.638299   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:32.638328   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:32.704827   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:32.704855   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:32.704873   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:32.782324   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:32.782356   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:35.324136   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:35.337071   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:35.337132   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:35.368651   75464 cri.go:89] found id: ""
	I1204 21:19:35.368672   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.368679   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:35.368685   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:35.368731   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:35.402069   75464 cri.go:89] found id: ""
	I1204 21:19:35.402088   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.402099   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:35.402105   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:35.402156   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:35.432328   75464 cri.go:89] found id: ""
	I1204 21:19:35.432356   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.432367   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:35.432380   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:35.432440   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:35.465334   75464 cri.go:89] found id: ""
	I1204 21:19:35.465356   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.465363   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:35.465369   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:35.465440   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:35.497416   75464 cri.go:89] found id: ""
	I1204 21:19:35.497449   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.497462   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:35.497474   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:35.497535   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:35.533106   75464 cri.go:89] found id: ""
	I1204 21:19:35.533134   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.533145   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:35.533154   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:35.533216   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:35.570519   75464 cri.go:89] found id: ""
	I1204 21:19:35.570546   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.570555   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:35.570562   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:35.570628   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:35.601380   75464 cri.go:89] found id: ""
	I1204 21:19:35.601413   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.601424   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:35.601434   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:35.601455   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:35.656383   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:35.656420   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:35.671667   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:35.671696   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:35.737690   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:35.737716   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:35.737733   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:35.818129   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:35.818165   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:34.063889   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:36.064864   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:33.765136   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:35.765598   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:33.624840   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:35.624972   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:38.356596   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:38.369177   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:38.369235   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:38.401263   75464 cri.go:89] found id: ""
	I1204 21:19:38.401289   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.401301   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:38.401308   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:38.401379   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:38.432751   75464 cri.go:89] found id: ""
	I1204 21:19:38.432777   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.432786   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:38.432792   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:38.432853   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:38.465866   75464 cri.go:89] found id: ""
	I1204 21:19:38.465889   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.465898   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:38.465904   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:38.465954   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:38.508720   75464 cri.go:89] found id: ""
	I1204 21:19:38.508752   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.508763   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:38.508771   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:38.508827   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:38.543609   75464 cri.go:89] found id: ""
	I1204 21:19:38.543640   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.543649   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:38.543654   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:38.543728   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:38.579205   75464 cri.go:89] found id: ""
	I1204 21:19:38.579225   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.579233   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:38.579239   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:38.579286   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:38.616446   75464 cri.go:89] found id: ""
	I1204 21:19:38.616480   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.616492   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:38.616500   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:38.616563   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:38.651847   75464 cri.go:89] found id: ""
	I1204 21:19:38.651879   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.651893   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:38.651905   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:38.651920   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:38.730904   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:38.730940   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:38.768958   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:38.768987   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:38.818879   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:38.818917   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:38.832139   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:38.832168   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:38.904761   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:38.065085   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:40.066022   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:38.264497   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:40.264905   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:38.123324   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:40.123499   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:42.623457   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:41.405046   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:41.417497   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:41.417578   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:41.450609   75464 cri.go:89] found id: ""
	I1204 21:19:41.450638   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.450649   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:41.450657   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:41.450725   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:41.486098   75464 cri.go:89] found id: ""
	I1204 21:19:41.486127   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.486135   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:41.486146   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:41.486218   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:41.520182   75464 cri.go:89] found id: ""
	I1204 21:19:41.520212   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.520225   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:41.520233   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:41.520305   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:41.551840   75464 cri.go:89] found id: ""
	I1204 21:19:41.551862   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.551870   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:41.551876   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:41.551928   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:41.584411   75464 cri.go:89] found id: ""
	I1204 21:19:41.584441   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.584448   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:41.584453   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:41.584500   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:41.614161   75464 cri.go:89] found id: ""
	I1204 21:19:41.614184   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.614199   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:41.614208   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:41.614263   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:41.645608   75464 cri.go:89] found id: ""
	I1204 21:19:41.645630   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.645637   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:41.645642   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:41.645688   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:41.676521   75464 cri.go:89] found id: ""
	I1204 21:19:41.676544   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.676552   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:41.676559   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:41.676570   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:41.726608   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:41.726633   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:41.739110   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:41.739134   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:41.810706   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:41.810727   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:41.810742   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:41.895725   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:41.895757   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:44.435032   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:44.449155   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:44.449223   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:44.479366   75464 cri.go:89] found id: ""
	I1204 21:19:44.479415   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.479424   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:44.479430   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:44.479480   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:44.520338   75464 cri.go:89] found id: ""
	I1204 21:19:44.520365   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.520374   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:44.520379   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:44.520443   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:44.554736   75464 cri.go:89] found id: ""
	I1204 21:19:44.554765   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.554773   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:44.554779   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:44.554829   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:44.592957   75464 cri.go:89] found id: ""
	I1204 21:19:44.592980   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.592987   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:44.592993   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:44.593041   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:44.626514   75464 cri.go:89] found id: ""
	I1204 21:19:44.626542   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.626551   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:44.626558   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:44.626624   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:44.667868   75464 cri.go:89] found id: ""
	I1204 21:19:44.667901   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.667913   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:44.667919   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:44.667968   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:44.703653   75464 cri.go:89] found id: ""
	I1204 21:19:44.703688   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.703699   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:44.703706   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:44.703766   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:44.737474   75464 cri.go:89] found id: ""
	I1204 21:19:44.737511   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.737523   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:44.737534   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:44.737549   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:44.787115   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:44.787146   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:44.799735   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:44.799765   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:44.861160   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:44.861179   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:44.861200   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:44.937758   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:44.937792   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:42.564575   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:44.565307   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:42.269222   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:44.764730   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:44.624230   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:47.124252   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:47.474604   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:47.486621   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:47.486680   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:47.522827   75464 cri.go:89] found id: ""
	I1204 21:19:47.522856   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.522870   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:47.522877   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:47.522938   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:47.553741   75464 cri.go:89] found id: ""
	I1204 21:19:47.553763   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.553771   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:47.553777   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:47.553837   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:47.610696   75464 cri.go:89] found id: ""
	I1204 21:19:47.610719   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.610730   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:47.610737   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:47.610803   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:47.645330   75464 cri.go:89] found id: ""
	I1204 21:19:47.645357   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.645367   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:47.645374   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:47.645431   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:47.680410   75464 cri.go:89] found id: ""
	I1204 21:19:47.680436   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.680444   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:47.680450   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:47.680499   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:47.712333   75464 cri.go:89] found id: ""
	I1204 21:19:47.712365   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.712376   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:47.712384   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:47.712442   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:47.749995   75464 cri.go:89] found id: ""
	I1204 21:19:47.750027   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.750039   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:47.750047   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:47.750110   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:47.786953   75464 cri.go:89] found id: ""
	I1204 21:19:47.786978   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.786988   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:47.786996   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:47.787008   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:47.853534   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:47.853561   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:47.853576   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:47.934237   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:47.934273   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:47.976010   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:47.976046   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:48.027502   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:48.027537   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:50.541987   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:50.555163   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:50.555246   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:50.588513   75464 cri.go:89] found id: ""
	I1204 21:19:50.588545   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.588555   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:50.588563   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:50.588618   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:50.623124   75464 cri.go:89] found id: ""
	I1204 21:19:50.623155   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.623165   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:50.623175   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:50.623240   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:50.656302   75464 cri.go:89] found id: ""
	I1204 21:19:50.656334   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.656347   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:50.656353   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:50.656421   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:50.688580   75464 cri.go:89] found id: ""
	I1204 21:19:50.688609   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.688621   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:50.688629   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:50.688700   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:50.721955   75464 cri.go:89] found id: ""
	I1204 21:19:50.721979   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.721987   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:50.721993   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:50.722047   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:50.755531   75464 cri.go:89] found id: ""
	I1204 21:19:50.755560   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.755571   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:50.755579   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:50.755637   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:50.789773   75464 cri.go:89] found id: ""
	I1204 21:19:50.789805   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.789816   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:50.789823   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:50.789890   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:50.821168   75464 cri.go:89] found id: ""
	I1204 21:19:50.821196   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.821207   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:50.821216   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:50.821230   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:50.871378   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:50.871406   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:50.883349   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:50.883387   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:50.953103   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:50.953129   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:50.953143   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:51.032209   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:51.032240   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:47.065199   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:49.065498   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:51.565332   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:47.264727   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:49.765618   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:51.765674   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:49.623785   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:52.124390   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:53.569126   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:53.582100   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:53.582167   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:53.613919   75464 cri.go:89] found id: ""
	I1204 21:19:53.613947   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.613958   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:53.613965   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:53.614031   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:53.649057   75464 cri.go:89] found id: ""
	I1204 21:19:53.649083   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.649090   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:53.649096   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:53.649153   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:53.685867   75464 cri.go:89] found id: ""
	I1204 21:19:53.685903   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.685915   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:53.685924   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:53.685983   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:53.723661   75464 cri.go:89] found id: ""
	I1204 21:19:53.723690   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.723702   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:53.723710   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:53.723774   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:53.768252   75464 cri.go:89] found id: ""
	I1204 21:19:53.768274   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.768281   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:53.768286   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:53.768334   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:53.806460   75464 cri.go:89] found id: ""
	I1204 21:19:53.806503   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.806512   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:53.806522   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:53.806577   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:53.839334   75464 cri.go:89] found id: ""
	I1204 21:19:53.839362   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.839382   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:53.839391   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:53.839452   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:53.873985   75464 cri.go:89] found id: ""
	I1204 21:19:53.874013   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.874021   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:53.874029   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:53.874046   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:53.929061   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:53.929101   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:53.943156   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:53.943183   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:54.023885   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:54.023914   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:54.023927   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:54.126662   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:54.126691   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:53.566343   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:56.064417   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:54.263908   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:56.265412   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:54.623051   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:56.623438   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:56.664579   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:56.676785   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:56.676835   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:56.715929   75464 cri.go:89] found id: ""
	I1204 21:19:56.715953   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.715964   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:56.715971   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:56.716026   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:56.747118   75464 cri.go:89] found id: ""
	I1204 21:19:56.747139   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.747146   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:56.747175   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:56.747225   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:56.777600   75464 cri.go:89] found id: ""
	I1204 21:19:56.777622   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.777628   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:56.777634   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:56.777684   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:56.808759   75464 cri.go:89] found id: ""
	I1204 21:19:56.808780   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.808787   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:56.808792   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:56.808849   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:56.838236   75464 cri.go:89] found id: ""
	I1204 21:19:56.838263   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.838274   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:56.838280   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:56.838336   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:56.866838   75464 cri.go:89] found id: ""
	I1204 21:19:56.866865   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.866875   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:56.866883   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:56.866938   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:56.897474   75464 cri.go:89] found id: ""
	I1204 21:19:56.897496   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.897504   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:56.897509   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:56.897566   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:56.929263   75464 cri.go:89] found id: ""
	I1204 21:19:56.929286   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.929294   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:56.929302   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:56.929311   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:56.980231   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:56.980256   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:56.991901   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:56.991928   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:57.068154   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:57.068172   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:57.068183   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:57.147865   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:57.147903   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:59.686011   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:59.699101   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:59.699156   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:59.742522   75464 cri.go:89] found id: ""
	I1204 21:19:59.742554   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.742565   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:59.742573   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:59.742637   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:59.785313   75464 cri.go:89] found id: ""
	I1204 21:19:59.785345   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.785357   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:59.785364   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:59.785423   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:59.821473   75464 cri.go:89] found id: ""
	I1204 21:19:59.821508   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.821520   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:59.821527   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:59.821585   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:59.857990   75464 cri.go:89] found id: ""
	I1204 21:19:59.858012   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.858020   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:59.858025   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:59.858077   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:59.895434   75464 cri.go:89] found id: ""
	I1204 21:19:59.895465   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.895478   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:59.895486   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:59.895546   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:59.929076   75464 cri.go:89] found id: ""
	I1204 21:19:59.929099   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.929110   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:59.929118   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:59.929180   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:59.962121   75464 cri.go:89] found id: ""
	I1204 21:19:59.962161   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.962173   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:59.962181   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:59.962244   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:59.999074   75464 cri.go:89] found id: ""
	I1204 21:19:59.999103   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.999115   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:59.999126   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:59.999138   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:00.081841   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:00.081888   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:00.120537   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:00.120576   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:00.171472   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:00.171506   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:00.184739   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:00.184770   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:00.256589   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:58.563943   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:00.564520   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:58.764786   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:00.765286   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:59.122868   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:01.624133   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:02.757225   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:02.771088   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:02.771156   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:02.808742   75464 cri.go:89] found id: ""
	I1204 21:20:02.808770   75464 logs.go:282] 0 containers: []
	W1204 21:20:02.808781   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:02.808788   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:02.808851   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:02.846517   75464 cri.go:89] found id: ""
	I1204 21:20:02.846539   75464 logs.go:282] 0 containers: []
	W1204 21:20:02.846548   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:02.846553   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:02.846600   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:02.879903   75464 cri.go:89] found id: ""
	I1204 21:20:02.879934   75464 logs.go:282] 0 containers: []
	W1204 21:20:02.879943   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:02.879948   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:02.879995   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:02.910040   75464 cri.go:89] found id: ""
	I1204 21:20:02.910072   75464 logs.go:282] 0 containers: []
	W1204 21:20:02.910083   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:02.910091   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:02.910153   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:02.941525   75464 cri.go:89] found id: ""
	I1204 21:20:02.941552   75464 logs.go:282] 0 containers: []
	W1204 21:20:02.941562   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:02.941570   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:02.941637   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:02.977450   75464 cri.go:89] found id: ""
	I1204 21:20:02.977476   75464 logs.go:282] 0 containers: []
	W1204 21:20:02.977484   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:02.977490   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:02.977547   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:03.007386   75464 cri.go:89] found id: ""
	I1204 21:20:03.007422   75464 logs.go:282] 0 containers: []
	W1204 21:20:03.007433   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:03.007448   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:03.007508   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:03.040015   75464 cri.go:89] found id: ""
	I1204 21:20:03.040038   75464 logs.go:282] 0 containers: []
	W1204 21:20:03.040049   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:03.040058   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:03.040068   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:03.092371   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:03.092397   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:03.104747   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:03.104765   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:03.167760   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:03.167784   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:03.167799   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:03.242972   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:03.243010   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:05.783874   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:05.796340   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:05.796401   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:05.829068   75464 cri.go:89] found id: ""
	I1204 21:20:05.829094   75464 logs.go:282] 0 containers: []
	W1204 21:20:05.829105   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:05.829112   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:05.829169   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:05.863998   75464 cri.go:89] found id: ""
	I1204 21:20:05.864027   75464 logs.go:282] 0 containers: []
	W1204 21:20:05.864036   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:05.864042   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:05.864096   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:05.899645   75464 cri.go:89] found id: ""
	I1204 21:20:05.899669   75464 logs.go:282] 0 containers: []
	W1204 21:20:05.899677   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:05.899682   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:05.899727   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:05.935815   75464 cri.go:89] found id: ""
	I1204 21:20:05.935840   75464 logs.go:282] 0 containers: []
	W1204 21:20:05.935848   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:05.935854   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:05.935901   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:05.972284   75464 cri.go:89] found id: ""
	I1204 21:20:05.972308   75464 logs.go:282] 0 containers: []
	W1204 21:20:05.972321   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:05.972326   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:05.972372   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:06.007217   75464 cri.go:89] found id: ""
	I1204 21:20:06.007261   75464 logs.go:282] 0 containers: []
	W1204 21:20:06.007273   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:06.007280   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:06.007338   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:06.042158   75464 cri.go:89] found id: ""
	I1204 21:20:06.042190   75464 logs.go:282] 0 containers: []
	W1204 21:20:06.042201   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:06.042208   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:06.042280   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:06.075199   75464 cri.go:89] found id: ""
	I1204 21:20:06.075223   75464 logs.go:282] 0 containers: []
	W1204 21:20:06.075230   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:06.075237   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:06.075248   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:06.148255   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:06.148286   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:06.191454   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:06.191478   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:06.243952   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:06.243979   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:06.256355   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:06.256381   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 21:20:02.565050   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:05.064733   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:02.765643   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:05.263861   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:04.123109   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:06.123349   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	W1204 21:20:06.323958   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:08.824582   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:08.836724   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:08.836793   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:08.868526   75464 cri.go:89] found id: ""
	I1204 21:20:08.868596   75464 logs.go:282] 0 containers: []
	W1204 21:20:08.868611   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:08.868619   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:08.868679   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:08.899088   75464 cri.go:89] found id: ""
	I1204 21:20:08.899114   75464 logs.go:282] 0 containers: []
	W1204 21:20:08.899123   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:08.899128   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:08.899181   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:08.929116   75464 cri.go:89] found id: ""
	I1204 21:20:08.929145   75464 logs.go:282] 0 containers: []
	W1204 21:20:08.929156   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:08.929164   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:08.929229   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:08.970502   75464 cri.go:89] found id: ""
	I1204 21:20:08.970528   75464 logs.go:282] 0 containers: []
	W1204 21:20:08.970539   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:08.970547   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:08.970610   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:09.000619   75464 cri.go:89] found id: ""
	I1204 21:20:09.000644   75464 logs.go:282] 0 containers: []
	W1204 21:20:09.000652   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:09.000658   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:09.000715   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:09.031597   75464 cri.go:89] found id: ""
	I1204 21:20:09.031624   75464 logs.go:282] 0 containers: []
	W1204 21:20:09.031634   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:09.031641   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:09.031700   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:09.063615   75464 cri.go:89] found id: ""
	I1204 21:20:09.063639   75464 logs.go:282] 0 containers: []
	W1204 21:20:09.063646   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:09.063651   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:09.063708   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:09.096291   75464 cri.go:89] found id: ""
	I1204 21:20:09.096322   75464 logs.go:282] 0 containers: []
	W1204 21:20:09.096333   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:09.096343   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:09.096357   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:09.169976   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:09.170009   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:09.206514   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:09.206537   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:09.257587   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:09.257614   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:09.269939   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:09.269962   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:09.334350   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:07.563758   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:09.564014   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:11.564441   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:07.264169   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:09.265385   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:11.265607   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:08.622813   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:10.624747   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:11.835270   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:11.848192   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:11.848249   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:11.880377   75464 cri.go:89] found id: ""
	I1204 21:20:11.880409   75464 logs.go:282] 0 containers: []
	W1204 21:20:11.880422   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:11.880429   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:11.880495   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:11.914800   75464 cri.go:89] found id: ""
	I1204 21:20:11.914832   75464 logs.go:282] 0 containers: []
	W1204 21:20:11.914844   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:11.914852   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:11.914918   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:11.950520   75464 cri.go:89] found id: ""
	I1204 21:20:11.950545   75464 logs.go:282] 0 containers: []
	W1204 21:20:11.950553   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:11.950559   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:11.950611   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:11.983909   75464 cri.go:89] found id: ""
	I1204 21:20:11.983934   75464 logs.go:282] 0 containers: []
	W1204 21:20:11.983944   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:11.983953   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:11.984017   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:12.020457   75464 cri.go:89] found id: ""
	I1204 21:20:12.020488   75464 logs.go:282] 0 containers: []
	W1204 21:20:12.020505   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:12.020513   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:12.020581   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:12.054630   75464 cri.go:89] found id: ""
	I1204 21:20:12.054663   75464 logs.go:282] 0 containers: []
	W1204 21:20:12.054674   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:12.054682   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:12.054747   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:12.089172   75464 cri.go:89] found id: ""
	I1204 21:20:12.089195   75464 logs.go:282] 0 containers: []
	W1204 21:20:12.089202   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:12.089208   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:12.089267   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:12.123979   75464 cri.go:89] found id: ""
	I1204 21:20:12.124009   75464 logs.go:282] 0 containers: []
	W1204 21:20:12.124020   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:12.124039   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:12.124054   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:12.191368   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:12.191414   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:12.191432   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:12.272985   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:12.273029   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:12.310427   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:12.310459   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:12.363183   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:12.363225   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:14.876599   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:14.889708   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:14.889784   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:14.922789   75464 cri.go:89] found id: ""
	I1204 21:20:14.922819   75464 logs.go:282] 0 containers: []
	W1204 21:20:14.922829   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:14.922835   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:14.922882   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:14.953998   75464 cri.go:89] found id: ""
	I1204 21:20:14.954026   75464 logs.go:282] 0 containers: []
	W1204 21:20:14.954038   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:14.954044   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:14.954108   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:14.983608   75464 cri.go:89] found id: ""
	I1204 21:20:14.983635   75464 logs.go:282] 0 containers: []
	W1204 21:20:14.983646   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:14.983653   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:14.983707   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:15.016982   75464 cri.go:89] found id: ""
	I1204 21:20:15.017007   75464 logs.go:282] 0 containers: []
	W1204 21:20:15.017015   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:15.017020   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:15.017070   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:15.051642   75464 cri.go:89] found id: ""
	I1204 21:20:15.051672   75464 logs.go:282] 0 containers: []
	W1204 21:20:15.051683   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:15.051690   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:15.051792   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:15.084250   75464 cri.go:89] found id: ""
	I1204 21:20:15.084279   75464 logs.go:282] 0 containers: []
	W1204 21:20:15.084289   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:15.084297   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:15.084364   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:15.119910   75464 cri.go:89] found id: ""
	I1204 21:20:15.119943   75464 logs.go:282] 0 containers: []
	W1204 21:20:15.119953   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:15.119965   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:15.120025   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:15.154270   75464 cri.go:89] found id: ""
	I1204 21:20:15.154301   75464 logs.go:282] 0 containers: []
	W1204 21:20:15.154312   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:15.154322   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:15.154336   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:15.205075   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:15.205109   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:15.218104   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:15.218130   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:15.285162   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:15.285187   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:15.285209   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:15.367003   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:15.367040   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:13.566393   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:16.069318   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:13.266167   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:15.763670   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:13.122812   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:15.125830   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:17.623065   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:17.909835   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:17.921899   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:17.921954   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:17.954678   75464 cri.go:89] found id: ""
	I1204 21:20:17.954708   75464 logs.go:282] 0 containers: []
	W1204 21:20:17.954717   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:17.954723   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:17.954776   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:17.984522   75464 cri.go:89] found id: ""
	I1204 21:20:17.984545   75464 logs.go:282] 0 containers: []
	W1204 21:20:17.984555   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:17.984560   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:17.984607   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:18.016731   75464 cri.go:89] found id: ""
	I1204 21:20:18.016754   75464 logs.go:282] 0 containers: []
	W1204 21:20:18.016763   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:18.016768   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:18.016820   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:18.050104   75464 cri.go:89] found id: ""
	I1204 21:20:18.050136   75464 logs.go:282] 0 containers: []
	W1204 21:20:18.050147   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:18.050155   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:18.050221   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:18.083944   75464 cri.go:89] found id: ""
	I1204 21:20:18.083984   75464 logs.go:282] 0 containers: []
	W1204 21:20:18.084006   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:18.084015   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:18.084084   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:18.116170   75464 cri.go:89] found id: ""
	I1204 21:20:18.116203   75464 logs.go:282] 0 containers: []
	W1204 21:20:18.116215   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:18.116223   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:18.116292   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:18.147348   75464 cri.go:89] found id: ""
	I1204 21:20:18.147395   75464 logs.go:282] 0 containers: []
	W1204 21:20:18.147407   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:18.147415   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:18.147473   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:18.177782   75464 cri.go:89] found id: ""
	I1204 21:20:18.177805   75464 logs.go:282] 0 containers: []
	W1204 21:20:18.177816   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:18.177827   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:18.177840   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:18.227464   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:18.227494   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:18.239741   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:18.239772   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:18.310732   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:18.310752   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:18.310763   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:18.389626   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:18.389659   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:20.926749   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:20.939710   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:20.939797   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:20.972464   75464 cri.go:89] found id: ""
	I1204 21:20:20.972488   75464 logs.go:282] 0 containers: []
	W1204 21:20:20.972497   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:20.972506   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:20.972568   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:21.010568   75464 cri.go:89] found id: ""
	I1204 21:20:21.010597   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.010610   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:21.010618   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:21.010678   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:21.046145   75464 cri.go:89] found id: ""
	I1204 21:20:21.046172   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.046183   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:21.046191   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:21.046263   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:21.078460   75464 cri.go:89] found id: ""
	I1204 21:20:21.078488   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.078496   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:21.078502   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:21.078569   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:21.117274   75464 cri.go:89] found id: ""
	I1204 21:20:21.117303   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.117314   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:21.117320   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:21.117366   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:21.152375   75464 cri.go:89] found id: ""
	I1204 21:20:21.152408   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.152419   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:21.152427   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:21.152496   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:21.185933   75464 cri.go:89] found id: ""
	I1204 21:20:21.185966   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.185975   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:21.185981   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:21.186042   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:21.219289   75464 cri.go:89] found id: ""
	I1204 21:20:21.219325   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.219338   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:21.219350   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:21.219363   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:21.232385   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:21.232415   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:21.298766   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:21.298793   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:21.298808   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:18.565873   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:21.065819   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:17.763871   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:19.765846   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:19.623518   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:21.624117   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:21.376741   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:21.376777   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:21.414649   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:21.414682   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:23.963472   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:23.976644   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:23.976709   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:24.010598   75464 cri.go:89] found id: ""
	I1204 21:20:24.010626   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.010637   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:24.010645   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:24.010703   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:24.045479   75464 cri.go:89] found id: ""
	I1204 21:20:24.045509   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.045529   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:24.045537   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:24.045599   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:24.081181   75464 cri.go:89] found id: ""
	I1204 21:20:24.081215   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.081235   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:24.081243   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:24.081309   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:24.113823   75464 cri.go:89] found id: ""
	I1204 21:20:24.113847   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.113857   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:24.113864   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:24.113927   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:24.149178   75464 cri.go:89] found id: ""
	I1204 21:20:24.149205   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.149216   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:24.149224   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:24.149289   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:24.183304   75464 cri.go:89] found id: ""
	I1204 21:20:24.183339   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.183350   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:24.183359   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:24.183448   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:24.214999   75464 cri.go:89] found id: ""
	I1204 21:20:24.215023   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.215034   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:24.215042   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:24.215107   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:24.247278   75464 cri.go:89] found id: ""
	I1204 21:20:24.247312   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.247323   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:24.247354   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:24.247387   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:24.302879   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:24.302913   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:24.315674   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:24.315697   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:24.382394   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:24.382422   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:24.382436   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:24.462763   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:24.462796   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:23.564202   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:25.564917   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:22.265442   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:24.764901   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:24.124035   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:26.124661   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:27.002577   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:27.015256   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:27.015324   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:27.049626   75464 cri.go:89] found id: ""
	I1204 21:20:27.049657   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.049669   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:27.049677   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:27.049733   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:27.085312   75464 cri.go:89] found id: ""
	I1204 21:20:27.085341   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.085354   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:27.085362   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:27.085417   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:27.119898   75464 cri.go:89] found id: ""
	I1204 21:20:27.119928   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.119939   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:27.119947   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:27.120010   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:27.153605   75464 cri.go:89] found id: ""
	I1204 21:20:27.153642   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.153651   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:27.153657   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:27.153724   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:27.191002   75464 cri.go:89] found id: ""
	I1204 21:20:27.191027   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.191038   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:27.191045   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:27.191107   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:27.226469   75464 cri.go:89] found id: ""
	I1204 21:20:27.226495   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.226506   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:27.226515   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:27.226579   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:27.258586   75464 cri.go:89] found id: ""
	I1204 21:20:27.258613   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.258623   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:27.258630   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:27.258694   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:27.293119   75464 cri.go:89] found id: ""
	I1204 21:20:27.293156   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.293165   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:27.293174   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:27.293187   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:27.346870   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:27.346903   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:27.360448   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:27.360487   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:27.431571   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:27.431597   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:27.431613   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:27.509664   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:27.509698   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:30.049120   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:30.063294   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:30.063360   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:30.097334   75464 cri.go:89] found id: ""
	I1204 21:20:30.097364   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.097376   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:30.097383   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:30.097457   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:30.132734   75464 cri.go:89] found id: ""
	I1204 21:20:30.132757   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.132765   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:30.132771   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:30.132820   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:30.166539   75464 cri.go:89] found id: ""
	I1204 21:20:30.166565   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.166573   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:30.166579   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:30.166637   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:30.201953   75464 cri.go:89] found id: ""
	I1204 21:20:30.201993   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.202007   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:30.202016   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:30.202089   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:30.239062   75464 cri.go:89] found id: ""
	I1204 21:20:30.239102   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.239116   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:30.239132   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:30.239200   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:30.282344   75464 cri.go:89] found id: ""
	I1204 21:20:30.282374   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.282383   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:30.282389   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:30.282439   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:30.316615   75464 cri.go:89] found id: ""
	I1204 21:20:30.316642   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.316653   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:30.316661   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:30.316764   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:30.352333   75464 cri.go:89] found id: ""
	I1204 21:20:30.352358   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.352368   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:30.352380   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:30.352393   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:30.406022   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:30.406058   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:30.419790   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:30.419819   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:30.485693   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:30.485717   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:30.485738   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:30.569313   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:30.569357   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:27.565367   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:30.064552   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:27.266699   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:29.765109   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:28.623821   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:30.628815   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:33.107542   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:33.121934   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:33.122007   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:33.154672   75464 cri.go:89] found id: ""
	I1204 21:20:33.154698   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.154709   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:33.154717   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:33.154784   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:33.189186   75464 cri.go:89] found id: ""
	I1204 21:20:33.189218   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.189229   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:33.189236   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:33.189291   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:33.217618   75464 cri.go:89] found id: ""
	I1204 21:20:33.217637   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.217651   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:33.217657   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:33.217704   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:33.246895   75464 cri.go:89] found id: ""
	I1204 21:20:33.246916   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.246923   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:33.246928   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:33.246970   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:33.278698   75464 cri.go:89] found id: ""
	I1204 21:20:33.278718   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.278725   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:33.278731   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:33.278771   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:33.307671   75464 cri.go:89] found id: ""
	I1204 21:20:33.307703   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.307721   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:33.307729   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:33.307791   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:33.342929   75464 cri.go:89] found id: ""
	I1204 21:20:33.342950   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.342958   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:33.342963   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:33.343009   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:33.374686   75464 cri.go:89] found id: ""
	I1204 21:20:33.374718   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.374730   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:33.374741   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:33.374758   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:33.424117   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:33.424153   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:33.437691   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:33.437724   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:33.517172   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:33.517196   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:33.517209   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:33.597299   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:33.597341   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:36.137849   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:36.152485   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:36.152544   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:36.186867   75464 cri.go:89] found id: ""
	I1204 21:20:36.186895   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.186906   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:36.186920   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:36.186983   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:36.220628   75464 cri.go:89] found id: ""
	I1204 21:20:36.220658   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.220671   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:36.220679   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:36.220735   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:36.254264   75464 cri.go:89] found id: ""
	I1204 21:20:36.254298   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.254310   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:36.254318   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:36.254384   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:36.290929   75464 cri.go:89] found id: ""
	I1204 21:20:36.290956   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.290964   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:36.290970   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:36.291016   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:32.566714   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:35.064488   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:32.266257   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:34.764171   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:36.764331   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:33.123727   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:35.623512   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:37.623921   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:36.326967   75464 cri.go:89] found id: ""
	I1204 21:20:36.326991   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.326999   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:36.327004   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:36.327072   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:36.366892   75464 cri.go:89] found id: ""
	I1204 21:20:36.366916   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.366924   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:36.366930   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:36.366990   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:36.405671   75464 cri.go:89] found id: ""
	I1204 21:20:36.405696   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.405703   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:36.405709   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:36.405762   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:36.439591   75464 cri.go:89] found id: ""
	I1204 21:20:36.439621   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.439628   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:36.439637   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:36.439650   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:36.505710   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:36.505737   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:36.505751   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:36.586111   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:36.586155   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:36.628086   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:36.628121   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:36.680152   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:36.680183   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:39.194223   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:39.207153   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:39.207230   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:39.240867   75464 cri.go:89] found id: ""
	I1204 21:20:39.240895   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.240903   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:39.240908   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:39.240959   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:39.274704   75464 cri.go:89] found id: ""
	I1204 21:20:39.274735   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.274742   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:39.274748   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:39.274800   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:39.307559   75464 cri.go:89] found id: ""
	I1204 21:20:39.307591   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.307601   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:39.307609   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:39.307671   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:39.355489   75464 cri.go:89] found id: ""
	I1204 21:20:39.355524   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.355536   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:39.355543   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:39.355610   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:39.395885   75464 cri.go:89] found id: ""
	I1204 21:20:39.395909   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.395917   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:39.395923   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:39.395976   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:39.428817   75464 cri.go:89] found id: ""
	I1204 21:20:39.428848   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.428858   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:39.428864   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:39.428929   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:39.463827   75464 cri.go:89] found id: ""
	I1204 21:20:39.463857   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.463870   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:39.463877   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:39.463926   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:39.496677   75464 cri.go:89] found id: ""
	I1204 21:20:39.496710   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.496721   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:39.496732   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:39.496755   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:39.533759   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:39.533787   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:39.586373   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:39.586409   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:39.599533   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:39.599568   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:39.670139   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:39.670164   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:39.670176   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:37.065197   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:39.065863   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:41.566053   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:38.765226   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:40.765268   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:39.624452   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:42.123452   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:42.245896   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:42.260604   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:42.260676   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:42.294051   75464 cri.go:89] found id: ""
	I1204 21:20:42.294078   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.294085   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:42.294094   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:42.294160   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:42.327361   75464 cri.go:89] found id: ""
	I1204 21:20:42.327408   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.327421   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:42.327428   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:42.327482   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:42.358701   75464 cri.go:89] found id: ""
	I1204 21:20:42.358731   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.358740   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:42.358746   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:42.358795   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:42.389837   75464 cri.go:89] found id: ""
	I1204 21:20:42.389863   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.389871   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:42.389877   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:42.389926   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:42.430495   75464 cri.go:89] found id: ""
	I1204 21:20:42.430522   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.430534   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:42.430541   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:42.430590   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:42.462918   75464 cri.go:89] found id: ""
	I1204 21:20:42.462949   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.462958   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:42.462963   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:42.463031   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:42.500726   75464 cri.go:89] found id: ""
	I1204 21:20:42.500754   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.500769   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:42.500776   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:42.500842   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:42.538601   75464 cri.go:89] found id: ""
	I1204 21:20:42.538628   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.538635   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:42.538644   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:42.538655   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:42.591308   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:42.591344   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:42.604221   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:42.604244   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:42.679954   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:42.679982   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:42.679999   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:42.768383   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:42.768422   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:45.312054   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:45.325206   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:45.325304   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:45.358781   75464 cri.go:89] found id: ""
	I1204 21:20:45.358809   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.358817   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:45.358824   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:45.358874   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:45.391920   75464 cri.go:89] found id: ""
	I1204 21:20:45.391945   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.391957   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:45.391964   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:45.392030   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:45.426546   75464 cri.go:89] found id: ""
	I1204 21:20:45.426570   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.426578   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:45.426583   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:45.426633   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:45.459432   75464 cri.go:89] found id: ""
	I1204 21:20:45.459462   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.459472   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:45.459479   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:45.459547   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:45.494217   75464 cri.go:89] found id: ""
	I1204 21:20:45.494256   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.494268   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:45.494276   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:45.494352   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:45.531417   75464 cri.go:89] found id: ""
	I1204 21:20:45.531446   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.531458   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:45.531473   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:45.531547   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:45.564973   75464 cri.go:89] found id: ""
	I1204 21:20:45.565005   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.565016   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:45.565024   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:45.565088   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:45.601285   75464 cri.go:89] found id: ""
	I1204 21:20:45.601315   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.601324   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:45.601333   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:45.601344   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:45.656229   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:45.656267   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:45.669851   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:45.669876   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:45.740674   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:45.740704   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:45.740720   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:45.845612   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:45.845657   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:44.065401   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:46.565091   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:42.765303   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:44.765539   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:44.123533   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:46.123595   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:48.389508   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:48.401989   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:48.402052   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:48.438477   75464 cri.go:89] found id: ""
	I1204 21:20:48.438502   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.438514   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:48.438521   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:48.438579   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:48.476096   75464 cri.go:89] found id: ""
	I1204 21:20:48.476129   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.476142   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:48.476151   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:48.476219   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:48.514085   75464 cri.go:89] found id: ""
	I1204 21:20:48.514112   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.514124   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:48.514132   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:48.514208   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:48.551360   75464 cri.go:89] found id: ""
	I1204 21:20:48.551409   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.551420   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:48.551428   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:48.551500   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:48.588424   75464 cri.go:89] found id: ""
	I1204 21:20:48.588463   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.588475   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:48.588483   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:48.588552   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:48.622842   75464 cri.go:89] found id: ""
	I1204 21:20:48.622868   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.622876   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:48.622881   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:48.622942   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:48.665525   75464 cri.go:89] found id: ""
	I1204 21:20:48.665575   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.665585   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:48.665592   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:48.665659   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:48.706554   75464 cri.go:89] found id: ""
	I1204 21:20:48.706581   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.706591   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:48.706602   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:48.706617   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:48.757835   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:48.757870   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:48.771967   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:48.772003   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:48.843093   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:48.843123   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:48.843140   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:48.919637   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:48.919681   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:49.064435   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:51.565505   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:47.265612   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:49.764186   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:51.766867   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:48.637538   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:51.123581   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:51.457865   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:51.472751   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:51.472827   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:51.514777   75464 cri.go:89] found id: ""
	I1204 21:20:51.514814   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.514827   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:51.514835   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:51.514904   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:51.563932   75464 cri.go:89] found id: ""
	I1204 21:20:51.563957   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.563968   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:51.563976   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:51.564042   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:51.606714   75464 cri.go:89] found id: ""
	I1204 21:20:51.606752   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.606765   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:51.606773   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:51.606837   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:51.641391   75464 cri.go:89] found id: ""
	I1204 21:20:51.641427   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.641438   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:51.641446   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:51.641502   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:51.674971   75464 cri.go:89] found id: ""
	I1204 21:20:51.675000   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.675011   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:51.675019   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:51.675082   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:51.709211   75464 cri.go:89] found id: ""
	I1204 21:20:51.709242   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.709250   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:51.709257   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:51.709306   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:51.742425   75464 cri.go:89] found id: ""
	I1204 21:20:51.742460   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.742472   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:51.742480   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:51.742534   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:51.782292   75464 cri.go:89] found id: ""
	I1204 21:20:51.782339   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.782351   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:51.782361   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:51.782380   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:51.833009   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:51.833040   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:51.846862   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:51.846905   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:51.911100   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:51.911129   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:51.911147   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:51.987841   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:51.987879   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:54.527097   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:54.541248   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:54.541344   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:54.582747   75464 cri.go:89] found id: ""
	I1204 21:20:54.582772   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.582780   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:54.582785   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:54.582844   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:54.615891   75464 cri.go:89] found id: ""
	I1204 21:20:54.615914   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.615922   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:54.615927   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:54.615983   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:54.648994   75464 cri.go:89] found id: ""
	I1204 21:20:54.649021   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.649031   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:54.649037   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:54.649095   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:54.683000   75464 cri.go:89] found id: ""
	I1204 21:20:54.683026   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.683034   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:54.683040   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:54.683100   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:54.715182   75464 cri.go:89] found id: ""
	I1204 21:20:54.715211   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.715221   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:54.715228   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:54.715290   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:54.752620   75464 cri.go:89] found id: ""
	I1204 21:20:54.752655   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.752667   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:54.752674   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:54.752740   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:54.790879   75464 cri.go:89] found id: ""
	I1204 21:20:54.790907   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.790919   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:54.790926   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:54.790994   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:54.824340   75464 cri.go:89] found id: ""
	I1204 21:20:54.824380   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.824393   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:54.824405   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:54.824428   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:54.874330   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:54.874365   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:54.887537   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:54.887565   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:54.958675   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:54.958697   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:54.958709   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:55.036909   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:55.036946   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:54.064786   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:56.066189   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:54.264177   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:56.264283   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:53.622703   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:55.623495   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:57.625197   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:57.576603   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:57.590013   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:57.590080   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:57.624654   75464 cri.go:89] found id: ""
	I1204 21:20:57.624690   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.624701   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:57.624710   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:57.624774   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:57.660404   75464 cri.go:89] found id: ""
	I1204 21:20:57.660445   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.660457   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:57.660464   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:57.660528   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:57.693444   75464 cri.go:89] found id: ""
	I1204 21:20:57.693472   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.693483   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:57.693491   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:57.693558   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:57.729361   75464 cri.go:89] found id: ""
	I1204 21:20:57.729387   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.729397   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:57.729403   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:57.729454   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:57.760508   75464 cri.go:89] found id: ""
	I1204 21:20:57.760535   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.760546   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:57.760554   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:57.760608   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:57.794110   75464 cri.go:89] found id: ""
	I1204 21:20:57.794133   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.794142   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:57.794151   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:57.794214   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:57.827907   75464 cri.go:89] found id: ""
	I1204 21:20:57.827936   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.827947   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:57.827954   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:57.828014   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:57.860714   75464 cri.go:89] found id: ""
	I1204 21:20:57.860742   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.860753   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:57.860763   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:57.860778   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:57.926898   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:57.926926   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:57.926943   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:58.000298   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:58.000328   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:58.035675   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:58.035708   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:58.086663   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:58.086698   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:21:00.600646   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:21:00.613485   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:21:00.613550   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:21:00.646324   75464 cri.go:89] found id: ""
	I1204 21:21:00.646349   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.646357   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:21:00.646362   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:21:00.646417   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:21:00.675779   75464 cri.go:89] found id: ""
	I1204 21:21:00.675802   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.675814   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:21:00.675821   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:21:00.675874   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:21:00.706244   75464 cri.go:89] found id: ""
	I1204 21:21:00.706264   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.706272   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:21:00.706278   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:21:00.706334   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:21:00.738086   75464 cri.go:89] found id: ""
	I1204 21:21:00.738114   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.738126   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:21:00.738134   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:21:00.738195   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:21:00.768646   75464 cri.go:89] found id: ""
	I1204 21:21:00.768671   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.768682   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:21:00.768690   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:21:00.768750   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:21:00.797939   75464 cri.go:89] found id: ""
	I1204 21:21:00.797960   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.797968   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:21:00.797973   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:21:00.798016   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:21:00.831928   75464 cri.go:89] found id: ""
	I1204 21:21:00.831959   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.831969   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:21:00.831977   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:21:00.832042   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:21:00.868462   75464 cri.go:89] found id: ""
	I1204 21:21:00.868489   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.868498   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:21:00.868506   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:21:00.868518   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:21:00.881721   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:21:00.881745   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:21:00.949263   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:21:00.949290   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:21:00.949307   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:21:01.031940   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:21:01.031990   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:21:01.070545   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:21:01.070577   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:58.565420   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:59.064856   75137 pod_ready.go:82] duration metric: took 4m0.006397932s for pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace to be "Ready" ...
	E1204 21:20:59.064881   75137 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1204 21:20:59.064889   75137 pod_ready.go:39] duration metric: took 4m8.671233417s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:20:59.064904   75137 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:20:59.064929   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:59.064974   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:59.119318   75137 cri.go:89] found id: "8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78"
	I1204 21:20:59.119340   75137 cri.go:89] found id: ""
	I1204 21:20:59.119347   75137 logs.go:282] 1 containers: [8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78]
	I1204 21:20:59.119421   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.125106   75137 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:59.125184   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:59.159498   75137 cri.go:89] found id: "e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98"
	I1204 21:20:59.159519   75137 cri.go:89] found id: ""
	I1204 21:20:59.159526   75137 logs.go:282] 1 containers: [e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98]
	I1204 21:20:59.159572   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.163228   75137 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:59.163302   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:59.198005   75137 cri.go:89] found id: "58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78"
	I1204 21:20:59.198031   75137 cri.go:89] found id: ""
	I1204 21:20:59.198039   75137 logs.go:282] 1 containers: [58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78]
	I1204 21:20:59.198083   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.202213   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:59.202280   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:59.236775   75137 cri.go:89] found id: "e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df"
	I1204 21:20:59.236796   75137 cri.go:89] found id: ""
	I1204 21:20:59.236803   75137 logs.go:282] 1 containers: [e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df]
	I1204 21:20:59.236852   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.241518   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:59.241600   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:59.279894   75137 cri.go:89] found id: "a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5"
	I1204 21:20:59.279924   75137 cri.go:89] found id: ""
	I1204 21:20:59.279934   75137 logs.go:282] 1 containers: [a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5]
	I1204 21:20:59.279990   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.284325   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:59.284394   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:59.328082   75137 cri.go:89] found id: "982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9"
	I1204 21:20:59.328107   75137 cri.go:89] found id: ""
	I1204 21:20:59.328117   75137 logs.go:282] 1 containers: [982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9]
	I1204 21:20:59.328178   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.332337   75137 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:59.332415   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:59.368110   75137 cri.go:89] found id: ""
	I1204 21:20:59.368135   75137 logs.go:282] 0 containers: []
	W1204 21:20:59.368144   75137 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:59.368149   75137 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1204 21:20:59.368193   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1204 21:20:59.404941   75137 cri.go:89] found id: "07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317"
	I1204 21:20:59.404966   75137 cri.go:89] found id: "05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4"
	I1204 21:20:59.404972   75137 cri.go:89] found id: ""
	I1204 21:20:59.404980   75137 logs.go:282] 2 containers: [07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317 05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4]
	I1204 21:20:59.405041   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.409016   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.412752   75137 logs.go:123] Gathering logs for etcd [e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98] ...
	I1204 21:20:59.412783   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98"
	I1204 21:20:59.463143   75137 logs.go:123] Gathering logs for kube-scheduler [e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df] ...
	I1204 21:20:59.463178   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df"
	I1204 21:20:59.498782   75137 logs.go:123] Gathering logs for kube-controller-manager [982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9] ...
	I1204 21:20:59.498812   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9"
	I1204 21:20:59.555339   75137 logs.go:123] Gathering logs for storage-provisioner [07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317] ...
	I1204 21:20:59.555393   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317"
	I1204 21:20:59.591238   75137 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:59.591267   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:21:00.084121   75137 logs.go:123] Gathering logs for kubelet ...
	I1204 21:21:00.084161   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:21:00.154228   75137 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:21:00.154265   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 21:21:00.284768   75137 logs.go:123] Gathering logs for kube-apiserver [8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78] ...
	I1204 21:21:00.284802   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78"
	I1204 21:21:00.328421   75137 logs.go:123] Gathering logs for storage-provisioner [05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4] ...
	I1204 21:21:00.328452   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4"
	I1204 21:21:00.363327   75137 logs.go:123] Gathering logs for container status ...
	I1204 21:21:00.363352   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:21:00.402072   75137 logs.go:123] Gathering logs for dmesg ...
	I1204 21:21:00.402101   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:21:00.414448   75137 logs.go:123] Gathering logs for coredns [58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78] ...
	I1204 21:21:00.414471   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78"
	I1204 21:21:00.446721   75137 logs.go:123] Gathering logs for kube-proxy [a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5] ...
	I1204 21:21:00.446747   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5"
	I1204 21:20:58.265181   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:00.266303   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:00.124482   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:02.623096   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:03.620358   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:21:03.634415   75464 kubeadm.go:597] duration metric: took 4m4.247057397s to restartPrimaryControlPlane
	W1204 21:21:03.634499   75464 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1204 21:21:03.634530   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1204 21:21:02.985608   75137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:21:03.002352   75137 api_server.go:72] duration metric: took 4m20.333935611s to wait for apiserver process to appear ...
	I1204 21:21:03.002379   75137 api_server.go:88] waiting for apiserver healthz status ...
	I1204 21:21:03.002420   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:21:03.002475   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:21:03.043343   75137 cri.go:89] found id: "8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78"
	I1204 21:21:03.043387   75137 cri.go:89] found id: ""
	I1204 21:21:03.043398   75137 logs.go:282] 1 containers: [8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78]
	I1204 21:21:03.043451   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.047523   75137 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:21:03.047591   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:21:03.085843   75137 cri.go:89] found id: "e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98"
	I1204 21:21:03.085868   75137 cri.go:89] found id: ""
	I1204 21:21:03.085878   75137 logs.go:282] 1 containers: [e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98]
	I1204 21:21:03.085936   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.089957   75137 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:21:03.090008   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:21:03.124571   75137 cri.go:89] found id: "58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78"
	I1204 21:21:03.124590   75137 cri.go:89] found id: ""
	I1204 21:21:03.124597   75137 logs.go:282] 1 containers: [58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78]
	I1204 21:21:03.124633   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.128183   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:21:03.128241   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:21:03.159912   75137 cri.go:89] found id: "e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df"
	I1204 21:21:03.159935   75137 cri.go:89] found id: ""
	I1204 21:21:03.159942   75137 logs.go:282] 1 containers: [e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df]
	I1204 21:21:03.159991   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.163882   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:21:03.163934   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:21:03.202966   75137 cri.go:89] found id: "a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5"
	I1204 21:21:03.202983   75137 cri.go:89] found id: ""
	I1204 21:21:03.202990   75137 logs.go:282] 1 containers: [a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5]
	I1204 21:21:03.203028   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.206601   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:21:03.206656   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:21:03.239436   75137 cri.go:89] found id: "982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9"
	I1204 21:21:03.239461   75137 cri.go:89] found id: ""
	I1204 21:21:03.239471   75137 logs.go:282] 1 containers: [982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9]
	I1204 21:21:03.239522   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.243345   75137 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:21:03.243409   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:21:03.284225   75137 cri.go:89] found id: ""
	I1204 21:21:03.284260   75137 logs.go:282] 0 containers: []
	W1204 21:21:03.284269   75137 logs.go:284] No container was found matching "kindnet"
	I1204 21:21:03.284275   75137 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1204 21:21:03.284329   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1204 21:21:03.320487   75137 cri.go:89] found id: "07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317"
	I1204 21:21:03.320510   75137 cri.go:89] found id: "05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4"
	I1204 21:21:03.320514   75137 cri.go:89] found id: ""
	I1204 21:21:03.320520   75137 logs.go:282] 2 containers: [07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317 05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4]
	I1204 21:21:03.320572   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.324553   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.328284   75137 logs.go:123] Gathering logs for kubelet ...
	I1204 21:21:03.328307   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:21:03.398873   75137 logs.go:123] Gathering logs for kube-apiserver [8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78] ...
	I1204 21:21:03.398914   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78"
	I1204 21:21:03.452146   75137 logs.go:123] Gathering logs for kube-proxy [a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5] ...
	I1204 21:21:03.452175   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5"
	I1204 21:21:03.489830   75137 logs.go:123] Gathering logs for storage-provisioner [05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4] ...
	I1204 21:21:03.489860   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4"
	I1204 21:21:03.525086   75137 logs.go:123] Gathering logs for container status ...
	I1204 21:21:03.525115   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:21:03.569090   75137 logs.go:123] Gathering logs for kube-controller-manager [982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9] ...
	I1204 21:21:03.569123   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9"
	I1204 21:21:03.634685   75137 logs.go:123] Gathering logs for storage-provisioner [07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317] ...
	I1204 21:21:03.634714   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317"
	I1204 21:21:03.670229   75137 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:21:03.670258   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:21:04.127440   75137 logs.go:123] Gathering logs for dmesg ...
	I1204 21:21:04.127483   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:21:04.143058   75137 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:21:04.143102   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 21:21:04.254811   75137 logs.go:123] Gathering logs for etcd [e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98] ...
	I1204 21:21:04.254847   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98"
	I1204 21:21:04.310269   75137 logs.go:123] Gathering logs for coredns [58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78] ...
	I1204 21:21:04.310303   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78"
	I1204 21:21:04.344331   75137 logs.go:123] Gathering logs for kube-scheduler [e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df] ...
	I1204 21:21:04.344365   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df"
	I1204 21:21:06.883632   75137 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I1204 21:21:06.887845   75137 api_server.go:279] https://192.168.39.82:8443/healthz returned 200:
	ok
	I1204 21:21:06.888685   75137 api_server.go:141] control plane version: v1.31.2
	I1204 21:21:06.888701   75137 api_server.go:131] duration metric: took 3.886315455s to wait for apiserver health ...
	I1204 21:21:06.888708   75137 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 21:21:06.888730   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:21:06.888774   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:21:06.930295   75137 cri.go:89] found id: "8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78"
	I1204 21:21:06.930316   75137 cri.go:89] found id: ""
	I1204 21:21:06.930324   75137 logs.go:282] 1 containers: [8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78]
	I1204 21:21:06.930372   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:06.934529   75137 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:21:06.934620   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:21:06.970613   75137 cri.go:89] found id: "e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98"
	I1204 21:21:06.970641   75137 cri.go:89] found id: ""
	I1204 21:21:06.970651   75137 logs.go:282] 1 containers: [e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98]
	I1204 21:21:06.970696   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:06.974756   75137 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:21:06.974824   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:21:07.010285   75137 cri.go:89] found id: "58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78"
	I1204 21:21:07.010310   75137 cri.go:89] found id: ""
	I1204 21:21:07.010319   75137 logs.go:282] 1 containers: [58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78]
	I1204 21:21:07.010362   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:02.764114   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:04.764230   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:06.764928   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:04.623324   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:06.624331   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:08.140159   75464 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.505600399s)
	I1204 21:21:08.140254   75464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:21:08.159450   75464 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:21:08.169756   75464 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:21:08.179705   75464 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:21:08.179729   75464 kubeadm.go:157] found existing configuration files:
	
	I1204 21:21:08.179783   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:21:08.188796   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:21:08.188871   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:21:08.197758   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:21:08.206347   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:21:08.206409   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:21:08.215431   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:21:08.224674   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:21:08.224737   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:21:08.234337   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:21:08.243774   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:21:08.243833   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:21:08.253498   75464 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 21:21:08.321237   75464 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1204 21:21:08.321370   75464 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 21:21:08.458714   75464 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 21:21:08.458866   75464 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 21:21:08.459026   75464 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1204 21:21:08.639536   75464 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 21:21:08.641635   75464 out.go:235]   - Generating certificates and keys ...
	I1204 21:21:08.641739   75464 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 21:21:08.641826   75464 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 21:21:08.641935   75464 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1204 21:21:08.642068   75464 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1204 21:21:08.642175   75464 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1204 21:21:08.642223   75464 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1204 21:21:08.642498   75464 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1204 21:21:08.642914   75464 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1204 21:21:08.643567   75464 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1204 21:21:08.644276   75464 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1204 21:21:08.644502   75464 kubeadm.go:310] [certs] Using the existing "sa" key
	I1204 21:21:08.644553   75464 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 21:21:08.800107   75464 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 21:21:08.920050   75464 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 21:21:09.376869   75464 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 21:21:09.463826   75464 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 21:21:09.479167   75464 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 21:21:09.479321   75464 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 21:21:09.479434   75464 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 21:21:09.606736   75464 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
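	(Aside: the stale-kubeconfig cleanup logged at 21:21:08 above greps each file under /etc/kubernetes for the expected control-plane endpoint and removes the file when the grep exits non-zero. A minimal shell sketch of that same sequence, not minikube's own code, assuming sudo access on the node and the 8443 endpoint used by this profile:

	    endpoint="https://control-plane.minikube.internal:8443"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f" 2>/dev/null; then
	        sudo rm -f "/etc/kubernetes/$f"   # drop configs that do not reference the expected endpoint
	      fi
	    done
	)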
	I1204 21:21:07.014564   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:21:07.014628   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:21:07.054654   75137 cri.go:89] found id: "e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df"
	I1204 21:21:07.054678   75137 cri.go:89] found id: ""
	I1204 21:21:07.054686   75137 logs.go:282] 1 containers: [e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df]
	I1204 21:21:07.054734   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:07.058625   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:21:07.058683   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:21:07.094238   75137 cri.go:89] found id: "a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5"
	I1204 21:21:07.094280   75137 cri.go:89] found id: ""
	I1204 21:21:07.094291   75137 logs.go:282] 1 containers: [a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5]
	I1204 21:21:07.094359   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:07.098427   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:21:07.098484   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:21:07.135055   75137 cri.go:89] found id: "982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9"
	I1204 21:21:07.135079   75137 cri.go:89] found id: ""
	I1204 21:21:07.135088   75137 logs.go:282] 1 containers: [982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9]
	I1204 21:21:07.135145   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:07.139488   75137 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:21:07.139564   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:21:07.175963   75137 cri.go:89] found id: ""
	I1204 21:21:07.175989   75137 logs.go:282] 0 containers: []
	W1204 21:21:07.176002   75137 logs.go:284] No container was found matching "kindnet"
	I1204 21:21:07.176009   75137 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1204 21:21:07.176069   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1204 21:21:07.212003   75137 cri.go:89] found id: "07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317"
	I1204 21:21:07.212034   75137 cri.go:89] found id: "05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4"
	I1204 21:21:07.212040   75137 cri.go:89] found id: ""
	I1204 21:21:07.212050   75137 logs.go:282] 2 containers: [07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317 05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4]
	I1204 21:21:07.212115   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:07.216184   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:07.219773   75137 logs.go:123] Gathering logs for dmesg ...
	I1204 21:21:07.219803   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:21:07.233282   75137 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:21:07.233307   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 21:21:07.341593   75137 logs.go:123] Gathering logs for etcd [e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98] ...
	I1204 21:21:07.341626   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98"
	I1204 21:21:07.393994   75137 logs.go:123] Gathering logs for kube-scheduler [e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df] ...
	I1204 21:21:07.394024   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df"
	I1204 21:21:07.437177   75137 logs.go:123] Gathering logs for storage-provisioner [07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317] ...
	I1204 21:21:07.437205   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317"
	I1204 21:21:07.469913   75137 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:21:07.469952   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:21:07.822608   75137 logs.go:123] Gathering logs for container status ...
	I1204 21:21:07.822652   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:21:07.861671   75137 logs.go:123] Gathering logs for kubelet ...
	I1204 21:21:07.861703   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:21:07.933833   75137 logs.go:123] Gathering logs for kube-apiserver [8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78] ...
	I1204 21:21:07.933876   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78"
	I1204 21:21:07.976184   75137 logs.go:123] Gathering logs for coredns [58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78] ...
	I1204 21:21:07.976215   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78"
	I1204 21:21:08.011181   75137 logs.go:123] Gathering logs for kube-proxy [a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5] ...
	I1204 21:21:08.011206   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5"
	I1204 21:21:08.053404   75137 logs.go:123] Gathering logs for kube-controller-manager [982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9] ...
	I1204 21:21:08.053430   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9"
	I1204 21:21:08.113301   75137 logs.go:123] Gathering logs for storage-provisioner [05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4] ...
	I1204 21:21:08.113402   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4"
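	(Aside: the log-gathering loop above pairs "crictl ps -a --quiet --name=<component>" with "crictl logs --tail 400 <id>". The same two commands can be chained by hand when debugging a node; a small sketch, assuming crictl is on the PATH and only one matching container exists:

	    id="$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)"
	    [ -n "$id" ] && sudo crictl logs --tail 400 "$id"
	)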
	I1204 21:21:10.665164   75137 system_pods.go:59] 8 kube-system pods found
	I1204 21:21:10.665195   75137 system_pods.go:61] "coredns-7c65d6cfc9-ct5xn" [be113b96-b21f-4fd5-8cd9-11b149a0a838] Running
	I1204 21:21:10.665200   75137 system_pods.go:61] "etcd-embed-certs-566991" [23603883-2c42-48ff-95f5-d58f04bab630] Running
	I1204 21:21:10.665204   75137 system_pods.go:61] "kube-apiserver-embed-certs-566991" [880279d0-9c57-44b1-b223-cea07fc8552e] Running
	I1204 21:21:10.665208   75137 system_pods.go:61] "kube-controller-manager-embed-certs-566991" [1512be05-cbf1-48ca-a0a5-db1e320040e0] Running
	I1204 21:21:10.665211   75137 system_pods.go:61] "kube-proxy-4fv72" [22b84591-6767-4414-9869-9d89206a03f2] Running
	I1204 21:21:10.665215   75137 system_pods.go:61] "kube-scheduler-embed-certs-566991" [1eca2a77-0f2a-4d94-992e-22acf8f54649] Running
	I1204 21:21:10.665220   75137 system_pods.go:61] "metrics-server-6867b74b74-9vlcd" [1acb08f3-e403-458d-b3e2-e32c07da6afb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:21:10.665225   75137 system_pods.go:61] "storage-provisioner" [f8acdb07-16e7-457f-81b8-85416b849890] Running
	I1204 21:21:10.665234   75137 system_pods.go:74] duration metric: took 3.776519738s to wait for pod list to return data ...
	I1204 21:21:10.665240   75137 default_sa.go:34] waiting for default service account to be created ...
	I1204 21:21:10.667483   75137 default_sa.go:45] found service account: "default"
	I1204 21:21:10.667501   75137 default_sa.go:55] duration metric: took 2.252763ms for default service account to be created ...
	I1204 21:21:10.667508   75137 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 21:21:10.671331   75137 system_pods.go:86] 8 kube-system pods found
	I1204 21:21:10.671351   75137 system_pods.go:89] "coredns-7c65d6cfc9-ct5xn" [be113b96-b21f-4fd5-8cd9-11b149a0a838] Running
	I1204 21:21:10.671356   75137 system_pods.go:89] "etcd-embed-certs-566991" [23603883-2c42-48ff-95f5-d58f04bab630] Running
	I1204 21:21:10.671360   75137 system_pods.go:89] "kube-apiserver-embed-certs-566991" [880279d0-9c57-44b1-b223-cea07fc8552e] Running
	I1204 21:21:10.671363   75137 system_pods.go:89] "kube-controller-manager-embed-certs-566991" [1512be05-cbf1-48ca-a0a5-db1e320040e0] Running
	I1204 21:21:10.671366   75137 system_pods.go:89] "kube-proxy-4fv72" [22b84591-6767-4414-9869-9d89206a03f2] Running
	I1204 21:21:10.671386   75137 system_pods.go:89] "kube-scheduler-embed-certs-566991" [1eca2a77-0f2a-4d94-992e-22acf8f54649] Running
	I1204 21:21:10.671396   75137 system_pods.go:89] "metrics-server-6867b74b74-9vlcd" [1acb08f3-e403-458d-b3e2-e32c07da6afb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:21:10.671402   75137 system_pods.go:89] "storage-provisioner" [f8acdb07-16e7-457f-81b8-85416b849890] Running
	I1204 21:21:10.671414   75137 system_pods.go:126] duration metric: took 3.900254ms to wait for k8s-apps to be running ...
	I1204 21:21:10.671426   75137 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 21:21:10.671467   75137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:21:10.687086   75137 system_svc.go:56] duration metric: took 15.655514ms WaitForService to wait for kubelet
	I1204 21:21:10.687105   75137 kubeadm.go:582] duration metric: took 4m28.018694904s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 21:21:10.687123   75137 node_conditions.go:102] verifying NodePressure condition ...
	I1204 21:21:10.689250   75137 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 21:21:10.689267   75137 node_conditions.go:123] node cpu capacity is 2
	I1204 21:21:10.689277   75137 node_conditions.go:105] duration metric: took 2.149506ms to run NodePressure ...
	I1204 21:21:10.689287   75137 start.go:241] waiting for startup goroutines ...
	I1204 21:21:10.689296   75137 start.go:246] waiting for cluster config update ...
	I1204 21:21:10.689306   75137 start.go:255] writing updated cluster config ...
	I1204 21:21:10.689547   75137 ssh_runner.go:195] Run: rm -f paused
	I1204 21:21:10.738387   75137 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1204 21:21:10.740254   75137 out.go:177] * Done! kubectl is now configured to use "embed-certs-566991" cluster and "default" namespace by default
	I1204 21:21:09.608599   75464 out.go:235]   - Booting up control plane ...
	I1204 21:21:09.608729   75464 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 21:21:09.613477   75464 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 21:21:09.614444   75464 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 21:21:09.623091   75464 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 21:21:09.626249   75464 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1204 21:21:08.765095   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:10.765470   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:09.125585   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:11.624603   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:13.264238   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:15.265563   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:13.624873   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:16.123483   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:17.764078   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:19.765682   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:18.626401   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:21.125606   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:22.264711   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:24.265632   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:26.764992   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:23.623351   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:25.623547   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:27.624579   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:28.765133   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:31.264203   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:30.123937   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:32.623876   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:33.264732   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:35.765165   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:35.123685   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:37.123863   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:38.264907   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:40.265233   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:39.124651   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:40.117461   75746 pod_ready.go:82] duration metric: took 4m0.000125257s for pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace to be "Ready" ...
	E1204 21:21:40.117486   75746 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace to be "Ready" (will not retry!)
	I1204 21:21:40.117508   75746 pod_ready.go:39] duration metric: took 4m13.544219225s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:21:40.117564   75746 kubeadm.go:597] duration metric: took 4m22.244889794s to restartPrimaryControlPlane
	W1204 21:21:40.117617   75746 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1204 21:21:40.117646   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1204 21:21:42.764614   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:44.765642   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:49.627118   75464 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1204 21:21:49.627744   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:21:49.627940   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:21:47.264873   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:49.765483   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:54.628283   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:21:54.628526   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:21:52.264073   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:54.264333   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:56.267410   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:58.764653   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:00.765653   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:04.628774   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:22:04.629010   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
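	(Aside: the repeated kubelet-check failures above report the exact probe kubeadm is running. When this loop stalls, the same probe and the kubelet unit logs can be inspected directly on the node, using the commands this run itself uses elsewhere:

	    curl -sSL http://localhost:10248/healthz     # the health probe quoted in the messages above
	    sudo journalctl -u kubelet -n 400            # recent kubelet logs, as gathered earlier in this log
	)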
	I1204 21:22:06.288530   75746 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.170858751s)
	I1204 21:22:06.288613   75746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:22:06.309458   75746 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:22:06.322805   75746 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:22:06.336482   75746 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:22:06.336508   75746 kubeadm.go:157] found existing configuration files:
	
	I1204 21:22:06.336558   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1204 21:22:06.348599   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:22:06.348656   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:22:06.362232   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1204 21:22:06.379259   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:22:06.379348   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:22:06.411281   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1204 21:22:06.422033   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:22:06.422108   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:22:06.432505   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1204 21:22:06.441734   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:22:06.441789   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:22:06.451237   75746 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 21:22:06.498732   75746 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1204 21:22:06.498852   75746 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 21:22:06.614368   75746 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 21:22:06.614469   75746 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 21:22:06.614599   75746 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1204 21:22:06.623454   75746 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 21:22:03.264992   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:05.765395   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:06.625133   75746 out.go:235]   - Generating certificates and keys ...
	I1204 21:22:06.625245   75746 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 21:22:06.625364   75746 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 21:22:06.625491   75746 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1204 21:22:06.625594   75746 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1204 21:22:06.625712   75746 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1204 21:22:06.625792   75746 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1204 21:22:06.625889   75746 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1204 21:22:06.625984   75746 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1204 21:22:06.626100   75746 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1204 21:22:06.626210   75746 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1204 21:22:06.626277   75746 kubeadm.go:310] [certs] Using the existing "sa" key
	I1204 21:22:06.626348   75746 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 21:22:06.726450   75746 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 21:22:06.873790   75746 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1204 21:22:07.175994   75746 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 21:22:07.250702   75746 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 21:22:07.320319   75746 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 21:22:07.320901   75746 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 21:22:07.323434   75746 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 21:22:07.325316   75746 out.go:235]   - Booting up control plane ...
	I1204 21:22:07.325446   75746 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 21:22:07.325543   75746 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 21:22:07.326549   75746 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 21:22:07.347127   75746 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 21:22:07.353453   75746 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 21:22:07.353587   75746 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 21:22:07.488768   75746 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1204 21:22:07.488952   75746 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1204 21:22:07.765784   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:10.265661   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:11.758507   75012 pod_ready.go:82] duration metric: took 4m0.000236813s for pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace to be "Ready" ...
	E1204 21:22:11.758550   75012 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace to be "Ready" (will not retry!)
	I1204 21:22:11.758567   75012 pod_ready.go:39] duration metric: took 4m14.511728433s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:22:11.758593   75012 kubeadm.go:597] duration metric: took 4m21.138454983s to restartPrimaryControlPlane
	W1204 21:22:11.758643   75012 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1204 21:22:11.758668   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1204 21:22:07.993325   75746 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 504.943417ms
	I1204 21:22:07.993405   75746 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1204 21:22:12.997741   75746 kubeadm.go:310] [api-check] The API server is healthy after 5.001906934s
	I1204 21:22:13.012187   75746 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1204 21:22:13.029586   75746 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1204 21:22:13.062375   75746 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1204 21:22:13.062633   75746 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-439360 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1204 21:22:13.077941   75746 kubeadm.go:310] [bootstrap-token] Using token: 5mut2g.pz4sir8q7093cs2b
	I1204 21:22:13.079394   75746 out.go:235]   - Configuring RBAC rules ...
	I1204 21:22:13.079556   75746 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1204 21:22:13.088458   75746 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1204 21:22:13.095952   75746 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1204 21:22:13.103530   75746 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1204 21:22:13.106875   75746 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1204 21:22:13.110658   75746 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1204 21:22:13.404565   75746 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1204 21:22:13.831997   75746 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1204 21:22:14.404650   75746 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1204 21:22:14.404678   75746 kubeadm.go:310] 
	I1204 21:22:14.404764   75746 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1204 21:22:14.404789   75746 kubeadm.go:310] 
	I1204 21:22:14.404894   75746 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1204 21:22:14.404903   75746 kubeadm.go:310] 
	I1204 21:22:14.404930   75746 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1204 21:22:14.404981   75746 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1204 21:22:14.405060   75746 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1204 21:22:14.405088   75746 kubeadm.go:310] 
	I1204 21:22:14.405203   75746 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1204 21:22:14.405216   75746 kubeadm.go:310] 
	I1204 21:22:14.405286   75746 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1204 21:22:14.405296   75746 kubeadm.go:310] 
	I1204 21:22:14.405370   75746 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1204 21:22:14.405487   75746 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1204 21:22:14.405604   75746 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1204 21:22:14.405621   75746 kubeadm.go:310] 
	I1204 21:22:14.405701   75746 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1204 21:22:14.405772   75746 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1204 21:22:14.405781   75746 kubeadm.go:310] 
	I1204 21:22:14.405853   75746 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 5mut2g.pz4sir8q7093cs2b \
	I1204 21:22:14.406000   75746 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 \
	I1204 21:22:14.406034   75746 kubeadm.go:310] 	--control-plane 
	I1204 21:22:14.406043   75746 kubeadm.go:310] 
	I1204 21:22:14.406112   75746 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1204 21:22:14.406119   75746 kubeadm.go:310] 
	I1204 21:22:14.406241   75746 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 5mut2g.pz4sir8q7093cs2b \
	I1204 21:22:14.406397   75746 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 
	I1204 21:22:14.407013   75746 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
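	(Aside: the join commands printed above embed a bootstrap token, which expires after 24 hours by default. If it has lapsed, a fresh join command can be regenerated on the control-plane node; this command is not part of this run and is shown only for reference, assuming kubeadm is available on the node:

	    sudo kubeadm token create --print-join-command
	)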
	I1204 21:22:14.407049   75746 cni.go:84] Creating CNI manager for ""
	I1204 21:22:14.407060   75746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:22:14.408949   75746 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 21:22:14.410361   75746 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 21:22:14.420749   75746 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
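	(Aside: the 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is not shown in this log. For orientation only, a bridge + portmap conflist of the same general shape looks like the following; every field value here is an assumption, not the file minikube generated, and it is written to an .example path so nothing is overwritten:

	    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist.example >/dev/null
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isGateway": true,
	          "ipMasq": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF
	)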
	I1204 21:22:14.439214   75746 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 21:22:14.439295   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:14.439322   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-439360 minikube.k8s.io/updated_at=2024_12_04T21_22_14_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59 minikube.k8s.io/name=default-k8s-diff-port-439360 minikube.k8s.io/primary=true
	I1204 21:22:14.459582   75746 ops.go:34] apiserver oom_adj: -16
	I1204 21:22:14.637938   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:15.138980   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:15.638942   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:16.138381   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:16.638528   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:17.138320   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:17.637995   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:18.138540   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:18.638754   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:19.138113   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:19.246385   75746 kubeadm.go:1113] duration metric: took 4.807160948s to wait for elevateKubeSystemPrivileges
	I1204 21:22:19.246430   75746 kubeadm.go:394] duration metric: took 5m1.419721853s to StartCluster
	I1204 21:22:19.246455   75746 settings.go:142] acquiring lock: {Name:mk51df5708ef0b8fe125ead566b8d3e857234e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:22:19.246556   75746 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:22:19.249082   75746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/kubeconfig: {Name:mk338cb7deb77a607d0c199d94a556bdfd19bef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:22:19.249393   75746 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.171 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 21:22:19.249684   75746 config.go:182] Loaded profile config "default-k8s-diff-port-439360": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:22:19.249745   75746 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 21:22:19.249861   75746 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-439360"
	I1204 21:22:19.249884   75746 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-439360"
	W1204 21:22:19.249896   75746 addons.go:243] addon storage-provisioner should already be in state true
	I1204 21:22:19.249928   75746 host.go:66] Checking if "default-k8s-diff-port-439360" exists ...
	I1204 21:22:19.250440   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:19.250479   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:19.250557   75746 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-439360"
	I1204 21:22:19.250580   75746 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-439360"
	I1204 21:22:19.250737   75746 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-439360"
	I1204 21:22:19.250757   75746 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-439360"
	W1204 21:22:19.250765   75746 addons.go:243] addon metrics-server should already be in state true
	I1204 21:22:19.250798   75746 host.go:66] Checking if "default-k8s-diff-port-439360" exists ...
	I1204 21:22:19.251048   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:19.251091   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:19.251249   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:19.251294   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:19.251622   75746 out.go:177] * Verifying Kubernetes components...
	I1204 21:22:19.252993   75746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:22:19.269179   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44783
	I1204 21:22:19.269441   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35391
	I1204 21:22:19.269740   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:19.269833   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:19.270300   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:22:19.270324   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:19.270400   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:22:19.270418   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:19.270418   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34247
	I1204 21:22:19.270725   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:19.270832   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:19.270866   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:19.270904   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetState
	I1204 21:22:19.271326   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:22:19.271337   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:19.271415   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:19.271463   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:19.271686   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:19.272330   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:19.272388   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:19.274803   75746 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-439360"
	W1204 21:22:19.274824   75746 addons.go:243] addon default-storageclass should already be in state true
	I1204 21:22:19.274853   75746 host.go:66] Checking if "default-k8s-diff-port-439360" exists ...
	I1204 21:22:19.275234   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:19.275267   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:19.291309   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40009
	I1204 21:22:19.291961   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:19.291985   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41279
	I1204 21:22:19.292400   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:22:19.292420   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:19.292783   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:19.292833   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:19.293039   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetState
	I1204 21:22:19.293113   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36479
	I1204 21:22:19.293349   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:22:19.293362   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:19.293726   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:19.294210   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:19.294239   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:19.294431   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:19.294890   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:22:19.294908   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:19.295400   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:19.295584   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetState
	I1204 21:22:19.295720   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:22:19.297304   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:22:19.297592   75746 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:22:19.298747   75746 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1204 21:22:19.299871   75746 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:22:19.299895   75746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 21:22:19.299916   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:22:19.301582   75746 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1204 21:22:19.301598   75746 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1204 21:22:19.301612   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:22:19.303499   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:22:19.305018   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:22:19.305367   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:22:19.305393   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:22:19.305566   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:22:19.305775   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:22:19.305848   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:22:19.305869   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:22:19.305912   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:22:19.306121   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:22:19.306313   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:22:19.306389   75746 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa Username:docker}
	I1204 21:22:19.306691   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:22:19.306872   75746 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa Username:docker}
	I1204 21:22:19.314163   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42045
	I1204 21:22:19.314569   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:19.315106   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:22:19.315134   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:19.315690   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:19.315993   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetState
	I1204 21:22:19.317928   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:22:19.318171   75746 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 21:22:19.318182   75746 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 21:22:19.318195   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:22:19.321203   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:22:19.321582   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:22:19.321599   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:22:19.321855   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:22:19.322059   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:22:19.322226   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:22:19.322367   75746 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa Username:docker}
	I1204 21:22:19.522886   75746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:22:19.577656   75746 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-439360" to be "Ready" ...
	I1204 21:22:19.586712   75746 node_ready.go:49] node "default-k8s-diff-port-439360" has status "Ready":"True"
	I1204 21:22:19.586737   75746 node_ready.go:38] duration metric: took 9.034653ms for node "default-k8s-diff-port-439360" to be "Ready" ...
	I1204 21:22:19.586745   75746 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:22:19.595683   75746 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4jmcl" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:19.650177   75746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:22:19.708333   75746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 21:22:19.721106   75746 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1204 21:22:19.721151   75746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1204 21:22:19.793058   75746 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1204 21:22:19.793105   75746 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1204 21:22:19.926884   75746 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 21:22:19.926911   75746 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1204 21:22:20.028322   75746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
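	(Aside: the manifests applied above install the metrics-server addon whose pods this run keeps polling as "Pending / Ready:ContainersNotReady". To check the addon by hand, one might run the following; the deployment name is taken from the pod names in this log, while the label selector is an assumption:

	    kubectl -n kube-system get deploy metrics-server
	    kubectl -n kube-system get pods -l k8s-app=metrics-server -o wide
	    kubectl top nodes    # only returns data once metrics-server is actually serving metrics
	)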
	I1204 21:22:20.668142   75746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.017919983s)
	I1204 21:22:20.668197   75746 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:20.668200   75746 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:20.668223   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Close
	I1204 21:22:20.668211   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Close
	I1204 21:22:20.668613   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Closing plugin on server side
	I1204 21:22:20.668627   75746 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:20.668640   75746 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:20.668660   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Closing plugin on server side
	I1204 21:22:20.668687   75746 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:20.668701   75746 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:20.668710   75746 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:20.668729   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Close
	I1204 21:22:20.668663   75746 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:20.668789   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Close
	I1204 21:22:20.668936   75746 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:20.668981   75746 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:20.670242   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Closing plugin on server side
	I1204 21:22:20.670255   75746 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:20.670276   75746 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:20.713659   75746 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:20.713680   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Close
	I1204 21:22:20.714056   75746 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:20.714107   75746 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:20.714076   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Closing plugin on server side
	I1204 21:22:21.064703   75746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.03633998s)
	I1204 21:22:21.064768   75746 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:21.064783   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Close
	I1204 21:22:21.065188   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Closing plugin on server side
	I1204 21:22:21.065197   75746 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:21.065212   75746 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:21.065220   75746 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:21.065233   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Close
	I1204 21:22:21.065472   75746 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:21.065490   75746 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:21.065502   75746 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-439360"
	I1204 21:22:21.067198   75746 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1204 21:22:21.068410   75746 addons.go:510] duration metric: took 1.818663539s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
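The "Verifying addon metrics-server=true" step logged just above watches the metrics-server Deployment in kube-system; the pod listings later in this log still show it Pending with ContainersNotReady. A manual equivalent of that verification, as a sketch only (context name taken from the log, label selector assumed to be the usual k8s-app=metrics-server):

    kubectl --context default-k8s-diff-port-439360 -n kube-system get deployment metrics-server
    kubectl --context default-k8s-diff-port-439360 -n kube-system get pods -l k8s-app=metrics-server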
	I1204 21:22:21.602398   75746 pod_ready.go:93] pod "coredns-7c65d6cfc9-4jmcl" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:21.602428   75746 pod_ready.go:82] duration metric: took 2.006718822s for pod "coredns-7c65d6cfc9-4jmcl" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:21.602442   75746 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tzhgh" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:24.629623   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:22:24.629860   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:22:23.610993   75746 pod_ready.go:103] pod "coredns-7c65d6cfc9-tzhgh" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:24.117785   75746 pod_ready.go:93] pod "coredns-7c65d6cfc9-tzhgh" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:24.117813   75746 pod_ready.go:82] duration metric: took 2.51536279s for pod "coredns-7c65d6cfc9-tzhgh" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:24.117824   75746 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:24.124800   75746 pod_ready.go:93] pod "etcd-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:24.124823   75746 pod_ready.go:82] duration metric: took 6.990353ms for pod "etcd-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:24.124832   75746 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:24.131040   75746 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:24.131061   75746 pod_ready.go:82] duration metric: took 6.222286ms for pod "kube-apiserver-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:24.131070   75746 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:26.137404   75746 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:26.637414   75746 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:26.637440   75746 pod_ready.go:82] duration metric: took 2.506362827s for pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:26.637452   75746 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hclwt" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:26.641759   75746 pod_ready.go:93] pod "kube-proxy-hclwt" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:26.641781   75746 pod_ready.go:82] duration metric: took 4.323262ms for pod "kube-proxy-hclwt" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:26.641793   75746 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:28.148731   75746 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:28.148753   75746 pod_ready.go:82] duration metric: took 1.50695195s for pod "kube-scheduler-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:28.148761   75746 pod_ready.go:39] duration metric: took 8.562005978s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:22:28.148776   75746 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:22:28.148825   75746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:22:28.165983   75746 api_server.go:72] duration metric: took 8.916515972s to wait for apiserver process to appear ...
	I1204 21:22:28.166013   75746 api_server.go:88] waiting for apiserver healthz status ...
	I1204 21:22:28.166034   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:22:28.170244   75746 api_server.go:279] https://192.168.50.171:8444/healthz returned 200:
	ok
	I1204 21:22:28.171215   75746 api_server.go:141] control plane version: v1.31.2
	I1204 21:22:28.171245   75746 api_server.go:131] duration metric: took 5.223023ms to wait for apiserver health ...
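The healthz check above polls the apiserver's health endpoint over HTTPS on this profile's port (8444). A minimal manual probe of the same endpoint, assuming the host and port from the log and using -k only because the cluster CA is not in the host trust store:

    curl -k https://192.168.50.171:8444/healthz
    # a healthy apiserver answers with the literal body: ok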
	I1204 21:22:28.171257   75746 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 21:22:28.177524   75746 system_pods.go:59] 9 kube-system pods found
	I1204 21:22:28.177548   75746 system_pods.go:61] "coredns-7c65d6cfc9-4jmcl" [e8d193d2-0374-43a5-addd-96cdee963cc9] Running
	I1204 21:22:28.177553   75746 system_pods.go:61] "coredns-7c65d6cfc9-tzhgh" [aafae17b-5a47-4a70-bc80-94cbbca8fe38] Running
	I1204 21:22:28.177557   75746 system_pods.go:61] "etcd-default-k8s-diff-port-439360" [e4293118-8718-4722-b6b6-722896a605e9] Running
	I1204 21:22:28.177560   75746 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-439360" [71be94bb-bd89-4f40-85eb-0a672f29d959] Running
	I1204 21:22:28.177563   75746 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-439360" [85946631-ff2a-4203-800d-00a23a3c3408] Running
	I1204 21:22:28.177567   75746 system_pods.go:61] "kube-proxy-hclwt" [eef6c093-2186-437b-9a13-c8bafbcb4f78] Running
	I1204 21:22:28.177570   75746 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-439360" [0ed74c15-2c48-4a62-8bbf-0f2a272bb119] Running
	I1204 21:22:28.177577   75746 system_pods.go:61] "metrics-server-6867b74b74-v88hj" [9b6c696c-e110-4d53-98c9-41069407b45b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:22:28.177582   75746 system_pods.go:61] "storage-provisioner" [aac88490-a422-4889-bff4-b180638846cf] Running
	I1204 21:22:28.177592   75746 system_pods.go:74] duration metric: took 6.322477ms to wait for pod list to return data ...
	I1204 21:22:28.177605   75746 default_sa.go:34] waiting for default service account to be created ...
	I1204 21:22:28.180243   75746 default_sa.go:45] found service account: "default"
	I1204 21:22:28.180262   75746 default_sa.go:55] duration metric: took 2.648929ms for default service account to be created ...
	I1204 21:22:28.180270   75746 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 21:22:28.309199   75746 system_pods.go:86] 9 kube-system pods found
	I1204 21:22:28.309229   75746 system_pods.go:89] "coredns-7c65d6cfc9-4jmcl" [e8d193d2-0374-43a5-addd-96cdee963cc9] Running
	I1204 21:22:28.309237   75746 system_pods.go:89] "coredns-7c65d6cfc9-tzhgh" [aafae17b-5a47-4a70-bc80-94cbbca8fe38] Running
	I1204 21:22:28.309244   75746 system_pods.go:89] "etcd-default-k8s-diff-port-439360" [e4293118-8718-4722-b6b6-722896a605e9] Running
	I1204 21:22:28.309251   75746 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-439360" [71be94bb-bd89-4f40-85eb-0a672f29d959] Running
	I1204 21:22:28.309257   75746 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-439360" [85946631-ff2a-4203-800d-00a23a3c3408] Running
	I1204 21:22:28.309263   75746 system_pods.go:89] "kube-proxy-hclwt" [eef6c093-2186-437b-9a13-c8bafbcb4f78] Running
	I1204 21:22:28.309269   75746 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-439360" [0ed74c15-2c48-4a62-8bbf-0f2a272bb119] Running
	I1204 21:22:28.309283   75746 system_pods.go:89] "metrics-server-6867b74b74-v88hj" [9b6c696c-e110-4d53-98c9-41069407b45b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:22:28.309295   75746 system_pods.go:89] "storage-provisioner" [aac88490-a422-4889-bff4-b180638846cf] Running
	I1204 21:22:28.309307   75746 system_pods.go:126] duration metric: took 129.030872ms to wait for k8s-apps to be running ...
	I1204 21:22:28.309320   75746 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 21:22:28.309379   75746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:22:28.324307   75746 system_svc.go:56] duration metric: took 14.979432ms WaitForService to wait for kubelet
	I1204 21:22:28.324336   75746 kubeadm.go:582] duration metric: took 9.074873675s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 21:22:28.324353   75746 node_conditions.go:102] verifying NodePressure condition ...
	I1204 21:22:28.507218   75746 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 21:22:28.507245   75746 node_conditions.go:123] node cpu capacity is 2
	I1204 21:22:28.507256   75746 node_conditions.go:105] duration metric: took 182.898538ms to run NodePressure ...
	I1204 21:22:28.507268   75746 start.go:241] waiting for startup goroutines ...
	I1204 21:22:28.507277   75746 start.go:246] waiting for cluster config update ...
	I1204 21:22:28.507291   75746 start.go:255] writing updated cluster config ...
	I1204 21:22:28.507595   75746 ssh_runner.go:195] Run: rm -f paused
	I1204 21:22:28.556033   75746 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1204 21:22:28.557819   75746 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-439360" cluster and "default" namespace by default
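The node_ready and pod_ready waits above mirror what a manual readiness check against the freshly written kubeconfig context would report once "Done!" is printed. An illustrative spot check (context name taken from the log; output omitted):

    kubectl --context default-k8s-diff-port-439360 get nodes
    kubectl --context default-k8s-diff-port-439360 get pods -n kube-system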
	I1204 21:22:37.891653   75012 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.132950428s)
	I1204 21:22:37.891741   75012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:22:37.906656   75012 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:22:37.915649   75012 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:22:37.925588   75012 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:22:37.925609   75012 kubeadm.go:157] found existing configuration files:
	
	I1204 21:22:37.925655   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:22:37.934524   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:22:37.934575   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:22:37.943390   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:22:37.951745   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:22:37.951797   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:22:37.960501   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:22:37.969208   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:22:37.969254   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:22:37.978350   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:22:37.986861   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:22:37.986930   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
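The four grep-then-rm pairs above are one stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443 and is otherwise removed before kubeadm init rewrites it. A compact sketch of the same sweep (the loop is an illustration, not minikube's code; in this run every grep fails because the files do not exist, so each rm is a no-op):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' /etc/kubernetes/$f \
        || sudo rm -f /etc/kubernetes/$f
    done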
	I1204 21:22:37.995584   75012 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 21:22:38.047149   75012 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1204 21:22:38.047224   75012 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 21:22:38.155964   75012 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 21:22:38.156086   75012 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 21:22:38.156215   75012 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1204 21:22:38.164743   75012 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 21:22:38.166662   75012 out.go:235]   - Generating certificates and keys ...
	I1204 21:22:38.166755   75012 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 21:22:38.166837   75012 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 21:22:38.166935   75012 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1204 21:22:38.167045   75012 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1204 21:22:38.167154   75012 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1204 21:22:38.167230   75012 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1204 21:22:38.167325   75012 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1204 21:22:38.167446   75012 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1204 21:22:38.169398   75012 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1204 21:22:38.169495   75012 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1204 21:22:38.169530   75012 kubeadm.go:310] [certs] Using the existing "sa" key
	I1204 21:22:38.169602   75012 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 21:22:38.350215   75012 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 21:22:38.469586   75012 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1204 21:22:38.636991   75012 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 21:22:38.883785   75012 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 21:22:39.014632   75012 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 21:22:39.015041   75012 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 21:22:39.017806   75012 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 21:22:39.019631   75012 out.go:235]   - Booting up control plane ...
	I1204 21:22:39.019760   75012 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 21:22:39.019831   75012 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 21:22:39.019895   75012 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 21:22:39.037352   75012 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 21:22:39.044419   75012 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 21:22:39.044489   75012 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 21:22:39.166636   75012 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1204 21:22:39.166782   75012 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1204 21:22:39.667748   75012 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.068181ms
	I1204 21:22:39.667876   75012 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1204 21:22:44.669497   75012 kubeadm.go:310] [api-check] The API server is healthy after 5.001931003s
	I1204 21:22:44.682282   75012 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1204 21:22:44.700056   75012 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1204 21:22:44.745563   75012 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1204 21:22:44.745769   75012 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-534766 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1204 21:22:44.761584   75012 kubeadm.go:310] [bootstrap-token] Using token: 5m2kn8.vv0jgg4evfqo8hls
	I1204 21:22:44.762802   75012 out.go:235]   - Configuring RBAC rules ...
	I1204 21:22:44.762937   75012 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1204 21:22:44.770305   75012 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1204 21:22:44.787448   75012 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1204 21:22:44.799071   75012 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1204 21:22:44.809995   75012 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1204 21:22:44.818871   75012 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1204 21:22:45.078465   75012 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1204 21:22:45.505737   75012 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1204 21:22:46.080197   75012 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1204 21:22:46.082632   75012 kubeadm.go:310] 
	I1204 21:22:46.082728   75012 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1204 21:22:46.082738   75012 kubeadm.go:310] 
	I1204 21:22:46.082852   75012 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1204 21:22:46.082877   75012 kubeadm.go:310] 
	I1204 21:22:46.082913   75012 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1204 21:22:46.083002   75012 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1204 21:22:46.083084   75012 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1204 21:22:46.083094   75012 kubeadm.go:310] 
	I1204 21:22:46.083188   75012 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1204 21:22:46.083198   75012 kubeadm.go:310] 
	I1204 21:22:46.083270   75012 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1204 21:22:46.083280   75012 kubeadm.go:310] 
	I1204 21:22:46.083365   75012 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1204 21:22:46.083505   75012 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1204 21:22:46.083603   75012 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1204 21:22:46.083612   75012 kubeadm.go:310] 
	I1204 21:22:46.083722   75012 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1204 21:22:46.083831   75012 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1204 21:22:46.083844   75012 kubeadm.go:310] 
	I1204 21:22:46.083955   75012 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5m2kn8.vv0jgg4evfqo8hls \
	I1204 21:22:46.084090   75012 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 \
	I1204 21:22:46.084132   75012 kubeadm.go:310] 	--control-plane 
	I1204 21:22:46.084143   75012 kubeadm.go:310] 
	I1204 21:22:46.084271   75012 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1204 21:22:46.084285   75012 kubeadm.go:310] 
	I1204 21:22:46.084381   75012 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5m2kn8.vv0jgg4evfqo8hls \
	I1204 21:22:46.084540   75012 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 
	I1204 21:22:46.085547   75012 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 21:22:46.085585   75012 cni.go:84] Creating CNI manager for ""
	I1204 21:22:46.085601   75012 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:22:46.087147   75012 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 21:22:46.088445   75012 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 21:22:46.099655   75012 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
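The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is minikube's bridge CNI configuration; the log does not show its contents. Writing an equivalent file by hand would look roughly like this, with every field value an illustrative assumption rather than the actual bytes minikube ships:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF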
	I1204 21:22:46.118054   75012 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 21:22:46.118167   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:46.118199   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-534766 minikube.k8s.io/updated_at=2024_12_04T21_22_46_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59 minikube.k8s.io/name=no-preload-534766 minikube.k8s.io/primary=true
	I1204 21:22:46.314262   75012 ops.go:34] apiserver oom_adj: -16
	I1204 21:22:46.314459   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:46.814509   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:47.315367   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:47.814575   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:48.314571   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:48.815342   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:49.315465   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:49.814618   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:49.924235   75012 kubeadm.go:1113] duration metric: took 3.806131818s to wait for elevateKubeSystemPrivileges
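The repeated "kubectl get sa default" runs above are a poll: minikube retries until the default ServiceAccount exists, which is the signal that elevateKubeSystemPrivileges can finish (about 3.8s in this run). A hand-rolled equivalent of that wait, with the polling interval chosen here for illustration only:

    until sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done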
	I1204 21:22:49.924281   75012 kubeadm.go:394] duration metric: took 4m59.352297592s to StartCluster
	I1204 21:22:49.924304   75012 settings.go:142] acquiring lock: {Name:mk51df5708ef0b8fe125ead566b8d3e857234e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:22:49.924410   75012 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:22:49.926022   75012 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/kubeconfig: {Name:mk338cb7deb77a607d0c199d94a556bdfd19bef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:22:49.926265   75012 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.174 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 21:22:49.926337   75012 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 21:22:49.926474   75012 addons.go:69] Setting storage-provisioner=true in profile "no-preload-534766"
	I1204 21:22:49.926483   75012 config.go:182] Loaded profile config "no-preload-534766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:22:49.926496   75012 addons.go:234] Setting addon storage-provisioner=true in "no-preload-534766"
	W1204 21:22:49.926508   75012 addons.go:243] addon storage-provisioner should already be in state true
	I1204 21:22:49.926505   75012 addons.go:69] Setting default-storageclass=true in profile "no-preload-534766"
	I1204 21:22:49.926531   75012 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-534766"
	I1204 21:22:49.926546   75012 host.go:66] Checking if "no-preload-534766" exists ...
	I1204 21:22:49.926541   75012 addons.go:69] Setting metrics-server=true in profile "no-preload-534766"
	I1204 21:22:49.926576   75012 addons.go:234] Setting addon metrics-server=true in "no-preload-534766"
	W1204 21:22:49.926590   75012 addons.go:243] addon metrics-server should already be in state true
	I1204 21:22:49.926625   75012 host.go:66] Checking if "no-preload-534766" exists ...
	I1204 21:22:49.926930   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:49.926954   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:49.926970   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:49.926955   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:49.926987   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:49.927051   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:49.927780   75012 out.go:177] * Verifying Kubernetes components...
	I1204 21:22:49.929162   75012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:22:49.942741   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46577
	I1204 21:22:49.943289   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:49.943868   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:22:49.943895   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:49.944251   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:49.944864   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:49.944913   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:49.946622   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34645
	I1204 21:22:49.946621   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40019
	I1204 21:22:49.947114   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:49.947241   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:49.947744   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:22:49.947765   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:49.947882   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:22:49.947906   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:49.948103   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:49.948432   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:49.948645   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetState
	I1204 21:22:49.948791   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:49.948837   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:49.952327   75012 addons.go:234] Setting addon default-storageclass=true in "no-preload-534766"
	W1204 21:22:49.952346   75012 addons.go:243] addon default-storageclass should already be in state true
	I1204 21:22:49.952369   75012 host.go:66] Checking if "no-preload-534766" exists ...
	I1204 21:22:49.952601   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:49.952630   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:49.961451   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46229
	I1204 21:22:49.961850   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:49.962443   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:22:49.962464   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:49.962850   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:49.963027   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetState
	I1204 21:22:49.964897   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:22:49.968079   75012 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1204 21:22:49.968412   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34167
	I1204 21:22:49.968752   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34915
	I1204 21:22:49.968941   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:49.969158   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:49.969388   75012 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1204 21:22:49.969407   75012 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1204 21:22:49.969427   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:22:49.969542   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:22:49.969565   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:49.969628   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:22:49.969642   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:49.969957   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:49.970113   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetState
	I1204 21:22:49.970170   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:49.970694   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:49.970730   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:49.972032   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:22:49.973317   75012 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:22:49.973481   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:22:49.973907   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:22:49.973928   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:22:49.974221   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:22:49.974387   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:22:49.974545   75012 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:22:49.974560   75012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 21:22:49.974577   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:22:49.974673   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:22:49.974849   75012 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:22:49.977139   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:22:49.977453   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:22:49.977472   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:22:49.977620   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:22:49.977765   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:22:49.977906   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:22:49.978085   75012 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:22:50.003630   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33713
	I1204 21:22:50.004065   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:50.004600   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:22:50.004624   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:50.004954   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:50.005133   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetState
	I1204 21:22:50.006743   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:22:50.006952   75012 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 21:22:50.006969   75012 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 21:22:50.006986   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:22:50.009741   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:22:50.010114   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:22:50.010169   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:22:50.010347   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:22:50.010522   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:22:50.010699   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:22:50.010868   75012 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:22:50.114285   75012 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:22:50.136173   75012 node_ready.go:35] waiting up to 6m0s for node "no-preload-534766" to be "Ready" ...
	I1204 21:22:50.146304   75012 node_ready.go:49] node "no-preload-534766" has status "Ready":"True"
	I1204 21:22:50.146333   75012 node_ready.go:38] duration metric: took 10.115051ms for node "no-preload-534766" to be "Ready" ...
	I1204 21:22:50.146344   75012 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:22:50.156660   75012 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:50.205793   75012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:22:50.222880   75012 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1204 21:22:50.222904   75012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1204 21:22:50.259999   75012 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1204 21:22:50.260022   75012 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1204 21:22:50.271653   75012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 21:22:50.295271   75012 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 21:22:50.295301   75012 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1204 21:22:50.371390   75012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 21:22:50.923825   75012 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:50.923850   75012 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:22:50.923889   75012 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:50.923916   75012 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:22:50.924309   75012 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:50.924319   75012 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:50.924327   75012 main.go:141] libmachine: (no-preload-534766) DBG | Closing plugin on server side
	I1204 21:22:50.924328   75012 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:50.924335   75012 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:50.924347   75012 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:50.924354   75012 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:22:50.924357   75012 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:50.924367   75012 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:22:50.924574   75012 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:50.924590   75012 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:50.926209   75012 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:50.926224   75012 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:50.926254   75012 main.go:141] libmachine: (no-preload-534766) DBG | Closing plugin on server side
	I1204 21:22:50.943266   75012 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:50.943283   75012 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:22:50.943613   75012 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:50.943626   75012 main.go:141] libmachine: (no-preload-534766) DBG | Closing plugin on server side
	I1204 21:22:50.943633   75012 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:51.434449   75012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.063018778s)
	I1204 21:22:51.434501   75012 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:51.434516   75012 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:22:51.434935   75012 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:51.434961   75012 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:51.434973   75012 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:51.434982   75012 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:22:51.434989   75012 main.go:141] libmachine: (no-preload-534766) DBG | Closing plugin on server side
	I1204 21:22:51.435279   75012 main.go:141] libmachine: (no-preload-534766) DBG | Closing plugin on server side
	I1204 21:22:51.435314   75012 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:51.435327   75012 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:51.435338   75012 addons.go:475] Verifying addon metrics-server=true in "no-preload-534766"
	I1204 21:22:51.437110   75012 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1204 21:22:51.438430   75012 addons.go:510] duration metric: took 1.51209932s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1204 21:22:52.163208   75012 pod_ready.go:103] pod "etcd-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:54.166268   75012 pod_ready.go:103] pod "etcd-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:55.663847   75012 pod_ready.go:93] pod "etcd-no-preload-534766" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:55.663873   75012 pod_ready.go:82] duration metric: took 5.507184169s for pod "etcd-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:55.663883   75012 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:57.669991   75012 pod_ready.go:103] pod "kube-apiserver-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:58.669891   75012 pod_ready.go:93] pod "kube-apiserver-no-preload-534766" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:58.669913   75012 pod_ready.go:82] duration metric: took 3.006024495s for pod "kube-apiserver-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:58.669923   75012 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:58.674408   75012 pod_ready.go:93] pod "kube-controller-manager-no-preload-534766" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:58.674431   75012 pod_ready.go:82] duration metric: took 4.502433ms for pod "kube-controller-manager-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:58.674441   75012 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:58.678736   75012 pod_ready.go:93] pod "kube-scheduler-no-preload-534766" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:58.678761   75012 pod_ready.go:82] duration metric: took 4.313122ms for pod "kube-scheduler-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:58.678771   75012 pod_ready.go:39] duration metric: took 8.532413995s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:22:58.678791   75012 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:22:58.678847   75012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:22:58.695623   75012 api_server.go:72] duration metric: took 8.769328765s to wait for apiserver process to appear ...
	I1204 21:22:58.695654   75012 api_server.go:88] waiting for apiserver healthz status ...
	I1204 21:22:58.695675   75012 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I1204 21:22:58.699892   75012 api_server.go:279] https://192.168.61.174:8443/healthz returned 200:
	ok
	I1204 21:22:58.700759   75012 api_server.go:141] control plane version: v1.31.2
	I1204 21:22:58.700776   75012 api_server.go:131] duration metric: took 5.115741ms to wait for apiserver health ...
	I1204 21:22:58.700783   75012 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 21:22:58.705822   75012 system_pods.go:59] 9 kube-system pods found
	I1204 21:22:58.705845   75012 system_pods.go:61] "coredns-7c65d6cfc9-9llkt" [adc8b2dd-be84-4314-ae3c-cfe94cc78489] Running
	I1204 21:22:58.705850   75012 system_pods.go:61] "coredns-7c65d6cfc9-zq88f" [b4b818bf-71d4-4522-8d3f-15c878eb7e37] Running
	I1204 21:22:58.705854   75012 system_pods.go:61] "etcd-no-preload-534766" [dfebd8ce-bf78-4219-a860-7e0275651a27] Running
	I1204 21:22:58.705858   75012 system_pods.go:61] "kube-apiserver-no-preload-534766" [6d8632fe-4a7d-48f0-9de5-bbc8efa027cd] Running
	I1204 21:22:58.705862   75012 system_pods.go:61] "kube-controller-manager-no-preload-534766" [1fcb311c-17ee-40ab-8126-3f9aeb565c23] Running
	I1204 21:22:58.705865   75012 system_pods.go:61] "kube-proxy-z2n69" [ea030ab5-1808-4037-b153-e751d66f3882] Running
	I1204 21:22:58.705870   75012 system_pods.go:61] "kube-scheduler-no-preload-534766" [ee51023a-795d-49f9-ae03-535038decf43] Running
	I1204 21:22:58.705876   75012 system_pods.go:61] "metrics-server-6867b74b74-24lj8" [1e4467c4-301a-4820-ab89-e1f0ba78f62d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:22:58.705883   75012 system_pods.go:61] "storage-provisioner" [38fa420a-4372-41b4-9853-64796baa65d9] Running
	I1204 21:22:58.705888   75012 system_pods.go:74] duration metric: took 5.100414ms to wait for pod list to return data ...
	I1204 21:22:58.705897   75012 default_sa.go:34] waiting for default service account to be created ...
	I1204 21:22:58.708729   75012 default_sa.go:45] found service account: "default"
	I1204 21:22:58.708746   75012 default_sa.go:55] duration metric: took 2.844325ms for default service account to be created ...
	I1204 21:22:58.708753   75012 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 21:22:58.713584   75012 system_pods.go:86] 9 kube-system pods found
	I1204 21:22:58.713605   75012 system_pods.go:89] "coredns-7c65d6cfc9-9llkt" [adc8b2dd-be84-4314-ae3c-cfe94cc78489] Running
	I1204 21:22:58.713610   75012 system_pods.go:89] "coredns-7c65d6cfc9-zq88f" [b4b818bf-71d4-4522-8d3f-15c878eb7e37] Running
	I1204 21:22:58.713614   75012 system_pods.go:89] "etcd-no-preload-534766" [dfebd8ce-bf78-4219-a860-7e0275651a27] Running
	I1204 21:22:58.713617   75012 system_pods.go:89] "kube-apiserver-no-preload-534766" [6d8632fe-4a7d-48f0-9de5-bbc8efa027cd] Running
	I1204 21:22:58.713623   75012 system_pods.go:89] "kube-controller-manager-no-preload-534766" [1fcb311c-17ee-40ab-8126-3f9aeb565c23] Running
	I1204 21:22:58.713627   75012 system_pods.go:89] "kube-proxy-z2n69" [ea030ab5-1808-4037-b153-e751d66f3882] Running
	I1204 21:22:58.713630   75012 system_pods.go:89] "kube-scheduler-no-preload-534766" [ee51023a-795d-49f9-ae03-535038decf43] Running
	I1204 21:22:58.713636   75012 system_pods.go:89] "metrics-server-6867b74b74-24lj8" [1e4467c4-301a-4820-ab89-e1f0ba78f62d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:22:58.713640   75012 system_pods.go:89] "storage-provisioner" [38fa420a-4372-41b4-9853-64796baa65d9] Running
	I1204 21:22:58.713649   75012 system_pods.go:126] duration metric: took 4.892413ms to wait for k8s-apps to be running ...
	I1204 21:22:58.713655   75012 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 21:22:58.713694   75012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:22:58.727642   75012 system_svc.go:56] duration metric: took 13.980011ms WaitForService to wait for kubelet
	I1204 21:22:58.727667   75012 kubeadm.go:582] duration metric: took 8.80137456s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 21:22:58.727683   75012 node_conditions.go:102] verifying NodePressure condition ...
	I1204 21:22:58.730401   75012 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 21:22:58.730424   75012 node_conditions.go:123] node cpu capacity is 2
	I1204 21:22:58.730437   75012 node_conditions.go:105] duration metric: took 2.748662ms to run NodePressure ...
	I1204 21:22:58.730450   75012 start.go:241] waiting for startup goroutines ...
	I1204 21:22:58.730460   75012 start.go:246] waiting for cluster config update ...
	I1204 21:22:58.730472   75012 start.go:255] writing updated cluster config ...
	I1204 21:22:58.730773   75012 ssh_runner.go:195] Run: rm -f paused
	I1204 21:22:58.776977   75012 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1204 21:22:58.778544   75012 out.go:177] * Done! kubectl is now configured to use "no-preload-534766" cluster and "default" namespace by default
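The start sequence above amounts to four readiness checks: the apiserver /healthz endpoint, the kube-system pod list, the default service account, and the kubelet service. A minimal sketch of repeating those checks by hand, assuming the "no-preload-534766" profile and kubectl context from this log and that 'kubectl' and 'minikube' are on the host's PATH:

  kubectl --context no-preload-534766 get --raw /healthz                    # expect "ok", as in the healthz poll above
  kubectl --context no-preload-534766 -n kube-system get pods               # the nine kube-system pods listed above
  kubectl --context no-preload-534766 -n default get serviceaccount default # the default service account check
  minikube -p no-preload-534766 ssh -- sudo systemctl is-active kubelet     # expect "active", matching the kubelet wait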
	I1204 21:23:04.631416   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:23:04.631710   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:23:04.631725   75464 kubeadm.go:310] 
	I1204 21:23:04.631799   75464 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1204 21:23:04.631878   75464 kubeadm.go:310] 		timed out waiting for the condition
	I1204 21:23:04.631890   75464 kubeadm.go:310] 
	I1204 21:23:04.631961   75464 kubeadm.go:310] 	This error is likely caused by:
	I1204 21:23:04.632036   75464 kubeadm.go:310] 		- The kubelet is not running
	I1204 21:23:04.632198   75464 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1204 21:23:04.632215   75464 kubeadm.go:310] 
	I1204 21:23:04.632383   75464 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1204 21:23:04.632461   75464 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1204 21:23:04.632516   75464 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1204 21:23:04.632528   75464 kubeadm.go:310] 
	I1204 21:23:04.632675   75464 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1204 21:23:04.632796   75464 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1204 21:23:04.632815   75464 kubeadm.go:310] 
	I1204 21:23:04.632974   75464 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1204 21:23:04.633074   75464 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1204 21:23:04.633176   75464 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1204 21:23:04.633304   75464 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1204 21:23:04.633322   75464 kubeadm.go:310] 
	I1204 21:23:04.634981   75464 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 21:23:04.635061   75464 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1204 21:23:04.635118   75464 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1204 21:23:04.635222   75464 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
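When the [kubelet-check] probe above keeps failing, two things are worth confirming from a shell on the node (reachable with 'minikube ssh' for the failing profile): whether the kubelet healthz endpoint answers at all, and whether the kubelet and CRI-O agree on a cgroup driver. A sketch; the kubelet config path is taken from the log above, while /etc/crio/crio.conf is an assumed location (the setting may also live in a drop-in under /etc/crio/crio.conf.d/):

  curl -sSL http://localhost:10248/healthz            # the endpoint kubeadm polls above
  sudo journalctl -xeu kubelet | tail -n 50            # kubelet logs, as kubeadm suggests
  grep -i cgroupDriver /var/lib/kubelet/config.yaml    # kubelet's cgroup driver, if set here
  grep -i cgroup_manager /etc/crio/crio.conf           # CRI-O's cgroup manager (assumed path)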
	
	I1204 21:23:04.635272   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1204 21:23:05.103010   75464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:23:05.116784   75464 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:23:05.126269   75464 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:23:05.126290   75464 kubeadm.go:157] found existing configuration files:
	
	I1204 21:23:05.126331   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:23:05.134867   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:23:05.134919   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:23:05.143682   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:23:05.151701   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:23:05.151766   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:23:05.160033   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:23:05.168125   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:23:05.168175   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:23:05.176976   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:23:05.185549   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:23:05.185592   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
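The stale-config pass above boils down to one rule: any /etc/kubernetes/*.conf that does not reference https://control-plane.minikube.internal:8443 is removed before the retry. A compact equivalent of that per-file grep/rm sequence (a sketch of what the log performs, not minikube's actual code):

  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
      || sudo rm -f "/etc/kubernetes/$f"    # missing or stale: remove, as above
  done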
	I1204 21:23:05.194156   75464 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 21:23:05.394966   75464 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 21:25:01.433781   75464 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1204 21:25:01.433941   75464 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1204 21:25:01.434011   75464 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1204 21:25:01.434069   75464 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 21:25:01.434170   75464 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 21:25:01.434315   75464 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 21:25:01.434431   75464 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1204 21:25:01.434514   75464 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 21:25:01.436334   75464 out.go:235]   - Generating certificates and keys ...
	I1204 21:25:01.436408   75464 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 21:25:01.436482   75464 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 21:25:01.436550   75464 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1204 21:25:01.436644   75464 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1204 21:25:01.436745   75464 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1204 21:25:01.436819   75464 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1204 21:25:01.436885   75464 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1204 21:25:01.436942   75464 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1204 21:25:01.437004   75464 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1204 21:25:01.437068   75464 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1204 21:25:01.437101   75464 kubeadm.go:310] [certs] Using the existing "sa" key
	I1204 21:25:01.437150   75464 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 21:25:01.437193   75464 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 21:25:01.437239   75464 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 21:25:01.437309   75464 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 21:25:01.437370   75464 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 21:25:01.437458   75464 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 21:25:01.437568   75464 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 21:25:01.437636   75464 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 21:25:01.437701   75464 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 21:25:01.439149   75464 out.go:235]   - Booting up control plane ...
	I1204 21:25:01.439251   75464 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 21:25:01.439347   75464 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 21:25:01.439457   75464 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 21:25:01.439531   75464 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 21:25:01.439672   75464 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1204 21:25:01.439736   75464 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1204 21:25:01.439798   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:25:01.439966   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:25:01.440044   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:25:01.440205   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:25:01.440259   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:25:01.440487   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:25:01.440578   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:25:01.440768   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:25:01.440835   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:25:01.440991   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:25:01.441006   75464 kubeadm.go:310] 
	I1204 21:25:01.441043   75464 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1204 21:25:01.441078   75464 kubeadm.go:310] 		timed out waiting for the condition
	I1204 21:25:01.441084   75464 kubeadm.go:310] 
	I1204 21:25:01.441114   75464 kubeadm.go:310] 	This error is likely caused by:
	I1204 21:25:01.441143   75464 kubeadm.go:310] 		- The kubelet is not running
	I1204 21:25:01.441233   75464 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1204 21:25:01.441242   75464 kubeadm.go:310] 
	I1204 21:25:01.441335   75464 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1204 21:25:01.441369   75464 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1204 21:25:01.441403   75464 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1204 21:25:01.441410   75464 kubeadm.go:310] 
	I1204 21:25:01.441503   75464 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1204 21:25:01.441602   75464 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1204 21:25:01.441610   75464 kubeadm.go:310] 
	I1204 21:25:01.441705   75464 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1204 21:25:01.441779   75464 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1204 21:25:01.441857   75464 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1204 21:25:01.441934   75464 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1204 21:25:01.441961   75464 kubeadm.go:310] 
	I1204 21:25:01.442011   75464 kubeadm.go:394] duration metric: took 8m2.105750462s to StartCluster
	I1204 21:25:01.442050   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:25:01.442119   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:25:01.484552   75464 cri.go:89] found id: ""
	I1204 21:25:01.484582   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.484606   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:25:01.484614   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:25:01.484681   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:25:01.517972   75464 cri.go:89] found id: ""
	I1204 21:25:01.517999   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.518007   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:25:01.518013   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:25:01.518078   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:25:01.555068   75464 cri.go:89] found id: ""
	I1204 21:25:01.555096   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.555104   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:25:01.555110   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:25:01.555163   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:25:01.595425   75464 cri.go:89] found id: ""
	I1204 21:25:01.595456   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.595478   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:25:01.595486   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:25:01.595553   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:25:01.634608   75464 cri.go:89] found id: ""
	I1204 21:25:01.634638   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.634648   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:25:01.634656   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:25:01.634721   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:25:01.668685   75464 cri.go:89] found id: ""
	I1204 21:25:01.668724   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.668737   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:25:01.668746   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:25:01.668810   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:25:01.701497   75464 cri.go:89] found id: ""
	I1204 21:25:01.701531   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.701543   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:25:01.701550   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:25:01.701612   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:25:01.735347   75464 cri.go:89] found id: ""
	I1204 21:25:01.735401   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.735413   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:25:01.735429   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:25:01.735448   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:25:01.785951   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:25:01.785994   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:25:01.800795   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:25:01.800822   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:25:01.878636   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:25:01.878663   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:25:01.878675   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:25:01.982526   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:25:01.982563   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
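The gathering steps above can be reproduced directly on the node; a sketch using the same commands the log runs (kubelet and CRI-O journals, the dmesg filter, crictl), plus the single host-side command that bundles all of it, assuming the minikube CLI is available on the host:

  sudo journalctl -u kubelet -n 400
  sudo journalctl -u crio -n 400
  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
  sudo crictl ps -a
  # or, from the host, collect everything into one file:
  minikube logs --file=logs.txt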
	W1204 21:25:02.037006   75464 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1204 21:25:02.037075   75464 out.go:270] * 
	W1204 21:25:02.037160   75464 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1204 21:25:02.037181   75464 out.go:270] * 
	W1204 21:25:02.038380   75464 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 21:25:02.041871   75464 out.go:201] 
	W1204 21:25:02.042973   75464 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1204 21:25:02.043035   75464 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1204 21:25:02.043065   75464 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1204 21:25:02.044498   75464 out.go:201] 
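The suggestion above translates into one extra flag on the next start attempt; a sketch, keeping the Kubernetes version and container runtime from this run and using <profile> as a placeholder for the failing profile name:

  minikube start -p <profile> \
    --kubernetes-version=v1.20.0 \
    --container-runtime=crio \
    --extra-config=kubelet.cgroup-driver=systemd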
	
	
	==> CRI-O <==
	Dec 04 21:31:30 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:31:30.529564187Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347890529541118,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=57f87a95-430f-4a63-b7bd-1e534177d54f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:31:30 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:31:30.530219538Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2d112318-21e9-4745-b1ca-a81c25ea24c1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:31:30 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:31:30.530273271Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2d112318-21e9-4745-b1ca-a81c25ea24c1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:31:30 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:31:30.530467895Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:af3eab35b327df56d0b9adc9cc015a61fc7208bc3a2a17daa9616744bb06dda4,PodSandboxId:e98dddcdd6df6a1723043e75f83e721b5c770087066f00ee76a708d4e7943533,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733347341089628235,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aac88490-a422-4889-bff4-b180638846cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a7a1c9e3c85a6639f5c80060b2e8bdc36cec8cd9bb901eeb6422027cee9cb9d,PodSandboxId:b4dfa190fa76c60b53052e06887b758c86afa34d5b0d314a142e729955f170c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733347340599032626,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4jmcl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8d193d2-0374-43a5-addd-96cdee963cc9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297685b8e381cf79cdfb5b72fbd7255b3be356206edab75a1d7b64b3e623876f,PodSandboxId:5d685138375a1126e356d5eff0828a35c11abe804626d57bdb7c83beab274604,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733347340425692275,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tzhgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: aafae17b-5a47-4a70-bc80-94cbbca8fe38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdc56ecdf83e343318645562bd40f06c9f9227a2f2602338236ae686ab4dded6,PodSandboxId:7735f3c0adf975bd2acd1314e1a07b2beaafb804dcd939e0e3ce492c40aba1ba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1733347339465427901,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hclwt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eef6c093-2186-437b-9a13-c8bafbcb4f78,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c7302ea43e0267d253344635d773d37674ec7f2e4ba6a8dca72d9587a2d6509,PodSandboxId:d83f071b99d12b587b4c154184e04074777a233d8364e3a2a3a469d706e661db,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733347328647484457,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-439360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b844ee0f7c72991de7f25ef1127420f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64491d8a2a16510237d719e522c5e4524c22bb1a2ecfa263012dc120a3972ddf,PodSandboxId:6b1ae26a6157ab1e9d4f07bb844ed79c5a36298e4868f9f9841d1df13ca38a4a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733347328611552822,Labels:map[string]str
ing{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-439360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6619af53a575347ee4090aa09ff02577,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00b75c8d0ab80b666442d06e93bbf812ef4957999d26b968b9f4a2d10e74d617,PodSandboxId:47f4c37838aa1e38ab470f7d42cecd139b1481b5e0245991cc33e6ab5c143b81,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733347328555272713,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-439360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e08b7ba36a6756c31ffcb3d2a3e57be,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8779528fd3a8e48736f80b3447158737acd31f3a41b1df1731450f7833f3130b,PodSandboxId:07bb53eab60e2439c9149e8d42131068a84a531573873eac7fd7d8b26962d9e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733347328544280300,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-439360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 333e66bdb021280ce494c1aae508f5e6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb96ebc2d974c72758d07c7875d98a752533ced5eb54b1c7f4ca0c53095be5aa,PodSandboxId:5d3d425a253a991f07d63e416babb3d01f15be3ca34e0e782076be57a2b7329c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733347040943549942,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-439360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 333e66bdb021280ce494c1aae508f5e6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2d112318-21e9-4745-b1ca-a81c25ea24c1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:31:30 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:31:30.566530175Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=03c8bde9-ddd0-4734-a9a1-5086303e8b5d name=/runtime.v1.RuntimeService/Version
	Dec 04 21:31:30 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:31:30.566623456Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=03c8bde9-ddd0-4734-a9a1-5086303e8b5d name=/runtime.v1.RuntimeService/Version
	Dec 04 21:31:30 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:31:30.567910353Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d3cbebd6-574d-4498-96fe-1f66f63ed118 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:31:30 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:31:30.568579396Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347890568555500,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d3cbebd6-574d-4498-96fe-1f66f63ed118 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:31:30 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:31:30.569184885Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f1eb28c1-2bc7-48b6-aefd-dd6d32d299c0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:31:30 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:31:30.569318916Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f1eb28c1-2bc7-48b6-aefd-dd6d32d299c0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:31:30 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:31:30.569541214Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:af3eab35b327df56d0b9adc9cc015a61fc7208bc3a2a17daa9616744bb06dda4,PodSandboxId:e98dddcdd6df6a1723043e75f83e721b5c770087066f00ee76a708d4e7943533,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733347341089628235,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aac88490-a422-4889-bff4-b180638846cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a7a1c9e3c85a6639f5c80060b2e8bdc36cec8cd9bb901eeb6422027cee9cb9d,PodSandboxId:b4dfa190fa76c60b53052e06887b758c86afa34d5b0d314a142e729955f170c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733347340599032626,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4jmcl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8d193d2-0374-43a5-addd-96cdee963cc9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297685b8e381cf79cdfb5b72fbd7255b3be356206edab75a1d7b64b3e623876f,PodSandboxId:5d685138375a1126e356d5eff0828a35c11abe804626d57bdb7c83beab274604,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733347340425692275,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tzhgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: aafae17b-5a47-4a70-bc80-94cbbca8fe38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdc56ecdf83e343318645562bd40f06c9f9227a2f2602338236ae686ab4dded6,PodSandboxId:7735f3c0adf975bd2acd1314e1a07b2beaafb804dcd939e0e3ce492c40aba1ba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1733347339465427901,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hclwt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eef6c093-2186-437b-9a13-c8bafbcb4f78,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c7302ea43e0267d253344635d773d37674ec7f2e4ba6a8dca72d9587a2d6509,PodSandboxId:d83f071b99d12b587b4c154184e04074777a233d8364e3a2a3a469d706e661db,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733347328647484457,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-439360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b844ee0f7c72991de7f25ef1127420f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64491d8a2a16510237d719e522c5e4524c22bb1a2ecfa263012dc120a3972ddf,PodSandboxId:6b1ae26a6157ab1e9d4f07bb844ed79c5a36298e4868f9f9841d1df13ca38a4a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733347328611552822,Labels:map[string]str
ing{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-439360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6619af53a575347ee4090aa09ff02577,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00b75c8d0ab80b666442d06e93bbf812ef4957999d26b968b9f4a2d10e74d617,PodSandboxId:47f4c37838aa1e38ab470f7d42cecd139b1481b5e0245991cc33e6ab5c143b81,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733347328555272713,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-439360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e08b7ba36a6756c31ffcb3d2a3e57be,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8779528fd3a8e48736f80b3447158737acd31f3a41b1df1731450f7833f3130b,PodSandboxId:07bb53eab60e2439c9149e8d42131068a84a531573873eac7fd7d8b26962d9e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733347328544280300,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-439360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 333e66bdb021280ce494c1aae508f5e6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb96ebc2d974c72758d07c7875d98a752533ced5eb54b1c7f4ca0c53095be5aa,PodSandboxId:5d3d425a253a991f07d63e416babb3d01f15be3ca34e0e782076be57a2b7329c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733347040943549942,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-439360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 333e66bdb021280ce494c1aae508f5e6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f1eb28c1-2bc7-48b6-aefd-dd6d32d299c0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:31:30 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:31:30.605840060Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4c935559-3a3b-4570-bf97-7eaa36cc3abb name=/runtime.v1.RuntimeService/Version
	Dec 04 21:31:30 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:31:30.606048666Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4c935559-3a3b-4570-bf97-7eaa36cc3abb name=/runtime.v1.RuntimeService/Version
	Dec 04 21:31:30 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:31:30.607134605Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8f8dffe0-0e05-4938-9dc6-c806cb5780c7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:31:30 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:31:30.607823574Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347890607796486,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8f8dffe0-0e05-4938-9dc6-c806cb5780c7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:31:30 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:31:30.608523532Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=09c8b9f0-9a3f-475f-9573-986a1f42b953 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:31:30 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:31:30.608619347Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=09c8b9f0-9a3f-475f-9573-986a1f42b953 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:31:30 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:31:30.608816204Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:af3eab35b327df56d0b9adc9cc015a61fc7208bc3a2a17daa9616744bb06dda4,PodSandboxId:e98dddcdd6df6a1723043e75f83e721b5c770087066f00ee76a708d4e7943533,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733347341089628235,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aac88490-a422-4889-bff4-b180638846cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a7a1c9e3c85a6639f5c80060b2e8bdc36cec8cd9bb901eeb6422027cee9cb9d,PodSandboxId:b4dfa190fa76c60b53052e06887b758c86afa34d5b0d314a142e729955f170c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733347340599032626,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4jmcl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8d193d2-0374-43a5-addd-96cdee963cc9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297685b8e381cf79cdfb5b72fbd7255b3be356206edab75a1d7b64b3e623876f,PodSandboxId:5d685138375a1126e356d5eff0828a35c11abe804626d57bdb7c83beab274604,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733347340425692275,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tzhgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: aafae17b-5a47-4a70-bc80-94cbbca8fe38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdc56ecdf83e343318645562bd40f06c9f9227a2f2602338236ae686ab4dded6,PodSandboxId:7735f3c0adf975bd2acd1314e1a07b2beaafb804dcd939e0e3ce492c40aba1ba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1733347339465427901,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hclwt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eef6c093-2186-437b-9a13-c8bafbcb4f78,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c7302ea43e0267d253344635d773d37674ec7f2e4ba6a8dca72d9587a2d6509,PodSandboxId:d83f071b99d12b587b4c154184e04074777a233d8364e3a2a3a469d706e661db,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733347328647484457,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-439360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b844ee0f7c72991de7f25ef1127420f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64491d8a2a16510237d719e522c5e4524c22bb1a2ecfa263012dc120a3972ddf,PodSandboxId:6b1ae26a6157ab1e9d4f07bb844ed79c5a36298e4868f9f9841d1df13ca38a4a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733347328611552822,Labels:map[string]str
ing{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-439360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6619af53a575347ee4090aa09ff02577,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00b75c8d0ab80b666442d06e93bbf812ef4957999d26b968b9f4a2d10e74d617,PodSandboxId:47f4c37838aa1e38ab470f7d42cecd139b1481b5e0245991cc33e6ab5c143b81,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733347328555272713,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-439360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e08b7ba36a6756c31ffcb3d2a3e57be,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8779528fd3a8e48736f80b3447158737acd31f3a41b1df1731450f7833f3130b,PodSandboxId:07bb53eab60e2439c9149e8d42131068a84a531573873eac7fd7d8b26962d9e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733347328544280300,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-439360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 333e66bdb021280ce494c1aae508f5e6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb96ebc2d974c72758d07c7875d98a752533ced5eb54b1c7f4ca0c53095be5aa,PodSandboxId:5d3d425a253a991f07d63e416babb3d01f15be3ca34e0e782076be57a2b7329c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733347040943549942,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-439360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 333e66bdb021280ce494c1aae508f5e6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=09c8b9f0-9a3f-475f-9573-986a1f42b953 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:31:30 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:31:30.638834291Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=11b2752f-fb38-4508-a6cf-5c67f9fa8153 name=/runtime.v1.RuntimeService/Version
	Dec 04 21:31:30 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:31:30.638920709Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=11b2752f-fb38-4508-a6cf-5c67f9fa8153 name=/runtime.v1.RuntimeService/Version
	Dec 04 21:31:30 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:31:30.640029324Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=63d4e464-68fd-42f7-b9b7-b4491df41cbc name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:31:30 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:31:30.640464392Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347890640440057,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=63d4e464-68fd-42f7-b9b7-b4491df41cbc name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:31:30 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:31:30.640979139Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=22fbb725-ace0-429e-ad47-db0222dc9e05 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:31:30 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:31:30.641047353Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=22fbb725-ace0-429e-ad47-db0222dc9e05 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:31:30 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:31:30.641337256Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:af3eab35b327df56d0b9adc9cc015a61fc7208bc3a2a17daa9616744bb06dda4,PodSandboxId:e98dddcdd6df6a1723043e75f83e721b5c770087066f00ee76a708d4e7943533,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733347341089628235,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aac88490-a422-4889-bff4-b180638846cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a7a1c9e3c85a6639f5c80060b2e8bdc36cec8cd9bb901eeb6422027cee9cb9d,PodSandboxId:b4dfa190fa76c60b53052e06887b758c86afa34d5b0d314a142e729955f170c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733347340599032626,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4jmcl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8d193d2-0374-43a5-addd-96cdee963cc9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297685b8e381cf79cdfb5b72fbd7255b3be356206edab75a1d7b64b3e623876f,PodSandboxId:5d685138375a1126e356d5eff0828a35c11abe804626d57bdb7c83beab274604,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733347340425692275,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tzhgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: aafae17b-5a47-4a70-bc80-94cbbca8fe38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdc56ecdf83e343318645562bd40f06c9f9227a2f2602338236ae686ab4dded6,PodSandboxId:7735f3c0adf975bd2acd1314e1a07b2beaafb804dcd939e0e3ce492c40aba1ba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1733347339465427901,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hclwt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eef6c093-2186-437b-9a13-c8bafbcb4f78,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c7302ea43e0267d253344635d773d37674ec7f2e4ba6a8dca72d9587a2d6509,PodSandboxId:d83f071b99d12b587b4c154184e04074777a233d8364e3a2a3a469d706e661db,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733347328647484457,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-439360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b844ee0f7c72991de7f25ef1127420f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64491d8a2a16510237d719e522c5e4524c22bb1a2ecfa263012dc120a3972ddf,PodSandboxId:6b1ae26a6157ab1e9d4f07bb844ed79c5a36298e4868f9f9841d1df13ca38a4a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733347328611552822,Labels:map[string]str
ing{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-439360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6619af53a575347ee4090aa09ff02577,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00b75c8d0ab80b666442d06e93bbf812ef4957999d26b968b9f4a2d10e74d617,PodSandboxId:47f4c37838aa1e38ab470f7d42cecd139b1481b5e0245991cc33e6ab5c143b81,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733347328555272713,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-439360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e08b7ba36a6756c31ffcb3d2a3e57be,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8779528fd3a8e48736f80b3447158737acd31f3a41b1df1731450f7833f3130b,PodSandboxId:07bb53eab60e2439c9149e8d42131068a84a531573873eac7fd7d8b26962d9e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733347328544280300,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-439360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 333e66bdb021280ce494c1aae508f5e6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb96ebc2d974c72758d07c7875d98a752533ced5eb54b1c7f4ca0c53095be5aa,PodSandboxId:5d3d425a253a991f07d63e416babb3d01f15be3ca34e0e782076be57a2b7329c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733347040943549942,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-439360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 333e66bdb021280ce494c1aae508f5e6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=22fbb725-ace0-429e-ad47-db0222dc9e05 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	af3eab35b327d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   e98dddcdd6df6       storage-provisioner
	2a7a1c9e3c85a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   b4dfa190fa76c       coredns-7c65d6cfc9-4jmcl
	297685b8e381c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   5d685138375a1       coredns-7c65d6cfc9-tzhgh
	bdc56ecdf83e3       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   9 minutes ago       Running             kube-proxy                0                   7735f3c0adf97       kube-proxy-hclwt
	4c7302ea43e02       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   d83f071b99d12       etcd-default-k8s-diff-port-439360
	64491d8a2a165       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   9 minutes ago       Running             kube-controller-manager   2                   6b1ae26a6157a       kube-controller-manager-default-k8s-diff-port-439360
	00b75c8d0ab80       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   9 minutes ago       Running             kube-scheduler            2                   47f4c37838aa1       kube-scheduler-default-k8s-diff-port-439360
	8779528fd3a8e       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   9 minutes ago       Running             kube-apiserver            2                   07bb53eab60e2       kube-apiserver-default-k8s-diff-port-439360
	fb96ebc2d974c       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   14 minutes ago      Exited              kube-apiserver            1                   5d3d425a253a9       kube-apiserver-default-k8s-diff-port-439360
	
	
	==> coredns [297685b8e381cf79cdfb5b72fbd7255b3be356206edab75a1d7b64b3e623876f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [2a7a1c9e3c85a6639f5c80060b2e8bdc36cec8cd9bb901eeb6422027cee9cb9d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-439360
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-439360
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59
	                    minikube.k8s.io/name=default-k8s-diff-port-439360
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_04T21_22_14_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 21:22:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-439360
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 21:31:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 21:27:30 +0000   Wed, 04 Dec 2024 21:22:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 21:27:30 +0000   Wed, 04 Dec 2024 21:22:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 21:27:30 +0000   Wed, 04 Dec 2024 21:22:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 21:27:30 +0000   Wed, 04 Dec 2024 21:22:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.171
	  Hostname:    default-k8s-diff-port-439360
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 92c5859abe734eb49a48473826e74840
	  System UUID:                92c5859a-be73-4eb4-9a48-473826e74840
	  Boot ID:                    160de329-24a2-43ba-a321-6907754d7911
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-4jmcl                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m11s
	  kube-system                 coredns-7c65d6cfc9-tzhgh                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m11s
	  kube-system                 etcd-default-k8s-diff-port-439360                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m17s
	  kube-system                 kube-apiserver-default-k8s-diff-port-439360             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-439360    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-proxy-hclwt                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                 kube-scheduler-default-k8s-diff-port-439360             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 metrics-server-6867b74b74-v88hj                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m10s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m10s  kube-proxy       
	  Normal  Starting                 9m17s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m17s  kubelet          Node default-k8s-diff-port-439360 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m17s  kubelet          Node default-k8s-diff-port-439360 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m17s  kubelet          Node default-k8s-diff-port-439360 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m12s  node-controller  Node default-k8s-diff-port-439360 event: Registered Node default-k8s-diff-port-439360 in Controller
	
	
	==> dmesg <==
	[  +0.056090] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041127] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Dec 4 21:17] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.003892] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.635306] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.140763] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.066784] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.076258] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.213791] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +0.112861] systemd-fstab-generator[682]: Ignoring "noauto" option for root device
	[  +0.292620] systemd-fstab-generator[712]: Ignoring "noauto" option for root device
	[  +4.080350] systemd-fstab-generator[803]: Ignoring "noauto" option for root device
	[  +1.613168] systemd-fstab-generator[923]: Ignoring "noauto" option for root device
	[  +0.067445] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.540578] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.969077] kauditd_printk_skb: 85 callbacks suppressed
	[Dec 4 21:22] systemd-fstab-generator[2634]: Ignoring "noauto" option for root device
	[  +0.077657] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.999173] systemd-fstab-generator[2952]: Ignoring "noauto" option for root device
	[  +0.101739] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.800897] systemd-fstab-generator[3075]: Ignoring "noauto" option for root device
	[  +0.085626] kauditd_printk_skb: 12 callbacks suppressed
	[Dec 4 21:23] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [4c7302ea43e0267d253344635d773d37674ec7f2e4ba6a8dca72d9587a2d6509] <==
	{"level":"info","ts":"2024-12-04T21:22:08.993210Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.171:2380"}
	{"level":"info","ts":"2024-12-04T21:22:08.999208Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.171:2380"}
	{"level":"info","ts":"2024-12-04T21:22:08.999284Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-04T21:22:09.002608Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"a784f2475f6ae727","initial-advertise-peer-urls":["https://192.168.50.171:2380"],"listen-peer-urls":["https://192.168.50.171:2380"],"advertise-client-urls":["https://192.168.50.171:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.171:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-04T21:22:09.002689Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-04T21:22:09.234223Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a784f2475f6ae727 is starting a new election at term 1"}
	{"level":"info","ts":"2024-12-04T21:22:09.234363Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a784f2475f6ae727 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-12-04T21:22:09.234410Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a784f2475f6ae727 received MsgPreVoteResp from a784f2475f6ae727 at term 1"}
	{"level":"info","ts":"2024-12-04T21:22:09.234477Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a784f2475f6ae727 became candidate at term 2"}
	{"level":"info","ts":"2024-12-04T21:22:09.234559Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a784f2475f6ae727 received MsgVoteResp from a784f2475f6ae727 at term 2"}
	{"level":"info","ts":"2024-12-04T21:22:09.234589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a784f2475f6ae727 became leader at term 2"}
	{"level":"info","ts":"2024-12-04T21:22:09.234658Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a784f2475f6ae727 elected leader a784f2475f6ae727 at term 2"}
	{"level":"info","ts":"2024-12-04T21:22:09.238357Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"a784f2475f6ae727","local-member-attributes":"{Name:default-k8s-diff-port-439360 ClientURLs:[https://192.168.50.171:2379]}","request-path":"/0/members/a784f2475f6ae727/attributes","cluster-id":"d60dacabf64a723e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-04T21:22:09.238464Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-04T21:22:09.240533Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-04T21:22:09.243080Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-04T21:22:09.245386Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.171:2379"}
	{"level":"info","ts":"2024-12-04T21:22:09.247508Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-04T21:22:09.249265Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d60dacabf64a723e","local-member-id":"a784f2475f6ae727","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-04T21:22:09.249589Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-04T21:22:09.249652Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-04T21:22:09.250203Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-04T21:22:09.252401Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-04T21:22:09.268204Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-04T21:22:09.270199Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 21:31:31 up 14 min,  0 users,  load average: 0.25, 0.19, 0.18
	Linux default-k8s-diff-port-439360 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8779528fd3a8e48736f80b3447158737acd31f3a41b1df1731450f7833f3130b] <==
	W1204 21:27:12.269619       1 handler_proxy.go:99] no RequestInfo found in the context
	E1204 21:27:12.269694       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1204 21:27:12.270656       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1204 21:27:12.271851       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1204 21:28:12.271521       1 handler_proxy.go:99] no RequestInfo found in the context
	E1204 21:28:12.271645       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1204 21:28:12.272797       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1204 21:28:12.272893       1 handler_proxy.go:99] no RequestInfo found in the context
	E1204 21:28:12.272970       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1204 21:28:12.274254       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1204 21:30:12.273586       1 handler_proxy.go:99] no RequestInfo found in the context
	E1204 21:30:12.274025       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1204 21:30:12.274808       1 handler_proxy.go:99] no RequestInfo found in the context
	E1204 21:30:12.274962       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1204 21:30:12.275937       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1204 21:30:12.276026       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [fb96ebc2d974c72758d07c7875d98a752533ced5eb54b1c7f4ca0c53095be5aa] <==
	W1204 21:22:01.234017       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:01.252515       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:01.253855       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:01.337858       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:01.363545       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:01.383740       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:01.396651       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:01.412320       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:01.453990       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:01.467137       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:01.503860       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:01.643814       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:01.679843       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:01.682387       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:01.702494       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:01.782705       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:01.828876       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:02.036317       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:05.434081       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:05.634515       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:05.781636       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:05.798424       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:05.938879       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:06.036918       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:06.232884       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [64491d8a2a16510237d719e522c5e4524c22bb1a2ecfa263012dc120a3972ddf] <==
	E1204 21:26:18.245200       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:26:18.782674       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:26:48.252322       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:26:48.790498       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:27:18.260066       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:27:18.798445       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1204 21:27:30.314934       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-439360"
	E1204 21:27:48.267015       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:27:48.805655       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:28:18.273690       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:28:18.813001       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1204 21:28:22.742205       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="288.663µs"
	I1204 21:28:33.749368       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="127.79µs"
	E1204 21:28:48.280702       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:28:48.822932       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:29:18.289411       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:29:18.832122       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:29:48.296909       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:29:48.840804       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:30:18.302931       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:30:18.848960       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:30:48.309796       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:30:48.858590       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:31:18.317663       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:31:18.866317       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [bdc56ecdf83e343318645562bd40f06c9f9227a2f2602338236ae686ab4dded6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1204 21:22:19.871201       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1204 21:22:19.913719       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.171"]
	E1204 21:22:19.913806       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1204 21:22:19.976137       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1204 21:22:19.976232       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1204 21:22:19.976267       1 server_linux.go:169] "Using iptables Proxier"
	I1204 21:22:19.981289       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1204 21:22:19.981540       1 server.go:483] "Version info" version="v1.31.2"
	I1204 21:22:19.981563       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 21:22:19.982878       1 config.go:199] "Starting service config controller"
	I1204 21:22:19.982920       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1204 21:22:19.982948       1 config.go:105] "Starting endpoint slice config controller"
	I1204 21:22:19.982951       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1204 21:22:19.983433       1 config.go:328] "Starting node config controller"
	I1204 21:22:19.983444       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1204 21:22:20.083234       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1204 21:22:20.083251       1 shared_informer.go:320] Caches are synced for service config
	I1204 21:22:20.083497       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [00b75c8d0ab80b666442d06e93bbf812ef4957999d26b968b9f4a2d10e74d617] <==
	E1204 21:22:11.338485       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 21:22:11.338398       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1204 21:22:11.338500       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E1204 21:22:11.338550       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1204 21:22:11.338572       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1204 21:22:11.338651       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 21:22:11.338818       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1204 21:22:11.338907       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 21:22:12.166356       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1204 21:22:12.166406       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 21:22:12.175786       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1204 21:22:12.175839       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 21:22:12.211784       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1204 21:22:12.211837       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1204 21:22:12.220728       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1204 21:22:12.220907       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 21:22:12.393195       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1204 21:22:12.393375       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1204 21:22:12.404830       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1204 21:22:12.404947       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 21:22:12.448613       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1204 21:22:12.448736       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 21:22:12.789031       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1204 21:22:12.789121       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1204 21:22:15.431843       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 04 21:30:22 default-k8s-diff-port-439360 kubelet[2959]: E1204 21:30:22.727108    2959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-v88hj" podUID="9b6c696c-e110-4d53-98c9-41069407b45b"
	Dec 04 21:30:23 default-k8s-diff-port-439360 kubelet[2959]: E1204 21:30:23.917311    2959 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347823913689688,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:30:23 default-k8s-diff-port-439360 kubelet[2959]: E1204 21:30:23.917812    2959 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347823913689688,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:30:33 default-k8s-diff-port-439360 kubelet[2959]: E1204 21:30:33.920125    2959 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347833919800460,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:30:33 default-k8s-diff-port-439360 kubelet[2959]: E1204 21:30:33.920200    2959 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347833919800460,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:30:35 default-k8s-diff-port-439360 kubelet[2959]: E1204 21:30:35.728683    2959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-v88hj" podUID="9b6c696c-e110-4d53-98c9-41069407b45b"
	Dec 04 21:30:43 default-k8s-diff-port-439360 kubelet[2959]: E1204 21:30:43.922121    2959 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347843921782050,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:30:43 default-k8s-diff-port-439360 kubelet[2959]: E1204 21:30:43.922571    2959 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347843921782050,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:30:49 default-k8s-diff-port-439360 kubelet[2959]: E1204 21:30:49.727860    2959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-v88hj" podUID="9b6c696c-e110-4d53-98c9-41069407b45b"
	Dec 04 21:30:53 default-k8s-diff-port-439360 kubelet[2959]: E1204 21:30:53.925020    2959 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347853924538099,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:30:53 default-k8s-diff-port-439360 kubelet[2959]: E1204 21:30:53.925356    2959 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347853924538099,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:31:00 default-k8s-diff-port-439360 kubelet[2959]: E1204 21:31:00.728112    2959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-v88hj" podUID="9b6c696c-e110-4d53-98c9-41069407b45b"
	Dec 04 21:31:03 default-k8s-diff-port-439360 kubelet[2959]: E1204 21:31:03.927131    2959 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347863926584160,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:31:03 default-k8s-diff-port-439360 kubelet[2959]: E1204 21:31:03.927567    2959 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347863926584160,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:31:11 default-k8s-diff-port-439360 kubelet[2959]: E1204 21:31:11.727633    2959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-v88hj" podUID="9b6c696c-e110-4d53-98c9-41069407b45b"
	Dec 04 21:31:13 default-k8s-diff-port-439360 kubelet[2959]: E1204 21:31:13.780833    2959 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 04 21:31:13 default-k8s-diff-port-439360 kubelet[2959]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 04 21:31:13 default-k8s-diff-port-439360 kubelet[2959]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 04 21:31:13 default-k8s-diff-port-439360 kubelet[2959]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 04 21:31:13 default-k8s-diff-port-439360 kubelet[2959]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 04 21:31:13 default-k8s-diff-port-439360 kubelet[2959]: E1204 21:31:13.930543    2959 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347873929733715,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:31:13 default-k8s-diff-port-439360 kubelet[2959]: E1204 21:31:13.930616    2959 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347873929733715,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:31:23 default-k8s-diff-port-439360 kubelet[2959]: E1204 21:31:23.727985    2959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-v88hj" podUID="9b6c696c-e110-4d53-98c9-41069407b45b"
	Dec 04 21:31:23 default-k8s-diff-port-439360 kubelet[2959]: E1204 21:31:23.932080    2959 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347883931705071,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:31:23 default-k8s-diff-port-439360 kubelet[2959]: E1204 21:31:23.932130    2959 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347883931705071,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [af3eab35b327df56d0b9adc9cc015a61fc7208bc3a2a17daa9616744bb06dda4] <==
	I1204 21:22:21.203588       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1204 21:22:21.222683       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1204 21:22:21.222752       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1204 21:22:21.242769       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1204 21:22:21.242935       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-439360_1bb6cbbc-d21a-4bd3-a82d-d9cedbb2e283!
	I1204 21:22:21.244328       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"36659ab1-e91c-46ee-9596-ccf7a2652af3", APIVersion:"v1", ResourceVersion:"398", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-439360_1bb6cbbc-d21a-4bd3-a82d-d9cedbb2e283 became leader
	I1204 21:22:21.343729       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-439360_1bb6cbbc-d21a-4bd3-a82d-d9cedbb2e283!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-439360 -n default-k8s-diff-port-439360
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-439360 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-v88hj
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-439360 describe pod metrics-server-6867b74b74-v88hj
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-439360 describe pod metrics-server-6867b74b74-v88hj: exit status 1 (72.046364ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-v88hj" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-439360 describe pod metrics-server-6867b74b74-v88hj: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.26s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1204 21:23:29.011145   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/custom-flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:24:03.817321   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/auto-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:24:38.216548   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kindnet-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:24:52.075989   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/custom-flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:24:52.903255   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-534766 -n no-preload-534766
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-12-04 21:31:59.304581089 +0000 UTC m=+5968.404309509
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
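For context, the check that times out here is a wait on the dashboard pods by label; a minimal manual reproduction (assuming, as the test selector and namespace above imply, that the dashboard addon deploys pods labelled k8s-app=kubernetes-dashboard into the kubernetes-dashboard namespace) would be:

	kubectl --context no-preload-534766 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context no-preload-534766 -n kubernetes-dashboard wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m0s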
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-534766 -n no-preload-534766
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-534766 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-534766 logs -n 25: (1.840065794s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-272234 sudo                                  | bridge-272234                | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo                                  | bridge-272234                | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo find                             | bridge-272234                | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo crio                             | bridge-272234                | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-272234                                       | bridge-272234                | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	| start   | -p embed-certs-566991                                  | embed-certs-566991           | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p pause-998149                                        | pause-998149                 | jenkins | v1.34.0 | 04 Dec 24 21:08 UTC | 04 Dec 24 21:08 UTC |
	| delete  | -p                                                     | disable-driver-mounts-455559 | jenkins | v1.34.0 | 04 Dec 24 21:08 UTC | 04 Dec 24 21:08 UTC |
	|         | disable-driver-mounts-455559                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-439360 | jenkins | v1.34.0 | 04 Dec 24 21:08 UTC | 04 Dec 24 21:10 UTC |
	|         | default-k8s-diff-port-439360                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-534766             | no-preload-534766            | jenkins | v1.34.0 | 04 Dec 24 21:08 UTC | 04 Dec 24 21:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-534766                                   | no-preload-534766            | jenkins | v1.34.0 | 04 Dec 24 21:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-566991            | embed-certs-566991           | jenkins | v1.34.0 | 04 Dec 24 21:09 UTC | 04 Dec 24 21:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-566991                                  | embed-certs-566991           | jenkins | v1.34.0 | 04 Dec 24 21:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-439360  | default-k8s-diff-port-439360 | jenkins | v1.34.0 | 04 Dec 24 21:10 UTC | 04 Dec 24 21:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-439360 | jenkins | v1.34.0 | 04 Dec 24 21:10 UTC |                     |
	|         | default-k8s-diff-port-439360                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-082859        | old-k8s-version-082859       | jenkins | v1.34.0 | 04 Dec 24 21:10 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-534766                  | no-preload-534766            | jenkins | v1.34.0 | 04 Dec 24 21:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-534766                                   | no-preload-534766            | jenkins | v1.34.0 | 04 Dec 24 21:11 UTC | 04 Dec 24 21:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-566991                 | embed-certs-566991           | jenkins | v1.34.0 | 04 Dec 24 21:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-566991                                  | embed-certs-566991           | jenkins | v1.34.0 | 04 Dec 24 21:11 UTC | 04 Dec 24 21:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-082859                              | old-k8s-version-082859       | jenkins | v1.34.0 | 04 Dec 24 21:12 UTC | 04 Dec 24 21:12 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-082859             | old-k8s-version-082859       | jenkins | v1.34.0 | 04 Dec 24 21:12 UTC | 04 Dec 24 21:12 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-082859                              | old-k8s-version-082859       | jenkins | v1.34.0 | 04 Dec 24 21:12 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-439360       | default-k8s-diff-port-439360 | jenkins | v1.34.0 | 04 Dec 24 21:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-439360 | jenkins | v1.34.0 | 04 Dec 24 21:13 UTC | 04 Dec 24 21:22 UTC |
	|         | default-k8s-diff-port-439360                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 21:13:02
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 21:13:02.655619   75746 out.go:345] Setting OutFile to fd 1 ...
	I1204 21:13:02.655710   75746 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 21:13:02.655718   75746 out.go:358] Setting ErrFile to fd 2...
	I1204 21:13:02.655723   75746 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 21:13:02.655904   75746 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19985-10581/.minikube/bin
	I1204 21:13:02.656414   75746 out.go:352] Setting JSON to false
	I1204 21:13:02.657264   75746 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6933,"bootTime":1733339850,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 21:13:02.657344   75746 start.go:139] virtualization: kvm guest
	I1204 21:13:02.659898   75746 out.go:177] * [default-k8s-diff-port-439360] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1204 21:13:02.661012   75746 notify.go:220] Checking for updates...
	I1204 21:13:02.661028   75746 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 21:13:02.662162   75746 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 21:13:02.663271   75746 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:13:02.664514   75746 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 21:13:02.665529   75746 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1204 21:13:02.666701   75746 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 21:13:02.668263   75746 config.go:182] Loaded profile config "default-k8s-diff-port-439360": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:13:02.668646   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:13:02.668709   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:13:02.683257   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37479
	I1204 21:13:02.683722   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:13:02.684324   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:13:02.684360   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:13:02.684680   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:13:02.684851   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:13:02.685048   75746 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 21:13:02.685299   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:13:02.685328   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:13:02.699267   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40025
	I1204 21:13:02.699662   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:13:02.700044   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:13:02.700063   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:13:02.700339   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:13:02.700502   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:13:02.730706   75746 out.go:177] * Using the kvm2 driver based on existing profile
	I1204 21:13:02.731942   75746 start.go:297] selected driver: kvm2
	I1204 21:13:02.731957   75746 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-439360 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-439360 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.171 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:13:02.732071   75746 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 21:13:02.732753   75746 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 21:13:02.732853   75746 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19985-10581/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1204 21:13:02.748280   75746 install.go:137] /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1204 21:13:02.748697   75746 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 21:13:02.748732   75746 cni.go:84] Creating CNI manager for ""
	I1204 21:13:02.748788   75746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:13:02.748838   75746 start.go:340] cluster config:
	{Name:default-k8s-diff-port-439360 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-439360 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.171 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:13:02.748971   75746 iso.go:125] acquiring lock: {Name:mk5fb0f3f6da76e6cd812291a551e1592ef2c232 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 21:13:02.751358   75746 out.go:177] * Starting "default-k8s-diff-port-439360" primary control-plane node in "default-k8s-diff-port-439360" cluster
	I1204 21:13:03.539616   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:02.752513   75746 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 21:13:02.752549   75746 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1204 21:13:02.752560   75746 cache.go:56] Caching tarball of preloaded images
	I1204 21:13:02.752626   75746 preload.go:172] Found /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 21:13:02.752637   75746 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 21:13:02.752726   75746 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/config.json ...
	I1204 21:13:02.752901   75746 start.go:360] acquireMachinesLock for default-k8s-diff-port-439360: {Name:mkf124e8b45170ae95981b24944344de6899c5b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 21:13:09.623601   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:12.691589   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:18.771784   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:21.843699   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:27.923631   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:30.995665   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:37.075628   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:40.147824   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:46.227603   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:49.299635   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:55.379675   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:58.451727   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:04.531657   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:07.603570   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:13.683599   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:16.755604   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:22.835628   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:25.907600   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:31.987633   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:35.059714   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:41.139700   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:44.211695   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:50.291687   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:53.363678   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:59.443630   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:02.515651   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:08.595690   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:11.667672   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:17.747590   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:20.819699   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:26.899677   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:29.971649   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:36.051731   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:39.123728   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:45.203625   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:48.275712   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:54.355623   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:57.427671   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:16:03.507649   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:16:06.579624   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:16:09.584575   75137 start.go:364] duration metric: took 4m27.4731498s to acquireMachinesLock for "embed-certs-566991"
	I1204 21:16:09.584639   75137 start.go:96] Skipping create...Using existing machine configuration
	I1204 21:16:09.584651   75137 fix.go:54] fixHost starting: 
	I1204 21:16:09.584970   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:09.585018   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:09.600429   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33355
	I1204 21:16:09.600893   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:09.601299   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:09.601322   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:09.601748   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:09.601944   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:09.602098   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetState
	I1204 21:16:09.603776   75137 fix.go:112] recreateIfNeeded on embed-certs-566991: state=Stopped err=<nil>
	I1204 21:16:09.603821   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	W1204 21:16:09.603991   75137 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 21:16:09.605822   75137 out.go:177] * Restarting existing kvm2 VM for "embed-certs-566991" ...
	I1204 21:16:09.606942   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Start
	I1204 21:16:09.607117   75137 main.go:141] libmachine: (embed-certs-566991) Ensuring networks are active...
	I1204 21:16:09.607926   75137 main.go:141] libmachine: (embed-certs-566991) Ensuring network default is active
	I1204 21:16:09.608276   75137 main.go:141] libmachine: (embed-certs-566991) Ensuring network mk-embed-certs-566991 is active
	I1204 21:16:09.608593   75137 main.go:141] libmachine: (embed-certs-566991) Getting domain xml...
	I1204 21:16:09.609171   75137 main.go:141] libmachine: (embed-certs-566991) Creating domain...
	I1204 21:16:10.794377   75137 main.go:141] libmachine: (embed-certs-566991) Waiting to get IP...
	I1204 21:16:10.795237   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:10.795646   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:10.795708   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:10.795615   76397 retry.go:31] will retry after 263.432891ms: waiting for machine to come up
	I1204 21:16:11.061505   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:11.062003   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:11.062025   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:11.061954   76397 retry.go:31] will retry after 341.684416ms: waiting for machine to come up
	I1204 21:16:11.405560   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:11.405994   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:11.406017   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:11.405951   76397 retry.go:31] will retry after 341.63707ms: waiting for machine to come up
	I1204 21:16:11.749439   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:11.749826   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:11.749850   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:11.749778   76397 retry.go:31] will retry after 490.222458ms: waiting for machine to come up
	I1204 21:16:09.581932   75012 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 21:16:09.581966   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetMachineName
	I1204 21:16:09.582325   75012 buildroot.go:166] provisioning hostname "no-preload-534766"
	I1204 21:16:09.582349   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetMachineName
	I1204 21:16:09.582554   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:16:09.584435   75012 machine.go:96] duration metric: took 4m37.423343939s to provisionDockerMachine
	I1204 21:16:09.584470   75012 fix.go:56] duration metric: took 4m37.445106567s for fixHost
	I1204 21:16:09.584480   75012 start.go:83] releasing machines lock for "no-preload-534766", held for 4m37.445131562s
	W1204 21:16:09.584500   75012 start.go:714] error starting host: provision: host is not running
	W1204 21:16:09.584581   75012 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1204 21:16:09.584594   75012 start.go:729] Will try again in 5 seconds ...
	I1204 21:16:12.241487   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:12.241955   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:12.241989   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:12.241914   76397 retry.go:31] will retry after 627.236105ms: waiting for machine to come up
	I1204 21:16:12.870753   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:12.871242   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:12.871274   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:12.871189   76397 retry.go:31] will retry after 948.655869ms: waiting for machine to come up
	I1204 21:16:13.821128   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:13.821501   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:13.821531   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:13.821464   76397 retry.go:31] will retry after 864.328477ms: waiting for machine to come up
	I1204 21:16:14.686831   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:14.687290   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:14.687327   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:14.687226   76397 retry.go:31] will retry after 1.040036387s: waiting for machine to come up
	I1204 21:16:15.729503   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:15.729908   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:15.729938   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:15.729856   76397 retry.go:31] will retry after 1.509456429s: waiting for machine to come up
	I1204 21:16:14.587018   75012 start.go:360] acquireMachinesLock for no-preload-534766: {Name:mkf124e8b45170ae95981b24944344de6899c5b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 21:16:17.240459   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:17.240912   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:17.240936   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:17.240859   76397 retry.go:31] will retry after 2.13583357s: waiting for machine to come up
	I1204 21:16:19.379267   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:19.379766   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:19.379792   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:19.379718   76397 retry.go:31] will retry after 2.09795045s: waiting for machine to come up
	I1204 21:16:21.478897   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:21.479356   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:21.479410   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:21.479302   76397 retry.go:31] will retry after 2.903986335s: waiting for machine to come up
	I1204 21:16:24.386386   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:24.386732   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:24.386760   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:24.386707   76397 retry.go:31] will retry after 2.772485684s: waiting for machine to come up
	I1204 21:16:28.395920   75464 start.go:364] duration metric: took 4m6.982305139s to acquireMachinesLock for "old-k8s-version-082859"
	I1204 21:16:28.395992   75464 start.go:96] Skipping create...Using existing machine configuration
	I1204 21:16:28.396003   75464 fix.go:54] fixHost starting: 
	I1204 21:16:28.396456   75464 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:28.396521   75464 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:28.413833   75464 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32779
	I1204 21:16:28.414263   75464 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:28.414753   75464 main.go:141] libmachine: Using API Version  1
	I1204 21:16:28.414777   75464 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:28.415165   75464 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:28.415427   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:28.415603   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetState
	I1204 21:16:28.417090   75464 fix.go:112] recreateIfNeeded on old-k8s-version-082859: state=Stopped err=<nil>
	I1204 21:16:28.417125   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	W1204 21:16:28.417326   75464 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 21:16:28.419402   75464 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-082859" ...
	I1204 21:16:27.162685   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.163095   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has current primary IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.163114   75137 main.go:141] libmachine: (embed-certs-566991) Found IP for machine: 192.168.39.82
	I1204 21:16:27.163126   75137 main.go:141] libmachine: (embed-certs-566991) Reserving static IP address...
	I1204 21:16:27.163613   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "embed-certs-566991", mac: "52:54:00:98:21:6f", ip: "192.168.39.82"} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.163640   75137 main.go:141] libmachine: (embed-certs-566991) Reserved static IP address: 192.168.39.82
	I1204 21:16:27.163652   75137 main.go:141] libmachine: (embed-certs-566991) DBG | skip adding static IP to network mk-embed-certs-566991 - found existing host DHCP lease matching {name: "embed-certs-566991", mac: "52:54:00:98:21:6f", ip: "192.168.39.82"}
	I1204 21:16:27.163663   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Getting to WaitForSSH function...
	I1204 21:16:27.163670   75137 main.go:141] libmachine: (embed-certs-566991) Waiting for SSH to be available...
	I1204 21:16:27.165700   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.166004   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.166040   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.166149   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Using SSH client type: external
	I1204 21:16:27.166173   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa (-rw-------)
	I1204 21:16:27.166209   75137 main.go:141] libmachine: (embed-certs-566991) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.82 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 21:16:27.166223   75137 main.go:141] libmachine: (embed-certs-566991) DBG | About to run SSH command:
	I1204 21:16:27.166232   75137 main.go:141] libmachine: (embed-certs-566991) DBG | exit 0
	I1204 21:16:27.287234   75137 main.go:141] libmachine: (embed-certs-566991) DBG | SSH cmd err, output: <nil>: 
	I1204 21:16:27.287599   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetConfigRaw
	I1204 21:16:27.288265   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetIP
	I1204 21:16:27.290959   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.291282   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.291308   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.291606   75137 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/config.json ...
	I1204 21:16:27.291794   75137 machine.go:93] provisionDockerMachine start ...
	I1204 21:16:27.291812   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:27.292046   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:27.294179   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.294494   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.294520   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.294637   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:27.294811   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.294971   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.295101   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:27.295267   75137 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:27.295461   75137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1204 21:16:27.295472   75137 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 21:16:27.395404   75137 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 21:16:27.395434   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetMachineName
	I1204 21:16:27.395738   75137 buildroot.go:166] provisioning hostname "embed-certs-566991"
	I1204 21:16:27.395764   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetMachineName
	I1204 21:16:27.395940   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:27.398637   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.398982   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.399008   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.399159   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:27.399332   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.399565   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.399702   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:27.399913   75137 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:27.400087   75137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1204 21:16:27.400099   75137 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-566991 && echo "embed-certs-566991" | sudo tee /etc/hostname
	I1204 21:16:27.513921   75137 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-566991
	
	I1204 21:16:27.513960   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:27.516595   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.516932   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.516955   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.517112   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:27.517313   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.517440   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.517554   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:27.517671   75137 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:27.517883   75137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1204 21:16:27.517900   75137 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-566991' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-566991/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-566991' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 21:16:27.627795   75137 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 21:16:27.627832   75137 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 21:16:27.627852   75137 buildroot.go:174] setting up certificates
	I1204 21:16:27.627861   75137 provision.go:84] configureAuth start
	I1204 21:16:27.627870   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetMachineName
	I1204 21:16:27.628196   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetIP
	I1204 21:16:27.630873   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.631211   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.631236   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.631447   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:27.633608   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.633935   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.633954   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.634104   75137 provision.go:143] copyHostCerts
	I1204 21:16:27.634160   75137 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 21:16:27.634171   75137 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 21:16:27.634238   75137 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 21:16:27.634328   75137 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 21:16:27.634337   75137 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 21:16:27.634359   75137 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 21:16:27.634416   75137 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 21:16:27.634427   75137 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 21:16:27.634457   75137 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 21:16:27.634525   75137 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.embed-certs-566991 san=[127.0.0.1 192.168.39.82 embed-certs-566991 localhost minikube]
	I1204 21:16:27.824445   75137 provision.go:177] copyRemoteCerts
	I1204 21:16:27.824535   75137 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 21:16:27.824576   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:27.827387   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.827703   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.827738   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.827937   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:27.828104   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.828282   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:27.828386   75137 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:16:27.908710   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 21:16:27.930611   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1204 21:16:27.951287   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 21:16:27.971650   75137 provision.go:87] duration metric: took 343.766934ms to configureAuth
	I1204 21:16:27.971684   75137 buildroot.go:189] setting minikube options for container-runtime
	I1204 21:16:27.971861   75137 config.go:182] Loaded profile config "embed-certs-566991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:16:27.971984   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:27.974579   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.974924   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.974964   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.975127   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:27.975316   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.975486   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.975617   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:27.975771   75137 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:27.975962   75137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1204 21:16:27.975985   75137 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 21:16:28.177596   75137 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 21:16:28.177627   75137 machine.go:96] duration metric: took 885.820166ms to provisionDockerMachine
	I1204 21:16:28.177643   75137 start.go:293] postStartSetup for "embed-certs-566991" (driver="kvm2")
	I1204 21:16:28.177657   75137 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 21:16:28.177681   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:28.177998   75137 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 21:16:28.178026   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:28.180461   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.180777   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:28.180809   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.180936   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:28.181122   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:28.181292   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:28.181430   75137 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:16:28.260618   75137 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 21:16:28.264349   75137 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 21:16:28.264371   75137 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 21:16:28.264448   75137 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 21:16:28.264543   75137 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 21:16:28.264657   75137 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 21:16:28.272916   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:16:28.294517   75137 start.go:296] duration metric: took 116.858398ms for postStartSetup
	I1204 21:16:28.294564   75137 fix.go:56] duration metric: took 18.709913535s for fixHost
	I1204 21:16:28.294589   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:28.297320   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.297628   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:28.297661   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.297869   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:28.298067   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:28.298219   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:28.298346   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:28.298544   75137 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:28.298705   75137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1204 21:16:28.298714   75137 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 21:16:28.395722   75137 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733346988.368807705
	
	I1204 21:16:28.395745   75137 fix.go:216] guest clock: 1733346988.368807705
	I1204 21:16:28.395755   75137 fix.go:229] Guest: 2024-12-04 21:16:28.368807705 +0000 UTC Remote: 2024-12-04 21:16:28.294570064 +0000 UTC m=+286.315482748 (delta=74.237641ms)
	I1204 21:16:28.395781   75137 fix.go:200] guest clock delta is within tolerance: 74.237641ms
	I1204 21:16:28.395788   75137 start.go:83] releasing machines lock for "embed-certs-566991", held for 18.811169167s
	I1204 21:16:28.395828   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:28.396146   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetIP
	I1204 21:16:28.398895   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.399273   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:28.399315   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.399472   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:28.399971   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:28.400138   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:28.400232   75137 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 21:16:28.400282   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:28.400303   75137 ssh_runner.go:195] Run: cat /version.json
	I1204 21:16:28.400325   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:28.402965   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.402990   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.403405   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:28.403434   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.403460   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:28.403475   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.403571   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:28.403643   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:28.403782   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:28.403872   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:28.403938   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:28.404022   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:28.404173   75137 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:16:28.404187   75137 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:16:28.498689   75137 ssh_runner.go:195] Run: systemctl --version
	I1204 21:16:28.503855   75137 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 21:16:28.639322   75137 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 21:16:28.645881   75137 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 21:16:28.645979   75137 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 21:16:28.662196   75137 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 21:16:28.662224   75137 start.go:495] detecting cgroup driver to use...
	I1204 21:16:28.662299   75137 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 21:16:28.679458   75137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 21:16:28.693004   75137 docker.go:217] disabling cri-docker service (if available) ...
	I1204 21:16:28.693078   75137 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 21:16:28.706303   75137 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 21:16:28.719763   75137 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 21:16:28.831131   75137 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 21:16:28.980878   75137 docker.go:233] disabling docker service ...
	I1204 21:16:28.980952   75137 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 21:16:28.995057   75137 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 21:16:29.007885   75137 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 21:16:29.140636   75137 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 21:16:29.281876   75137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 21:16:29.297602   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 21:16:29.314375   75137 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 21:16:29.314444   75137 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:29.324326   75137 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 21:16:29.324381   75137 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:29.333895   75137 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:29.343269   75137 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:29.352608   75137 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 21:16:29.363227   75137 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:29.372736   75137 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:29.389585   75137 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:29.399137   75137 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 21:16:29.407800   75137 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 21:16:29.407859   75137 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 21:16:29.421492   75137 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 21:16:29.431191   75137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:16:29.531043   75137 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 21:16:29.634995   75137 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 21:16:29.635092   75137 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 21:16:29.640185   75137 start.go:563] Will wait 60s for crictl version
	I1204 21:16:29.640249   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:16:29.644117   75137 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 21:16:29.683424   75137 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 21:16:29.683505   75137 ssh_runner.go:195] Run: crio --version
	I1204 21:16:29.709015   75137 ssh_runner.go:195] Run: crio --version
	I1204 21:16:29.737931   75137 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 21:16:28.420626   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .Start
	I1204 21:16:28.420792   75464 main.go:141] libmachine: (old-k8s-version-082859) Ensuring networks are active...
	I1204 21:16:28.421532   75464 main.go:141] libmachine: (old-k8s-version-082859) Ensuring network default is active
	I1204 21:16:28.421902   75464 main.go:141] libmachine: (old-k8s-version-082859) Ensuring network mk-old-k8s-version-082859 is active
	I1204 21:16:28.422289   75464 main.go:141] libmachine: (old-k8s-version-082859) Getting domain xml...
	I1204 21:16:28.422943   75464 main.go:141] libmachine: (old-k8s-version-082859) Creating domain...
	I1204 21:16:29.678419   75464 main.go:141] libmachine: (old-k8s-version-082859) Waiting to get IP...
	I1204 21:16:29.679445   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:29.679839   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:29.679884   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:29.679807   76539 retry.go:31] will retry after 289.179197ms: waiting for machine to come up
	I1204 21:16:29.971185   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:29.971736   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:29.971767   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:29.971681   76539 retry.go:31] will retry after 303.202104ms: waiting for machine to come up
	I1204 21:16:30.277151   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:30.277652   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:30.277681   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:30.277613   76539 retry.go:31] will retry after 410.628355ms: waiting for machine to come up
	I1204 21:16:30.690254   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:30.690792   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:30.690822   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:30.690750   76539 retry.go:31] will retry after 505.05844ms: waiting for machine to come up
	I1204 21:16:31.197454   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:31.197914   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:31.197943   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:31.197868   76539 retry.go:31] will retry after 592.512014ms: waiting for machine to come up
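The repeated "retry.go:31] will retry after …: waiting for machine to come up" lines above are a retry loop with a growing, randomized delay while libmachine waits for the KVM guest to pick up a DHCP lease. The following is only a minimal, self-contained sketch of that pattern; the helper names, the backoff factor, and the stubbed lookupIP are assumptions for illustration and are not minikube's actual retry package.

// retrysketch: poll for the machine's IP with a randomized, growing delay
// until it appears or a deadline passes. All identifiers here are hypothetical.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("unable to find current IP address of domain")

// lookupIP stands in for querying the libvirt DHCP leases; it is a stub
// that always fails, so the demo simply shows the retry cadence.
func lookupIP() (string, error) {
	return "", errNoIP
}

// waitForIP retries lookupIP with a delay that grows each attempt,
// mirroring the increasing "will retry after Xms" intervals in the log.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	base := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		ip, err := lookupIP()
		if err == nil {
			return ip, nil
		}
		// Randomize the delay a little and grow it for the next attempt.
		delay := base + time.Duration(rand.Int63n(int64(base)))
		base = base * 3 / 2
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
	}
	return "", fmt.Errorf("timed out waiting for machine IP: %w", errNoIP)
}

func main() {
	if ip, err := waitForIP(2 * time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("machine IP:", ip)
	}
}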
	I1204 21:16:29.739276   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetIP
	I1204 21:16:29.742209   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:29.742581   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:29.742611   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:29.742817   75137 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1204 21:16:29.746557   75137 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:16:29.757975   75137 kubeadm.go:883] updating cluster {Name:embed-certs-566991 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:embed-certs-566991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.82 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 21:16:29.758110   75137 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 21:16:29.758153   75137 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:16:29.790957   75137 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1204 21:16:29.791029   75137 ssh_runner.go:195] Run: which lz4
	I1204 21:16:29.794873   75137 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1204 21:16:29.798613   75137 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1204 21:16:29.798642   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1204 21:16:31.060492   75137 crio.go:462] duration metric: took 1.265651412s to copy over tarball
	I1204 21:16:31.060599   75137 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1204 21:16:31.791677   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:31.792193   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:31.792218   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:31.792126   76539 retry.go:31] will retry after 898.531247ms: waiting for machine to come up
	I1204 21:16:32.692886   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:32.693288   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:32.693309   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:32.693246   76539 retry.go:31] will retry after 832.069841ms: waiting for machine to come up
	I1204 21:16:33.526732   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:33.527291   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:33.527324   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:33.527254   76539 retry.go:31] will retry after 962.847408ms: waiting for machine to come up
	I1204 21:16:34.491553   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:34.492032   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:34.492062   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:34.491983   76539 retry.go:31] will retry after 1.207785601s: waiting for machine to come up
	I1204 21:16:35.701559   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:35.702070   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:35.702096   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:35.702031   76539 retry.go:31] will retry after 1.685825115s: waiting for machine to come up
	I1204 21:16:33.200389   75137 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.139761453s)
	I1204 21:16:33.200414   75137 crio.go:469] duration metric: took 2.139886465s to extract the tarball
	I1204 21:16:33.200421   75137 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1204 21:16:33.235706   75137 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:16:33.275780   75137 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 21:16:33.275803   75137 cache_images.go:84] Images are preloaded, skipping loading
	I1204 21:16:33.275811   75137 kubeadm.go:934] updating node { 192.168.39.82 8443 v1.31.2 crio true true} ...
	I1204 21:16:33.275916   75137 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-566991 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.82
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-566991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 21:16:33.276001   75137 ssh_runner.go:195] Run: crio config
	I1204 21:16:33.330445   75137 cni.go:84] Creating CNI manager for ""
	I1204 21:16:33.330470   75137 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:16:33.330479   75137 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 21:16:33.330502   75137 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.82 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-566991 NodeName:embed-certs-566991 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.82"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.82 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 21:16:33.330663   75137 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.82
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-566991"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.82"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.82"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1204 21:16:33.330730   75137 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 21:16:33.340505   75137 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 21:16:33.340586   75137 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 21:16:33.349589   75137 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1204 21:16:33.365156   75137 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 21:16:33.380757   75137 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1204 21:16:33.396851   75137 ssh_runner.go:195] Run: grep 192.168.39.82	control-plane.minikube.internal$ /etc/hosts
	I1204 21:16:33.400473   75137 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.82	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:16:33.411670   75137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:16:33.543788   75137 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:16:33.564105   75137 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991 for IP: 192.168.39.82
	I1204 21:16:33.564138   75137 certs.go:194] generating shared ca certs ...
	I1204 21:16:33.564158   75137 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:16:33.564343   75137 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 21:16:33.564425   75137 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 21:16:33.564443   75137 certs.go:256] generating profile certs ...
	I1204 21:16:33.564570   75137 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/client.key
	I1204 21:16:33.564668   75137 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/apiserver.key.ba71006c
	I1204 21:16:33.564724   75137 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/proxy-client.key
	I1204 21:16:33.564892   75137 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 21:16:33.564945   75137 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 21:16:33.564972   75137 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 21:16:33.565019   75137 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 21:16:33.565052   75137 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 21:16:33.565087   75137 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 21:16:33.565145   75137 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:16:33.566045   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 21:16:33.608433   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 21:16:33.635211   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 21:16:33.672472   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 21:16:33.701021   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1204 21:16:33.731665   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 21:16:33.756414   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 21:16:33.778799   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 21:16:33.801308   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 21:16:33.822986   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 21:16:33.844820   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 21:16:33.866558   75137 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 21:16:33.881830   75137 ssh_runner.go:195] Run: openssl version
	I1204 21:16:33.887334   75137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 21:16:33.897261   75137 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:16:33.901411   75137 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:16:33.901479   75137 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:16:33.906997   75137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 21:16:33.916799   75137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 21:16:33.926687   75137 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 21:16:33.930807   75137 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 21:16:33.930859   75137 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 21:16:33.943622   75137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 21:16:33.958682   75137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 21:16:33.972391   75137 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 21:16:33.977777   75137 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 21:16:33.977822   75137 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 21:16:33.984628   75137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 21:16:33.994531   75137 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 21:16:33.998695   75137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 21:16:34.004299   75137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 21:16:34.009688   75137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 21:16:34.015197   75137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 21:16:34.020625   75137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 21:16:34.025987   75137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1204 21:16:34.031435   75137 kubeadm.go:392] StartCluster: {Name:embed-certs-566991 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.2 ClusterName:embed-certs-566991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.82 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:16:34.031517   75137 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 21:16:34.031567   75137 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:16:34.067450   75137 cri.go:89] found id: ""
	I1204 21:16:34.067550   75137 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 21:16:34.077454   75137 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1204 21:16:34.077486   75137 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1204 21:16:34.077536   75137 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1204 21:16:34.086795   75137 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1204 21:16:34.087776   75137 kubeconfig.go:125] found "embed-certs-566991" server: "https://192.168.39.82:8443"
	I1204 21:16:34.089769   75137 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1204 21:16:34.098751   75137 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.82
	I1204 21:16:34.098784   75137 kubeadm.go:1160] stopping kube-system containers ...
	I1204 21:16:34.098798   75137 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1204 21:16:34.098853   75137 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:16:34.138445   75137 cri.go:89] found id: ""
	I1204 21:16:34.138523   75137 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1204 21:16:34.155890   75137 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:16:34.165568   75137 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:16:34.165596   75137 kubeadm.go:157] found existing configuration files:
	
	I1204 21:16:34.165647   75137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:16:34.174688   75137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:16:34.174758   75137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:16:34.183835   75137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:16:34.192637   75137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:16:34.192690   75137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:16:34.201663   75137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:16:34.210254   75137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:16:34.210297   75137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:16:34.219235   75137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:16:34.227890   75137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:16:34.227972   75137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:16:34.236954   75137 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:16:34.246061   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:16:34.352189   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:16:35.133652   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:16:35.320296   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:16:35.384361   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:16:35.458221   75137 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:16:35.458352   75137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:16:35.959480   75137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:16:36.459120   75137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:16:36.959170   75137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:16:37.458423   75137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:16:37.488815   75137 api_server.go:72] duration metric: took 2.030596307s to wait for apiserver process to appear ...
	I1204 21:16:37.488850   75137 api_server.go:88] waiting for apiserver healthz status ...
	I1204 21:16:37.488875   75137 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I1204 21:16:37.489349   75137 api_server.go:269] stopped: https://192.168.39.82:8443/healthz: Get "https://192.168.39.82:8443/healthz": dial tcp 192.168.39.82:8443: connect: connection refused
	I1204 21:16:37.990012   75137 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I1204 21:16:39.696011   75137 api_server.go:279] https://192.168.39.82:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1204 21:16:39.696060   75137 api_server.go:103] status: https://192.168.39.82:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1204 21:16:39.696077   75137 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I1204 21:16:39.705288   75137 api_server.go:279] https://192.168.39.82:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1204 21:16:39.705322   75137 api_server.go:103] status: https://192.168.39.82:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1204 21:16:39.989707   75137 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I1204 21:16:39.993934   75137 api_server.go:279] https://192.168.39.82:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:16:39.993959   75137 api_server.go:103] status: https://192.168.39.82:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:16:40.489545   75137 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I1204 21:16:40.494002   75137 api_server.go:279] https://192.168.39.82:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:16:40.494033   75137 api_server.go:103] status: https://192.168.39.82:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:16:40.989641   75137 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I1204 21:16:40.998171   75137 api_server.go:279] https://192.168.39.82:8443/healthz returned 200:
	ok
	I1204 21:16:41.006208   75137 api_server.go:141] control plane version: v1.31.2
	I1204 21:16:41.006238   75137 api_server.go:131] duration metric: took 3.517379108s to wait for apiserver health ...
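The healthz checks above poll https://192.168.39.82:8443/healthz, tolerating connection-refused, 403 (before RBAC bootstraps), and 500 (while post-start hooks finish) until the endpoint finally answers 200 "ok". Below is a minimal sketch of that poll-until-healthy loop; the insecure TLS client and the fixed 500ms interval are assumptions for illustration, not minikube's real client, which uses the cluster CA and its own timeouts.

// healthzsketch: GET /healthz repeatedly until it returns 200 or a deadline expires.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: skip certificate verification for the sketch only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			// e.g. "connect: connection refused" while the apiserver restarts
			fmt.Println("stopped:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // "ok"
				return nil
			}
			// 403 or 500 bodies like the ones captured in the log above
			fmt.Printf("healthz returned %d\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.82:8443/healthz", 3*time.Minute); err != nil {
		fmt.Println(err)
	}
}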
	I1204 21:16:41.006250   75137 cni.go:84] Creating CNI manager for ""
	I1204 21:16:41.006259   75137 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:16:41.008031   75137 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 21:16:37.390104   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:37.390474   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:37.390499   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:37.390433   76539 retry.go:31] will retry after 1.755395869s: waiting for machine to come up
	I1204 21:16:39.148189   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:39.148723   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:39.148754   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:39.148694   76539 retry.go:31] will retry after 2.645343215s: waiting for machine to come up
	I1204 21:16:41.009338   75137 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 21:16:41.026475   75137 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1204 21:16:41.051888   75137 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 21:16:41.064813   75137 system_pods.go:59] 8 kube-system pods found
	I1204 21:16:41.064859   75137 system_pods.go:61] "coredns-7c65d6cfc9-ct5xn" [be113b96-b21f-4fd5-8cd9-11b149a0a838] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1204 21:16:41.064870   75137 system_pods.go:61] "etcd-embed-certs-566991" [23603883-2c42-48ff-95f5-d58f04bab630] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1204 21:16:41.064880   75137 system_pods.go:61] "kube-apiserver-embed-certs-566991" [880279d0-9c57-44b1-b223-cea07fc8552e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1204 21:16:41.064887   75137 system_pods.go:61] "kube-controller-manager-embed-certs-566991" [1512be05-cbf1-48ca-a0a5-db1e320040e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1204 21:16:41.064893   75137 system_pods.go:61] "kube-proxy-4fv72" [22b84591-6767-4414-9869-9d89206a03f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1204 21:16:41.064898   75137 system_pods.go:61] "kube-scheduler-embed-certs-566991" [1eca2a77-0f2a-4d94-992e-22acf8f54649] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1204 21:16:41.064910   75137 system_pods.go:61] "metrics-server-6867b74b74-9vlcd" [1acb08f3-e403-458d-b3e2-e32c07da6afb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:16:41.064922   75137 system_pods.go:61] "storage-provisioner" [f8acdb07-16e7-457f-81b8-85416b849890] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1204 21:16:41.064930   75137 system_pods.go:74] duration metric: took 13.019489ms to wait for pod list to return data ...
	I1204 21:16:41.064944   75137 node_conditions.go:102] verifying NodePressure condition ...
	I1204 21:16:41.068574   75137 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 21:16:41.068607   75137 node_conditions.go:123] node cpu capacity is 2
	I1204 21:16:41.068623   75137 node_conditions.go:105] duration metric: took 3.673752ms to run NodePressure ...
	I1204 21:16:41.068644   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:16:41.356054   75137 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1204 21:16:41.359997   75137 kubeadm.go:739] kubelet initialised
	I1204 21:16:41.360018   75137 kubeadm.go:740] duration metric: took 3.942716ms waiting for restarted kubelet to initialise ...
	I1204 21:16:41.360026   75137 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:16:41.365945   75137 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:41.370858   75137 pod_ready.go:98] node "embed-certs-566991" hosting pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.370886   75137 pod_ready.go:82] duration metric: took 4.912525ms for pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace to be "Ready" ...
	E1204 21:16:41.370904   75137 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-566991" hosting pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.370913   75137 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:41.376666   75137 pod_ready.go:98] node "embed-certs-566991" hosting pod "etcd-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.376689   75137 pod_ready.go:82] duration metric: took 5.763328ms for pod "etcd-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	E1204 21:16:41.376698   75137 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-566991" hosting pod "etcd-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.376705   75137 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:41.381261   75137 pod_ready.go:98] node "embed-certs-566991" hosting pod "kube-apiserver-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.381285   75137 pod_ready.go:82] duration metric: took 4.57138ms for pod "kube-apiserver-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	E1204 21:16:41.381296   75137 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-566991" hosting pod "kube-apiserver-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.381305   75137 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:41.455155   75137 pod_ready.go:98] node "embed-certs-566991" hosting pod "kube-controller-manager-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.455195   75137 pod_ready.go:82] duration metric: took 73.873767ms for pod "kube-controller-manager-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	E1204 21:16:41.455208   75137 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-566991" hosting pod "kube-controller-manager-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.455217   75137 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4fv72" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:41.854723   75137 pod_ready.go:98] node "embed-certs-566991" hosting pod "kube-proxy-4fv72" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.854759   75137 pod_ready.go:82] duration metric: took 399.531662ms for pod "kube-proxy-4fv72" in "kube-system" namespace to be "Ready" ...
	E1204 21:16:41.854773   75137 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-566991" hosting pod "kube-proxy-4fv72" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.854782   75137 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:42.255217   75137 pod_ready.go:98] node "embed-certs-566991" hosting pod "kube-scheduler-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:42.255242   75137 pod_ready.go:82] duration metric: took 400.451937ms for pod "kube-scheduler-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	E1204 21:16:42.255254   75137 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-566991" hosting pod "kube-scheduler-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:42.255263   75137 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:42.655193   75137 pod_ready.go:98] node "embed-certs-566991" hosting pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:42.655222   75137 pod_ready.go:82] duration metric: took 399.948182ms for pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace to be "Ready" ...
	E1204 21:16:42.655234   75137 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-566991" hosting pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:42.655244   75137 pod_ready.go:39] duration metric: took 1.295209634s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:16:42.655263   75137 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 21:16:42.666489   75137 ops.go:34] apiserver oom_adj: -16
	I1204 21:16:42.666504   75137 kubeadm.go:597] duration metric: took 8.589012522s to restartPrimaryControlPlane
	I1204 21:16:42.666512   75137 kubeadm.go:394] duration metric: took 8.635083145s to StartCluster
	I1204 21:16:42.666526   75137 settings.go:142] acquiring lock: {Name:mk51df5708ef0b8fe125ead566b8d3e857234e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:16:42.666587   75137 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:16:42.668175   75137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/kubeconfig: {Name:mk338cb7deb77a607d0c199d94a556bdfd19bef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:16:42.668388   75137 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.82 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 21:16:42.668451   75137 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 21:16:42.668548   75137 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-566991"
	I1204 21:16:42.668569   75137 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-566991"
	W1204 21:16:42.668576   75137 addons.go:243] addon storage-provisioner should already be in state true
	I1204 21:16:42.668605   75137 host.go:66] Checking if "embed-certs-566991" exists ...
	I1204 21:16:42.668611   75137 addons.go:69] Setting default-storageclass=true in profile "embed-certs-566991"
	I1204 21:16:42.668628   75137 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-566991"
	I1204 21:16:42.668661   75137 config.go:182] Loaded profile config "embed-certs-566991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:16:42.668675   75137 addons.go:69] Setting metrics-server=true in profile "embed-certs-566991"
	I1204 21:16:42.668719   75137 addons.go:234] Setting addon metrics-server=true in "embed-certs-566991"
	W1204 21:16:42.668738   75137 addons.go:243] addon metrics-server should already be in state true
	I1204 21:16:42.668796   75137 host.go:66] Checking if "embed-certs-566991" exists ...
	I1204 21:16:42.669037   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:42.669094   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:42.669037   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:42.669158   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:42.669169   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:42.669210   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:42.671592   75137 out.go:177] * Verifying Kubernetes components...
	I1204 21:16:42.673134   75137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:16:42.684920   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43467
	I1204 21:16:42.684939   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35079
	I1204 21:16:42.685084   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46109
	I1204 21:16:42.685298   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:42.685386   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:42.685791   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:42.685810   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:42.685905   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:42.685926   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:42.686119   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:42.686297   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:42.686401   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetState
	I1204 21:16:42.686833   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:42.686880   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:42.687004   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:42.687527   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:42.687545   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:42.687890   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:42.688475   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:42.688522   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:42.689348   75137 addons.go:234] Setting addon default-storageclass=true in "embed-certs-566991"
	W1204 21:16:42.689365   75137 addons.go:243] addon default-storageclass should already be in state true
	I1204 21:16:42.689385   75137 host.go:66] Checking if "embed-certs-566991" exists ...
	I1204 21:16:42.689647   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:42.689682   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:42.702175   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33089
	I1204 21:16:42.702672   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:42.703170   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:42.703188   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:42.703226   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38195
	I1204 21:16:42.703537   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:42.703674   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:42.703716   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetState
	I1204 21:16:42.704271   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:42.704295   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:42.704612   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:42.705178   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:42.705218   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:42.705552   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:42.707473   75137 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1204 21:16:42.707479   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33249
	I1204 21:16:42.707808   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:42.708177   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:42.708192   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:42.708551   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:42.708692   75137 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1204 21:16:42.708703   75137 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1204 21:16:42.708713   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetState
	I1204 21:16:42.708714   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:42.710474   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:42.711964   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:42.712040   75137 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:16:42.712386   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:42.712409   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:42.712558   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:42.712726   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:42.712867   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:42.713010   75137 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:16:42.713257   75137 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:16:42.713268   75137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 21:16:42.713279   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:42.715855   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:42.716296   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:42.716325   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:42.716472   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:42.716632   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:42.716744   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:42.716860   75137 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:16:42.727365   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40443
	I1204 21:16:42.727830   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:42.728302   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:42.728330   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:42.728651   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:42.728838   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetState
	I1204 21:16:42.730408   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:42.730603   75137 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 21:16:42.730617   75137 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 21:16:42.730630   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:42.733179   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:42.733523   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:42.733550   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:42.733695   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:42.733846   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:42.733991   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:42.734105   75137 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:16:42.871601   75137 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:16:42.889651   75137 node_ready.go:35] waiting up to 6m0s for node "embed-certs-566991" to be "Ready" ...
	I1204 21:16:43.016150   75137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:16:43.017983   75137 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1204 21:16:43.018006   75137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1204 21:16:43.048666   75137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 21:16:43.061060   75137 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1204 21:16:43.061089   75137 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1204 21:16:43.105294   75137 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 21:16:43.105320   75137 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1204 21:16:43.175330   75137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 21:16:44.324823   75137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.276121269s)
	I1204 21:16:44.324881   75137 main.go:141] libmachine: Making call to close driver server
	I1204 21:16:44.324889   75137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.308706273s)
	I1204 21:16:44.324893   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:16:44.324908   75137 main.go:141] libmachine: Making call to close driver server
	I1204 21:16:44.324922   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:16:44.325213   75137 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:16:44.325264   75137 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:16:44.325289   75137 main.go:141] libmachine: Making call to close driver server
	I1204 21:16:44.325272   75137 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:16:44.325297   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:16:44.325304   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:16:44.325302   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:16:44.325381   75137 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:16:44.325409   75137 main.go:141] libmachine: Making call to close driver server
	I1204 21:16:44.325417   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:16:44.325539   75137 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:16:44.325552   75137 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:16:44.325574   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:16:44.325751   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:16:44.325792   75137 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:16:44.325813   75137 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:16:44.331866   75137 main.go:141] libmachine: Making call to close driver server
	I1204 21:16:44.331881   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:16:44.332102   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:16:44.332139   75137 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:16:44.332149   75137 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:16:44.398251   75137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.222883924s)
	I1204 21:16:44.398300   75137 main.go:141] libmachine: Making call to close driver server
	I1204 21:16:44.398312   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:16:44.398563   75137 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:16:44.398583   75137 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:16:44.398590   75137 main.go:141] libmachine: Making call to close driver server
	I1204 21:16:44.398597   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:16:44.398606   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:16:44.398855   75137 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:16:44.398878   75137 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:16:44.398888   75137 addons.go:475] Verifying addon metrics-server=true in "embed-certs-566991"
	I1204 21:16:44.398889   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:16:44.400887   75137 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1204 21:16:41.796452   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:41.796909   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:41.796943   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:41.796881   76539 retry.go:31] will retry after 2.938505727s: waiting for machine to come up
	I1204 21:16:44.737247   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:44.737772   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:44.737796   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:44.737726   76539 retry.go:31] will retry after 5.554286056s: waiting for machine to come up
	I1204 21:16:44.402265   75137 addons.go:510] duration metric: took 1.733822331s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1204 21:16:44.894002   75137 node_ready.go:53] node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:50.293115   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.293594   75464 main.go:141] libmachine: (old-k8s-version-082859) Found IP for machine: 192.168.72.180
	I1204 21:16:50.293638   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has current primary IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.293651   75464 main.go:141] libmachine: (old-k8s-version-082859) Reserving static IP address...
	I1204 21:16:50.294066   75464 main.go:141] libmachine: (old-k8s-version-082859) Reserved static IP address: 192.168.72.180
	I1204 21:16:50.294102   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "old-k8s-version-082859", mac: "52:54:00:30:6e:ae", ip: "192.168.72.180"} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.294118   75464 main.go:141] libmachine: (old-k8s-version-082859) Waiting for SSH to be available...
	I1204 21:16:50.294148   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | skip adding static IP to network mk-old-k8s-version-082859 - found existing host DHCP lease matching {name: "old-k8s-version-082859", mac: "52:54:00:30:6e:ae", ip: "192.168.72.180"}
	I1204 21:16:50.294164   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | Getting to WaitForSSH function...
	I1204 21:16:50.296406   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.296738   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.296767   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.296893   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | Using SSH client type: external
	I1204 21:16:50.296917   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa (-rw-------)
	I1204 21:16:50.296949   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.180 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 21:16:50.296966   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | About to run SSH command:
	I1204 21:16:50.296978   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | exit 0
	I1204 21:16:50.419468   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | SSH cmd err, output: <nil>: 
	I1204 21:16:50.419834   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetConfigRaw
	I1204 21:16:50.420486   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetIP
	I1204 21:16:50.422797   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.423098   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.423123   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.423319   75464 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/config.json ...
	I1204 21:16:50.423555   75464 machine.go:93] provisionDockerMachine start ...
	I1204 21:16:50.423579   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:50.423793   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:50.426050   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.426372   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.426402   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.426520   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:50.426706   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.426886   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.427011   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:50.427208   75464 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:50.427439   75464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1204 21:16:50.427453   75464 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 21:16:50.527818   75464 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 21:16:50.527853   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetMachineName
	I1204 21:16:50.528150   75464 buildroot.go:166] provisioning hostname "old-k8s-version-082859"
	I1204 21:16:50.528188   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetMachineName
	I1204 21:16:50.528423   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:50.531470   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.531920   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.531949   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.532195   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:50.532400   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.532575   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.532733   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:50.532911   75464 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:50.533125   75464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1204 21:16:50.533138   75464 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-082859 && echo "old-k8s-version-082859" | sudo tee /etc/hostname
	I1204 21:16:50.653111   75464 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-082859
	
	I1204 21:16:50.653146   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:50.656340   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.656681   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.656715   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.656946   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:50.657161   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.657338   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.657493   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:50.657649   75464 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:50.657859   75464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1204 21:16:50.657879   75464 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-082859' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-082859/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-082859' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 21:16:50.772193   75464 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 21:16:50.772236   75464 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 21:16:50.772265   75464 buildroot.go:174] setting up certificates
	I1204 21:16:50.772282   75464 provision.go:84] configureAuth start
	I1204 21:16:50.772299   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetMachineName
	I1204 21:16:50.772611   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetIP
	I1204 21:16:50.775486   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.775889   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.775917   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.776053   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:50.778293   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.778611   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.778640   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.778859   75464 provision.go:143] copyHostCerts
	I1204 21:16:50.778920   75464 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 21:16:50.778934   75464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 21:16:50.778991   75464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 21:16:50.779093   75464 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 21:16:50.779106   75464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 21:16:50.779134   75464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 21:16:50.779279   75464 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 21:16:50.779291   75464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 21:16:50.779317   75464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 21:16:50.779411   75464 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-082859 san=[127.0.0.1 192.168.72.180 localhost minikube old-k8s-version-082859]
	I1204 21:16:50.991857   75464 provision.go:177] copyRemoteCerts
	I1204 21:16:50.991917   75464 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 21:16:50.991939   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:50.994612   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.994999   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.995028   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.995178   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:50.995427   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.995587   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:50.995731   75464 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa Username:docker}
	I1204 21:16:51.074162   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 21:16:51.097649   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1204 21:16:51.120589   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1204 21:16:51.143303   75464 provision.go:87] duration metric: took 371.008346ms to configureAuth
	I1204 21:16:51.143324   75464 buildroot.go:189] setting minikube options for container-runtime
	I1204 21:16:51.143500   75464 config.go:182] Loaded profile config "old-k8s-version-082859": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1204 21:16:51.143561   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:51.146357   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.146676   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:51.146715   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.146867   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:51.147061   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.147275   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.147480   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:51.147672   75464 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:51.147851   75464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1204 21:16:51.147872   75464 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 21:16:51.587574   75746 start.go:364] duration metric: took 3m48.834641003s to acquireMachinesLock for "default-k8s-diff-port-439360"
	I1204 21:16:51.587653   75746 start.go:96] Skipping create...Using existing machine configuration
	I1204 21:16:51.587665   75746 fix.go:54] fixHost starting: 
	I1204 21:16:51.588066   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:51.588117   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:51.604628   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41655
	I1204 21:16:51.605057   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:51.605553   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:16:51.605580   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:51.605940   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:51.606149   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:16:51.606327   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetState
	I1204 21:16:51.608008   75746 fix.go:112] recreateIfNeeded on default-k8s-diff-port-439360: state=Stopped err=<nil>
	I1204 21:16:51.608043   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	W1204 21:16:51.608211   75746 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 21:16:51.609867   75746 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-439360" ...
	I1204 21:16:47.393499   75137 node_ready.go:53] node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:49.893470   75137 node_ready.go:53] node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:50.393615   75137 node_ready.go:49] node "embed-certs-566991" has status "Ready":"True"
	I1204 21:16:50.393638   75137 node_ready.go:38] duration metric: took 7.503954553s for node "embed-certs-566991" to be "Ready" ...
	I1204 21:16:50.393648   75137 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:16:50.398881   75137 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:51.611005   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Start
	I1204 21:16:51.611185   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Ensuring networks are active...
	I1204 21:16:51.612110   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Ensuring network default is active
	I1204 21:16:51.612529   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Ensuring network mk-default-k8s-diff-port-439360 is active
	I1204 21:16:51.612978   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Getting domain xml...
	I1204 21:16:51.613795   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Creating domain...
	I1204 21:16:51.367959   75464 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 21:16:51.367992   75464 machine.go:96] duration metric: took 944.422035ms to provisionDockerMachine
	I1204 21:16:51.368004   75464 start.go:293] postStartSetup for "old-k8s-version-082859" (driver="kvm2")
	I1204 21:16:51.368014   75464 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 21:16:51.368030   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:51.368382   75464 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 21:16:51.368431   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:51.371253   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.371631   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:51.371667   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.371831   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:51.372033   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.372201   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:51.372338   75464 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa Username:docker}
	I1204 21:16:51.449712   75464 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 21:16:51.453668   75464 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 21:16:51.453694   75464 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 21:16:51.453771   75464 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 21:16:51.453867   75464 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 21:16:51.453995   75464 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 21:16:51.463766   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:16:51.486114   75464 start.go:296] duration metric: took 118.097017ms for postStartSetup
	I1204 21:16:51.486162   75464 fix.go:56] duration metric: took 23.090160362s for fixHost
	I1204 21:16:51.486190   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:51.488901   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.489286   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:51.489317   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.489450   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:51.489662   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.489835   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.489975   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:51.490137   75464 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:51.490373   75464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1204 21:16:51.490386   75464 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 21:16:51.587355   75464 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733347011.543416414
	
	I1204 21:16:51.587402   75464 fix.go:216] guest clock: 1733347011.543416414
	I1204 21:16:51.587413   75464 fix.go:229] Guest: 2024-12-04 21:16:51.543416414 +0000 UTC Remote: 2024-12-04 21:16:51.486170924 +0000 UTC m=+270.217910239 (delta=57.24549ms)
	I1204 21:16:51.587442   75464 fix.go:200] guest clock delta is within tolerance: 57.24549ms
	I1204 21:16:51.587450   75464 start.go:83] releasing machines lock for "old-k8s-version-082859", held for 23.191479372s
	I1204 21:16:51.587484   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:51.587753   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetIP
	I1204 21:16:51.590521   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.590901   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:51.590933   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.591076   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:51.591556   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:51.591757   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:51.591857   75464 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 21:16:51.591897   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:51.592007   75464 ssh_runner.go:195] Run: cat /version.json
	I1204 21:16:51.592024   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:51.594840   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.595093   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.595267   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:51.595303   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.595349   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:51.595425   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.595529   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:51.595614   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:51.595714   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.595851   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:51.595872   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.596038   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:51.596091   75464 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa Username:docker}
	I1204 21:16:51.596192   75464 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa Username:docker}
	I1204 21:16:51.695215   75464 ssh_runner.go:195] Run: systemctl --version
	I1204 21:16:51.700624   75464 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 21:16:51.849457   75464 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 21:16:51.856420   75464 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 21:16:51.856506   75464 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 21:16:51.876202   75464 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 21:16:51.876230   75464 start.go:495] detecting cgroup driver to use...
	I1204 21:16:51.876311   75464 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 21:16:51.894549   75464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 21:16:51.911154   75464 docker.go:217] disabling cri-docker service (if available) ...
	I1204 21:16:51.911218   75464 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 21:16:51.924220   75464 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 21:16:51.936675   75464 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 21:16:52.058517   75464 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 21:16:52.224124   75464 docker.go:233] disabling docker service ...
	I1204 21:16:52.224202   75464 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 21:16:52.239294   75464 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 21:16:52.253779   75464 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 21:16:52.384577   75464 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 21:16:52.515024   75464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 21:16:52.529456   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 21:16:52.551978   75464 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1204 21:16:52.552043   75464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:52.563083   75464 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 21:16:52.563165   75464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:52.573409   75464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:52.583614   75464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:52.594313   75464 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
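	Note: the sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroup manager, conmon cgroup) before removing the stale CNI directory. A minimal, hypothetical Go sketch of that sed-rewrite pattern, not minikube's actual helper:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// setCrioOption rewrites a single "key = ..." line in the given CRI-O drop-in,
	// mirroring the sed commands in the log. Illustrative only.
	func setCrioOption(confPath, key, value string) error {
		expr := fmt.Sprintf(`s|^.*%s = .*$|%s = "%s"|`, key, key, value)
		if out, err := exec.Command("sudo", "sed", "-i", expr, confPath).CombinedOutput(); err != nil {
			return fmt.Errorf("sed %s: %v: %s", key, err, out)
		}
		return nil
	}

	func main() {
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		_ = setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.2")
		_ = setCrioOption(conf, "cgroup_manager", "cgroupfs")
	}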
	I1204 21:16:52.604389   75464 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 21:16:52.613326   75464 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 21:16:52.613402   75464 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 21:16:52.627764   75464 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
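	Note: when the bridge-nf-call sysctl is missing (the status 255 above), the runner falls back to loading br_netfilter and enabling IPv4 forwarding. A rough, hypothetical sketch of that fallback (requires root):

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		// If the bridge netfilter sysctl is absent, the kernel module is not loaded yet.
		if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
			_ = exec.Command("sudo", "modprobe", "br_netfilter").Run()
		}
		// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
		_ = os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644)
	}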
	I1204 21:16:52.637330   75464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:16:52.755111   75464 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 21:16:52.844027   75464 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 21:16:52.844093   75464 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 21:16:52.848602   75464 start.go:563] Will wait 60s for crictl version
	I1204 21:16:52.848676   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:52.852127   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 21:16:52.892934   75464 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 21:16:52.893076   75464 ssh_runner.go:195] Run: crio --version
	I1204 21:16:52.925376   75464 ssh_runner.go:195] Run: crio --version
	I1204 21:16:52.954480   75464 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1204 21:16:52.955897   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetIP
	I1204 21:16:52.958964   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:52.959353   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:52.959404   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:52.959641   75464 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1204 21:16:52.963601   75464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:16:52.975417   75464 kubeadm.go:883] updating cluster {Name:old-k8s-version-082859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-082859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 21:16:52.975578   75464 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1204 21:16:52.975644   75464 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:16:53.022050   75464 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1204 21:16:53.022128   75464 ssh_runner.go:195] Run: which lz4
	I1204 21:16:53.025986   75464 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1204 21:16:53.029928   75464 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1204 21:16:53.029962   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1204 21:16:54.579699   75464 crio.go:462] duration metric: took 1.553735037s to copy over tarball
	I1204 21:16:54.579783   75464 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1204 21:16:52.406305   75137 pod_ready.go:103] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"False"
	I1204 21:16:54.905969   75137 pod_ready.go:103] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"False"
	I1204 21:16:56.907170   75137 pod_ready.go:103] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"False"
	I1204 21:16:52.907033   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting to get IP...
	I1204 21:16:52.908195   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:52.908629   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:52.908717   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:52.908619   76731 retry.go:31] will retry after 296.289488ms: waiting for machine to come up
	I1204 21:16:53.207388   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:53.207971   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:53.208003   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:53.207935   76731 retry.go:31] will retry after 336.470328ms: waiting for machine to come up
	I1204 21:16:53.546821   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:53.547399   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:53.547439   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:53.547320   76731 retry.go:31] will retry after 368.42782ms: waiting for machine to come up
	I1204 21:16:53.917796   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:53.918528   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:53.918556   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:53.918431   76731 retry.go:31] will retry after 436.479409ms: waiting for machine to come up
	I1204 21:16:54.357126   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:54.357698   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:54.357732   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:54.357643   76731 retry.go:31] will retry after 752.80332ms: waiting for machine to come up
	I1204 21:16:55.112409   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:55.112880   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:55.112907   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:55.112827   76731 retry.go:31] will retry after 649.088241ms: waiting for machine to come up
	I1204 21:16:55.763391   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:55.763912   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:55.763956   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:55.763859   76731 retry.go:31] will retry after 1.037502744s: waiting for machine to come up
	I1204 21:16:56.803681   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:56.804080   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:56.804114   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:56.804035   76731 retry.go:31] will retry after 1.021780396s: waiting for machine to come up
	I1204 21:16:57.410381   75464 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.830568445s)
	I1204 21:16:57.410444   75464 crio.go:469] duration metric: took 2.830692434s to extract the tarball
	I1204 21:16:57.410455   75464 ssh_runner.go:146] rm: /preloaded.tar.lz4
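	Note: the preload step above copies the lz4-compressed image tarball to /preloaded.tar.lz4, extracts it into /var with security xattrs preserved, then deletes the archive. An illustrative sketch of the extract-and-clean-up step (paths and tar flags taken from the log; the helper itself is hypothetical):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		const archive = "/preloaded.tar.lz4"
		// Same flags as the logged command: keep security xattrs, decompress with lz4.
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", archive)
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("extract preload: %v\n%s", err, out)
		}
		// Remove the archive once the images are unpacked, as the log does.
		if out, err := exec.Command("sudo", "rm", "-f", archive).CombinedOutput(); err != nil {
			log.Printf("cleanup: %v\n%s", err, out)
		}
	}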
	I1204 21:16:57.452008   75464 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:16:57.484771   75464 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1204 21:16:57.484800   75464 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1204 21:16:57.484880   75464 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:16:57.484917   75464 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:57.484929   75464 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:57.484945   75464 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:57.484995   75464 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1204 21:16:57.484922   75464 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:57.485007   75464 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:57.485039   75464 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1204 21:16:57.486618   75464 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1204 21:16:57.486824   75464 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:57.486847   75464 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:57.486892   75464 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:16:57.486905   75464 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:57.486828   75464 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:57.486944   75464 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:57.486829   75464 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1204 21:16:57.655649   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:57.656853   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:57.667236   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:57.689357   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:57.698439   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1204 21:16:57.726269   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1204 21:16:57.727235   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:57.747271   75464 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1204 21:16:57.747329   75464 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:57.747332   75464 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1204 21:16:57.747364   75464 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:57.747500   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.747402   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.757217   75464 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1204 21:16:57.757260   75464 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:57.757319   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.800711   75464 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1204 21:16:57.800752   75464 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:57.800803   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.814692   75464 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1204 21:16:57.814738   75464 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1204 21:16:57.814789   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.829660   75464 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1204 21:16:57.829698   75464 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:57.829706   75464 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1204 21:16:57.829738   75464 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1204 21:16:57.829752   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.829764   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:57.829773   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.829821   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:57.829877   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:57.829909   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:57.829955   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1204 21:16:57.929510   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:57.929559   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:57.929579   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:57.929618   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1204 21:16:57.940211   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:57.940309   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:57.940359   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1204 21:16:58.051710   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1204 21:16:58.067494   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:58.067504   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:58.067573   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:58.083777   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1204 21:16:58.083833   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:58.083891   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:58.165786   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1204 21:16:58.229739   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1204 21:16:58.229803   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1204 21:16:58.229904   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:58.229951   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1204 21:16:58.230001   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1204 21:16:58.230045   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1204 21:16:58.261333   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1204 21:16:58.271293   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1204 21:16:58.405498   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:16:58.549255   75464 cache_images.go:92] duration metric: took 1.064434163s to LoadCachedImages
	W1204 21:16:58.549354   75464 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I1204 21:16:58.549372   75464 kubeadm.go:934] updating node { 192.168.72.180 8443 v1.20.0 crio true true} ...
	I1204 21:16:58.549512   75464 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-082859 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-082859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 21:16:58.549591   75464 ssh_runner.go:195] Run: crio config
	I1204 21:16:58.610182   75464 cni.go:84] Creating CNI manager for ""
	I1204 21:16:58.610209   75464 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:16:58.610221   75464 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 21:16:58.610246   75464 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.180 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-082859 NodeName:old-k8s-version-082859 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1204 21:16:58.610432   75464 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.180
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-082859"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.180
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.180"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1204 21:16:58.610512   75464 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1204 21:16:58.620337   75464 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 21:16:58.620421   75464 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 21:16:58.629244   75464 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1204 21:16:58.654214   75464 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 21:16:58.671268   75464 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1204 21:16:58.688068   75464 ssh_runner.go:195] Run: grep 192.168.72.180	control-plane.minikube.internal$ /etc/hosts
	I1204 21:16:58.691513   75464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.180	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
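	Note: the /etc/hosts update above is idempotent: any stale control-plane.minikube.internal line is removed before the current IP is appended. A minimal sketch of the same idea in Go (hostname and IP taken from the log; needs root to write /etc/hosts; not minikube's actual code):

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const hostsPath = "/etc/hosts"
		const entry = "192.168.72.180\tcontrol-plane.minikube.internal"

		data, err := os.ReadFile(hostsPath)
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Drop any existing control-plane.minikube.internal mapping first.
			if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry)
		if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			panic(err)
		}
	}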
	I1204 21:16:58.703609   75464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:16:58.831984   75464 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:16:58.850324   75464 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859 for IP: 192.168.72.180
	I1204 21:16:58.850354   75464 certs.go:194] generating shared ca certs ...
	I1204 21:16:58.850382   75464 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:16:58.850592   75464 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 21:16:58.850658   75464 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 21:16:58.850677   75464 certs.go:256] generating profile certs ...
	I1204 21:16:58.850811   75464 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/client.key
	I1204 21:16:58.850892   75464 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/apiserver.key.8d7b2cb2
	I1204 21:16:58.850958   75464 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/proxy-client.key
	I1204 21:16:58.851169   75464 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 21:16:58.851232   75464 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 21:16:58.851249   75464 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 21:16:58.851294   75464 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 21:16:58.851343   75464 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 21:16:58.851420   75464 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 21:16:58.851508   75464 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:16:58.852607   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 21:16:58.880792   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 21:16:58.913556   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 21:16:58.943549   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 21:16:58.981463   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1204 21:16:59.012983   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 21:16:59.042980   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 21:16:59.077664   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 21:16:59.105764   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 21:16:59.129236   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 21:16:59.153845   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 21:16:59.177201   75464 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 21:16:59.193861   75464 ssh_runner.go:195] Run: openssl version
	I1204 21:16:59.199898   75464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 21:16:59.211323   75464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:16:59.215867   75464 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:16:59.215922   75464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:16:59.221792   75464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 21:16:59.232621   75464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 21:16:59.243171   75464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 21:16:59.247786   75464 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 21:16:59.247847   75464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 21:16:59.253293   75464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 21:16:59.264011   75464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 21:16:59.274696   75464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 21:16:59.279083   75464 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 21:16:59.279142   75464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 21:16:59.284885   75464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 21:16:59.295857   75464 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 21:16:59.300285   75464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 21:16:59.306222   75464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 21:16:59.312113   75464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 21:16:59.318289   75464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 21:16:59.323933   75464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 21:16:59.329593   75464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
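	Note: each `openssl x509 -checkend 86400` call above fails if the certificate expires within the next 24 hours. A self-contained sketch of the equivalent check in Go (the cert path is one of those probed above; the helper is illustrative, not minikube's code):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the certificate at path expires within d,
	// matching what `openssl x509 -checkend` tests.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
		if err != nil {
			panic(err)
		}
		fmt.Println("expires within 24h:", soon)
	}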
	I1204 21:16:59.336271   75464 kubeadm.go:392] StartCluster: {Name:old-k8s-version-082859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-082859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:16:59.336388   75464 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 21:16:59.336445   75464 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:16:59.377102   75464 cri.go:89] found id: ""
	I1204 21:16:59.377186   75464 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 21:16:59.387322   75464 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1204 21:16:59.387348   75464 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1204 21:16:59.387426   75464 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1204 21:16:59.397012   75464 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1204 21:16:59.398490   75464 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-082859" does not appear in /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:16:59.399594   75464 kubeconfig.go:62] /home/jenkins/minikube-integration/19985-10581/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-082859" cluster setting kubeconfig missing "old-k8s-version-082859" context setting]
	I1204 21:16:59.401105   75464 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/kubeconfig: {Name:mk338cb7deb77a607d0c199d94a556bdfd19bef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:16:59.519931   75464 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1204 21:16:59.529805   75464 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.180
	I1204 21:16:59.529848   75464 kubeadm.go:1160] stopping kube-system containers ...
	I1204 21:16:59.529862   75464 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1204 21:16:59.529917   75464 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:16:59.564385   75464 cri.go:89] found id: ""
	I1204 21:16:59.564455   75464 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1204 21:16:59.580273   75464 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:16:59.590510   75464 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:16:59.590536   75464 kubeadm.go:157] found existing configuration files:
	
	I1204 21:16:59.590591   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:16:59.599597   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:16:59.599665   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:16:59.609075   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:16:59.618209   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:16:59.618281   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:16:59.627558   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:16:59.636062   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:16:59.636117   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:16:59.645337   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:16:59.653985   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:16:59.654027   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:16:59.662796   75464 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:16:59.671564   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:16:59.805252   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:00.525460   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:00.762769   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:00.873276   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
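	Note: the restart path above re-runs the kubeadm init phases individually (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml. A hypothetical sketch of driving those phases in order (the log additionally wraps each call in `sudo env PATH=...`):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		kubeadm := "/var/lib/minikube/binaries/v1.20.0/kubeadm"
		config := "/var/tmp/minikube/kubeadm.yaml"
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, p := range phases {
			args := append(append([]string{}, p...), "--config", config)
			if out, err := exec.Command(kubeadm, args...).CombinedOutput(); err != nil {
				log.Fatalf("kubeadm %v: %v\n%s", p, err, out)
			}
		}
	}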
	I1204 21:17:00.988761   75464 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:17:00.988887   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:16:58.405630   75137 pod_ready.go:93] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"True"
	I1204 21:16:58.405654   75137 pod_ready.go:82] duration metric: took 8.006745651s for pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:58.405669   75137 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:58.411605   75137 pod_ready.go:93] pod "etcd-embed-certs-566991" in "kube-system" namespace has status "Ready":"True"
	I1204 21:16:58.411634   75137 pod_ready.go:82] duration metric: took 5.952577ms for pod "etcd-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:58.411646   75137 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:58.421660   75137 pod_ready.go:93] pod "kube-apiserver-embed-certs-566991" in "kube-system" namespace has status "Ready":"True"
	I1204 21:16:58.421691   75137 pod_ready.go:82] duration metric: took 10.035417ms for pod "kube-apiserver-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:58.421708   75137 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:59.044823   75137 pod_ready.go:93] pod "kube-controller-manager-embed-certs-566991" in "kube-system" namespace has status "Ready":"True"
	I1204 21:16:59.044853   75137 pod_ready.go:82] duration metric: took 623.135154ms for pod "kube-controller-manager-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:59.044867   75137 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4fv72" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:59.051742   75137 pod_ready.go:93] pod "kube-proxy-4fv72" in "kube-system" namespace has status "Ready":"True"
	I1204 21:16:59.051768   75137 pod_ready.go:82] duration metric: took 6.892711ms for pod "kube-proxy-4fv72" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:59.051782   75137 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:59.058398   75137 pod_ready.go:93] pod "kube-scheduler-embed-certs-566991" in "kube-system" namespace has status "Ready":"True"
	I1204 21:16:59.058429   75137 pod_ready.go:82] duration metric: took 6.638291ms for pod "kube-scheduler-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:59.058444   75137 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:01.066575   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:16:57.826965   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:57.827542   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:57.827566   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:57.827491   76731 retry.go:31] will retry after 1.453756282s: waiting for machine to come up
	I1204 21:16:59.282497   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:59.283001   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:59.283025   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:59.282950   76731 retry.go:31] will retry after 1.921010852s: waiting for machine to come up
	I1204 21:17:01.205877   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:01.206359   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:17:01.206398   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:17:01.206301   76731 retry.go:31] will retry after 2.279555962s: waiting for machine to come up
	I1204 21:17:01.489204   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:01.989039   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:02.489053   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:02.988923   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:03.489839   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:03.989130   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:04.489603   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:04.989625   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:05.489951   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:05.989787   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:03.066938   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:05.565106   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:03.488557   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:03.488993   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:17:03.489064   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:17:03.488956   76731 retry.go:31] will retry after 2.80928606s: waiting for machine to come up
	I1204 21:17:06.300625   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:06.301069   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:17:06.301096   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:17:06.301025   76731 retry.go:31] will retry after 4.272897585s: waiting for machine to come up
	I1204 21:17:06.489826   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:06.989767   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:07.489954   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:07.989772   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:08.488905   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:08.989834   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:09.489780   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:09.989021   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:10.489348   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:10.989123   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
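	Note: the repeated pgrep calls above are a poll loop waiting for the kube-apiserver process to appear after kubelet start. An illustrative sketch of that wait (the interval and deadline here are assumptions, not minikube's actual values):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			// pgrep exits 0 only when a matching kube-apiserver process exists.
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				fmt.Println("kube-apiserver process is up")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for kube-apiserver process")
	}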
	I1204 21:17:08.065690   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:10.566216   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:12.055921   75012 start.go:364] duration metric: took 57.468802465s to acquireMachinesLock for "no-preload-534766"
	I1204 21:17:12.055984   75012 start.go:96] Skipping create...Using existing machine configuration
	I1204 21:17:12.055996   75012 fix.go:54] fixHost starting: 
	I1204 21:17:12.056471   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:17:12.056520   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:17:12.074414   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46455
	I1204 21:17:12.074839   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:17:12.075295   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:17:12.075318   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:17:12.075670   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:17:12.075864   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:17:12.076055   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetState
	I1204 21:17:12.077496   75012 fix.go:112] recreateIfNeeded on no-preload-534766: state=Stopped err=<nil>
	I1204 21:17:12.077518   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	W1204 21:17:12.077683   75012 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 21:17:12.079503   75012 out.go:177] * Restarting existing kvm2 VM for "no-preload-534766" ...
	I1204 21:17:10.578907   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.579430   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Found IP for machine: 192.168.50.171
	I1204 21:17:10.579465   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Reserving static IP address...
	I1204 21:17:10.579482   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has current primary IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.579876   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-439360", mac: "52:54:00:ec:46:31", ip: "192.168.50.171"} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:10.579899   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | skip adding static IP to network mk-default-k8s-diff-port-439360 - found existing host DHCP lease matching {name: "default-k8s-diff-port-439360", mac: "52:54:00:ec:46:31", ip: "192.168.50.171"}
	I1204 21:17:10.579913   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Reserved static IP address: 192.168.50.171
	I1204 21:17:10.579923   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for SSH to be available...
	I1204 21:17:10.579933   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Getting to WaitForSSH function...
	I1204 21:17:10.582141   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.582536   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:10.582564   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.582763   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Using SSH client type: external
	I1204 21:17:10.582808   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa (-rw-------)
	I1204 21:17:10.582840   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.171 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 21:17:10.582851   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | About to run SSH command:
	I1204 21:17:10.582859   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | exit 0
	I1204 21:17:10.707352   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | SSH cmd err, output: <nil>: 
	I1204 21:17:10.707801   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetConfigRaw
	I1204 21:17:10.708495   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetIP
	I1204 21:17:10.710799   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.711127   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:10.711159   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.711348   75746 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/config.json ...
	I1204 21:17:10.711562   75746 machine.go:93] provisionDockerMachine start ...
	I1204 21:17:10.711579   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:17:10.711817   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:10.713971   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.714317   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:10.714344   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.714495   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:10.714683   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:10.714811   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:10.714964   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:10.715109   75746 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:10.715298   75746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.171 22 <nil> <nil>}
	I1204 21:17:10.715311   75746 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 21:17:10.823410   75746 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 21:17:10.823443   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetMachineName
	I1204 21:17:10.823718   75746 buildroot.go:166] provisioning hostname "default-k8s-diff-port-439360"
	I1204 21:17:10.823741   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetMachineName
	I1204 21:17:10.823955   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:10.826607   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.826953   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:10.826977   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.827140   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:10.827331   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:10.827533   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:10.827676   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:10.827852   75746 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:10.828068   75746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.171 22 <nil> <nil>}
	I1204 21:17:10.828084   75746 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-439360 && echo "default-k8s-diff-port-439360" | sudo tee /etc/hostname
	I1204 21:17:10.948599   75746 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-439360
	
	I1204 21:17:10.948633   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:10.951336   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.951719   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:10.951765   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.951905   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:10.952108   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:10.952276   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:10.952423   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:10.952570   75746 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:10.952753   75746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.171 22 <nil> <nil>}
	I1204 21:17:10.952777   75746 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-439360' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-439360/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-439360' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 21:17:11.072543   75746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 21:17:11.072580   75746 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 21:17:11.072611   75746 buildroot.go:174] setting up certificates
	I1204 21:17:11.072620   75746 provision.go:84] configureAuth start
	I1204 21:17:11.072629   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetMachineName
	I1204 21:17:11.072933   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetIP
	I1204 21:17:11.075443   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.075822   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:11.075868   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.075965   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:11.077957   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.078286   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:11.078319   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.078449   75746 provision.go:143] copyHostCerts
	I1204 21:17:11.078506   75746 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 21:17:11.078517   75746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 21:17:11.078571   75746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 21:17:11.078671   75746 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 21:17:11.078681   75746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 21:17:11.078702   75746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 21:17:11.078752   75746 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 21:17:11.078759   75746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 21:17:11.078776   75746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 21:17:11.078819   75746 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-439360 san=[127.0.0.1 192.168.50.171 default-k8s-diff-port-439360 localhost minikube]
	I1204 21:17:11.404256   75746 provision.go:177] copyRemoteCerts
	I1204 21:17:11.404320   75746 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 21:17:11.404348   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:11.406963   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.407316   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:11.407343   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.407542   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:11.407706   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:11.407881   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:11.407991   75746 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa Username:docker}
	I1204 21:17:11.493691   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 21:17:11.519867   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1204 21:17:11.542295   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1204 21:17:11.564775   75746 provision.go:87] duration metric: took 492.141737ms to configureAuth
	I1204 21:17:11.564801   75746 buildroot.go:189] setting minikube options for container-runtime
	I1204 21:17:11.564975   75746 config.go:182] Loaded profile config "default-k8s-diff-port-439360": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:17:11.565063   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:11.567990   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.568364   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:11.568394   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.568556   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:11.568780   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:11.568951   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:11.569102   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:11.569277   75746 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:11.569476   75746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.171 22 <nil> <nil>}
	I1204 21:17:11.569494   75746 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 21:17:11.809413   75746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 21:17:11.809462   75746 machine.go:96] duration metric: took 1.097886094s to provisionDockerMachine
	I1204 21:17:11.809482   75746 start.go:293] postStartSetup for "default-k8s-diff-port-439360" (driver="kvm2")
	I1204 21:17:11.809493   75746 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 21:17:11.809510   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:17:11.809913   75746 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 21:17:11.809954   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:11.812724   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.813137   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:11.813183   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.813276   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:11.813481   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:11.813659   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:11.813807   75746 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa Username:docker}
	I1204 21:17:11.901984   75746 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 21:17:11.906206   75746 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 21:17:11.906243   75746 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 21:17:11.906323   75746 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 21:17:11.906421   75746 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 21:17:11.906550   75746 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 21:17:11.915692   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:17:11.938378   75746 start.go:296] duration metric: took 128.880842ms for postStartSetup
	I1204 21:17:11.938425   75746 fix.go:56] duration metric: took 20.350760099s for fixHost
	I1204 21:17:11.938449   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:11.941283   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.941662   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:11.941683   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.941814   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:11.942015   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:11.942207   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:11.942314   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:11.942446   75746 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:11.942630   75746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.171 22 <nil> <nil>}
	I1204 21:17:11.942643   75746 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 21:17:12.055721   75746 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733347032.018698016
	
	I1204 21:17:12.055741   75746 fix.go:216] guest clock: 1733347032.018698016
	I1204 21:17:12.055761   75746 fix.go:229] Guest: 2024-12-04 21:17:12.018698016 +0000 UTC Remote: 2024-12-04 21:17:11.938429419 +0000 UTC m=+249.319395751 (delta=80.268597ms)
	I1204 21:17:12.055787   75746 fix.go:200] guest clock delta is within tolerance: 80.268597ms
	I1204 21:17:12.055794   75746 start.go:83] releasing machines lock for "default-k8s-diff-port-439360", held for 20.468177017s
	I1204 21:17:12.055827   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:17:12.056125   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetIP
	I1204 21:17:12.058787   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:12.059284   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:12.059312   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:12.059488   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:17:12.060013   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:17:12.060202   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:17:12.060290   75746 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 21:17:12.060342   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:12.060462   75746 ssh_runner.go:195] Run: cat /version.json
	I1204 21:17:12.060489   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:12.063286   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:12.063423   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:12.063682   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:12.063746   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:12.063837   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:12.063938   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:12.064005   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:12.064065   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:12.064231   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:12.064305   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:12.064403   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:12.064563   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:12.064588   75746 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa Username:docker}
	I1204 21:17:12.064695   75746 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa Username:docker}
	I1204 21:17:12.144087   75746 ssh_runner.go:195] Run: systemctl --version
	I1204 21:17:12.168976   75746 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 21:17:12.317913   75746 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 21:17:12.324234   75746 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 21:17:12.324327   75746 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 21:17:12.344571   75746 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 21:17:12.344601   75746 start.go:495] detecting cgroup driver to use...
	I1204 21:17:12.344674   75746 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 21:17:12.361232   75746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 21:17:12.375069   75746 docker.go:217] disabling cri-docker service (if available) ...
	I1204 21:17:12.375139   75746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 21:17:12.388561   75746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 21:17:12.404338   75746 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 21:17:12.527885   75746 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 21:17:12.716924   75746 docker.go:233] disabling docker service ...
	I1204 21:17:12.717011   75746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 21:17:12.735556   75746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 21:17:12.751951   75746 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 21:17:12.872456   75746 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 21:17:12.997321   75746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 21:17:13.012576   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 21:17:13.032524   75746 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 21:17:13.032590   75746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:13.042551   75746 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 21:17:13.042612   75746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:13.052819   75746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:13.063234   75746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:13.074023   75746 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 21:17:13.084457   75746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:13.094614   75746 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:13.112649   75746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:13.122898   75746 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 21:17:13.132312   75746 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 21:17:13.132357   75746 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 21:17:13.145174   75746 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 21:17:13.154748   75746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:17:13.280272   75746 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 21:17:13.375481   75746 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 21:17:13.375579   75746 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 21:17:13.380388   75746 start.go:563] Will wait 60s for crictl version
	I1204 21:17:13.380450   75746 ssh_runner.go:195] Run: which crictl
	I1204 21:17:13.384263   75746 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 21:17:13.426552   75746 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 21:17:13.426644   75746 ssh_runner.go:195] Run: crio --version
	I1204 21:17:13.464906   75746 ssh_runner.go:195] Run: crio --version
	I1204 21:17:13.493254   75746 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 21:17:11.488961   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:11.989692   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:12.489695   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:12.989533   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:13.489139   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:13.989580   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:14.488981   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:14.989089   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:15.489662   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:15.989301   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:13.069008   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:15.565897   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:12.080766   75012 main.go:141] libmachine: (no-preload-534766) Calling .Start
	I1204 21:17:12.080951   75012 main.go:141] libmachine: (no-preload-534766) Ensuring networks are active...
	I1204 21:17:12.081751   75012 main.go:141] libmachine: (no-preload-534766) Ensuring network default is active
	I1204 21:17:12.082112   75012 main.go:141] libmachine: (no-preload-534766) Ensuring network mk-no-preload-534766 is active
	I1204 21:17:12.082532   75012 main.go:141] libmachine: (no-preload-534766) Getting domain xml...
	I1204 21:17:12.083134   75012 main.go:141] libmachine: (no-preload-534766) Creating domain...
	I1204 21:17:13.416717   75012 main.go:141] libmachine: (no-preload-534766) Waiting to get IP...
	I1204 21:17:13.417831   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:13.418295   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:13.418381   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:13.418275   76934 retry.go:31] will retry after 213.310094ms: waiting for machine to come up
	I1204 21:17:13.632755   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:13.633250   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:13.633283   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:13.633181   76934 retry.go:31] will retry after 325.003683ms: waiting for machine to come up
	I1204 21:17:13.959863   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:13.960467   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:13.960503   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:13.960377   76934 retry.go:31] will retry after 392.851447ms: waiting for machine to come up
	I1204 21:17:14.355246   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:14.355720   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:14.355748   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:14.355681   76934 retry.go:31] will retry after 378.518603ms: waiting for machine to come up
	I1204 21:17:14.736283   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:14.737039   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:14.737105   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:14.737017   76934 retry.go:31] will retry after 536.132786ms: waiting for machine to come up
	I1204 21:17:15.274405   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:15.274929   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:15.274962   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:15.274891   76934 retry.go:31] will retry after 606.890197ms: waiting for machine to come up
	I1204 21:17:15.884088   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:15.884700   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:15.884745   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:15.884632   76934 retry.go:31] will retry after 1.088992333s: waiting for machine to come up
	I1204 21:17:16.975049   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:16.975514   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:16.975545   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:16.975458   76934 retry.go:31] will retry after 925.830658ms: waiting for machine to come up
	I1204 21:17:13.494527   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetIP
	I1204 21:17:13.498111   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:13.498524   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:13.498560   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:13.498792   75746 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1204 21:17:13.503083   75746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:17:13.518900   75746 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-439360 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-439360 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.171 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 21:17:13.519043   75746 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 21:17:13.519134   75746 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:17:13.562529   75746 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1204 21:17:13.562643   75746 ssh_runner.go:195] Run: which lz4
	I1204 21:17:13.566970   75746 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1204 21:17:13.571398   75746 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1204 21:17:13.571447   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1204 21:17:14.863136   75746 crio.go:462] duration metric: took 1.296192361s to copy over tarball
	I1204 21:17:14.863225   75746 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1204 21:17:17.017949   75746 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.154693143s)
	I1204 21:17:17.017978   75746 crio.go:469] duration metric: took 2.154810491s to extract the tarball
	I1204 21:17:17.017988   75746 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1204 21:17:17.053935   75746 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:17:17.099773   75746 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 21:17:17.099800   75746 cache_images.go:84] Images are preloaded, skipping loading
	I1204 21:17:17.099809   75746 kubeadm.go:934] updating node { 192.168.50.171 8444 v1.31.2 crio true true} ...
	I1204 21:17:17.099909   75746 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-439360 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.171
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-439360 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 21:17:17.099973   75746 ssh_runner.go:195] Run: crio config
	I1204 21:17:17.145449   75746 cni.go:84] Creating CNI manager for ""
	I1204 21:17:17.145481   75746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:17:17.145493   75746 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 21:17:17.145525   75746 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.171 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-439360 NodeName:default-k8s-diff-port-439360 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.171"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.171 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 21:17:17.145689   75746 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.171
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-439360"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.171"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.171"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1204 21:17:17.145761   75746 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 21:17:17.156960   75746 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 21:17:17.157034   75746 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 21:17:17.169101   75746 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1204 21:17:17.186548   75746 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 21:17:17.203582   75746 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I1204 21:17:17.220406   75746 ssh_runner.go:195] Run: grep 192.168.50.171	control-plane.minikube.internal$ /etc/hosts
	I1204 21:17:17.224281   75746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.171	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:17:17.237759   75746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:17:17.368925   75746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:17:17.389017   75746 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360 for IP: 192.168.50.171
	I1204 21:17:17.389042   75746 certs.go:194] generating shared ca certs ...
	I1204 21:17:17.389062   75746 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:17:17.389231   75746 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 21:17:17.389302   75746 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 21:17:17.389314   75746 certs.go:256] generating profile certs ...
	I1204 21:17:17.389411   75746 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/client.key
	I1204 21:17:17.389507   75746 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/apiserver.key.b9e485ac
	I1204 21:17:17.389583   75746 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/proxy-client.key
	I1204 21:17:17.389747   75746 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 21:17:17.389784   75746 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 21:17:17.389793   75746 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 21:17:17.389820   75746 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 21:17:17.389842   75746 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 21:17:17.389862   75746 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 21:17:17.389899   75746 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:17:17.390549   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 21:17:17.427087   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 21:17:17.456331   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 21:17:17.481876   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 21:17:17.511173   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1204 21:17:17.535825   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1204 21:17:17.559475   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 21:17:17.585825   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 21:17:17.611495   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 21:17:17.634425   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 21:17:16.489912   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:16.989712   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:17.489508   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:17.989874   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:18.489589   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:18.989133   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:19.489001   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:19.989088   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:20.489170   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:20.989135   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:17.566756   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:20.064248   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:17.903583   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:17.904083   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:17.904130   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:17.904041   76934 retry.go:31] will retry after 1.281115457s: waiting for machine to come up
	I1204 21:17:19.187069   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:19.187625   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:19.187648   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:19.187594   76934 retry.go:31] will retry after 2.116897616s: waiting for machine to come up
	I1204 21:17:21.307136   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:21.307702   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:21.307738   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:21.307639   76934 retry.go:31] will retry after 1.769079667s: waiting for machine to come up
	I1204 21:17:17.658253   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 21:17:17.680554   75746 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 21:17:17.696563   75746 ssh_runner.go:195] Run: openssl version
	I1204 21:17:17.701997   75746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 21:17:17.711909   75746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 21:17:17.716111   75746 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 21:17:17.716163   75746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 21:17:17.721829   75746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 21:17:17.732808   75746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 21:17:17.742766   75746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:17:17.746881   75746 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:17:17.746939   75746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:17:17.752221   75746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 21:17:17.761915   75746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 21:17:17.771473   75746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 21:17:17.775476   75746 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 21:17:17.775527   75746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 21:17:17.780671   75746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 21:17:17.790179   75746 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 21:17:17.794246   75746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 21:17:17.799753   75746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 21:17:17.805228   75746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 21:17:17.810634   75746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 21:17:17.815912   75746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 21:17:17.821125   75746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
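The openssl invocations above use "-checkend 86400", i.e. they fail if the certificate expires within the next 24 hours. A rough Go equivalent using crypto/x509 (the path and helper name are illustrative, not minikube code):

// Rough equivalent of `openssl x509 -noout -checkend 86400`: parse a PEM
// certificate and report whether it expires within the given duration.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}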
	I1204 21:17:17.826717   75746 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-439360 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.2 ClusterName:default-k8s-diff-port-439360 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.171 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:17:17.826802   75746 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 21:17:17.826852   75746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:17:17.863070   75746 cri.go:89] found id: ""
	I1204 21:17:17.863157   75746 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 21:17:17.872649   75746 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1204 21:17:17.872668   75746 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1204 21:17:17.872706   75746 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1204 21:17:17.881981   75746 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1204 21:17:17.883029   75746 kubeconfig.go:125] found "default-k8s-diff-port-439360" server: "https://192.168.50.171:8444"
	I1204 21:17:17.885369   75746 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1204 21:17:17.894730   75746 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.171
	I1204 21:17:17.894765   75746 kubeadm.go:1160] stopping kube-system containers ...
	I1204 21:17:17.894780   75746 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1204 21:17:17.894845   75746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:17:17.942493   75746 cri.go:89] found id: ""
	I1204 21:17:17.942588   75746 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1204 21:17:17.959606   75746 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:17:17.968768   75746 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:17:17.968793   75746 kubeadm.go:157] found existing configuration files:
	
	I1204 21:17:17.968850   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1204 21:17:17.977375   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:17:17.977437   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:17:17.986188   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1204 21:17:17.995409   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:17:17.995464   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:17:18.004396   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1204 21:17:18.012964   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:17:18.013033   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:17:18.021927   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1204 21:17:18.030158   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:17:18.030212   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:17:18.038704   75746 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
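The grep/rm sequence above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes files that do not reference it, so the following kubeadm init phases regenerate them. A simplified, local Go equivalent (minikube actually runs the grep and rm over SSH on the guest):

// Simplified local sketch of the stale-kubeconfig cleanup shown above:
// remove any conf that does not reference the expected endpoint.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8444"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, c := range confs {
		data, err := os.ReadFile(c)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing elsewhere: remove so kubeadm recreates it.
			_ = os.Remove(c)
			fmt.Println("removed stale config:", c)
		}
	}
}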
	I1204 21:17:18.047518   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:18.157472   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:18.779212   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:18.992111   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:19.080195   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:19.185206   75746 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:17:19.185296   75746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:19.686192   75746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:20.186010   75746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:20.685422   75746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:21.185548   75746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:21.221082   75746 api_server.go:72] duration metric: took 2.035875276s to wait for apiserver process to appear ...
	I1204 21:17:21.221111   75746 api_server.go:88] waiting for apiserver healthz status ...
	I1204 21:17:21.221130   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:17:21.221582   75746 api_server.go:269] stopped: https://192.168.50.171:8444/healthz: Get "https://192.168.50.171:8444/healthz": dial tcp 192.168.50.171:8444: connect: connection refused
	I1204 21:17:21.722031   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:17:24.428658   75746 api_server.go:279] https://192.168.50.171:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1204 21:17:24.428710   75746 api_server.go:103] status: https://192.168.50.171:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1204 21:17:24.428730   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:17:24.469367   75746 api_server.go:279] https://192.168.50.171:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1204 21:17:24.469398   75746 api_server.go:103] status: https://192.168.50.171:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1204 21:17:24.721854   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:17:24.728276   75746 api_server.go:279] https://192.168.50.171:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:17:24.728306   75746 api_server.go:103] status: https://192.168.50.171:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:17:25.221658   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:17:25.226223   75746 api_server.go:279] https://192.168.50.171:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:17:25.226274   75746 api_server.go:103] status: https://192.168.50.171:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:17:25.722014   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:17:25.727726   75746 api_server.go:279] https://192.168.50.171:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:17:25.727764   75746 api_server.go:103] status: https://192.168.50.171:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:17:26.221331   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:17:26.226659   75746 api_server.go:279] https://192.168.50.171:8444/healthz returned 200:
	ok
	I1204 21:17:26.234549   75746 api_server.go:141] control plane version: v1.31.2
	I1204 21:17:26.234585   75746 api_server.go:131] duration metric: took 5.013466041s to wait for apiserver health ...
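The 403 / 500 / 200 sequence above is the apiserver coming up: anonymous /healthz requests are first rejected, then the endpoint returns 500 while post-start hooks (rbac/bootstrap-roles, bootstrap-system-priority-classes) are still pending, and finally "ok". A minimal sketch of that wait loop, assuming TLS verification is skipped because the apiserver uses a cluster-local CA (this is not the api_server.go implementation):

// Minimal healthz wait: poll until 200 OK or the deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned "ok"
			}
			// 403 (anonymous) and 500 (post-start hooks pending) are retried.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.50.171:8444/healthz", time.Minute))
}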
	I1204 21:17:26.234596   75746 cni.go:84] Creating CNI manager for ""
	I1204 21:17:26.234605   75746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:17:26.236522   75746 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 21:17:21.489414   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:21.989078   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:22.488990   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:22.989053   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:23.489867   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:23.989164   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:24.489512   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:24.989912   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:25.489849   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:25.988925   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:22.066101   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:24.067073   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:26.565954   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:23.077909   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:23.078294   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:23.078332   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:23.078234   76934 retry.go:31] will retry after 2.199950593s: waiting for machine to come up
	I1204 21:17:25.280397   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:25.280766   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:25.280794   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:25.280713   76934 retry.go:31] will retry after 3.443879968s: waiting for machine to come up
	I1204 21:17:26.237773   75746 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 21:17:26.260416   75746 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1204 21:17:26.287032   75746 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 21:17:26.301607   75746 system_pods.go:59] 8 kube-system pods found
	I1204 21:17:26.301658   75746 system_pods.go:61] "coredns-7c65d6cfc9-8bn89" [ff71708b-97a0-44fd-8cc4-26a36e93919a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1204 21:17:26.301671   75746 system_pods.go:61] "etcd-default-k8s-diff-port-439360" [38ae5f77-f57b-4024-a2ba-1e83e08c303b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1204 21:17:26.301682   75746 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-439360" [47616d96-a85b-47d8-a944-1da01cf7bef6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1204 21:17:26.301693   75746 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-439360" [766c13c3-3bcb-4775-80cf-608e9b207a10] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1204 21:17:26.301703   75746 system_pods.go:61] "kube-proxy-tn2xl" [8485df8b-b984-45c1-8efc-3e910028071a] Running
	I1204 21:17:26.301713   75746 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-439360" [654e74eb-878c-4680-8b68-13bb788a781e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1204 21:17:26.301725   75746 system_pods.go:61] "metrics-server-6867b74b74-lbx5p" [ca850081-0045-4637-b4ac-262ad00ba6d2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:17:26.301731   75746 system_pods.go:61] "storage-provisioner" [b2c9285c-35f2-43b4-8468-17ecef9fe8fc] Running
	I1204 21:17:26.301742   75746 system_pods.go:74] duration metric: took 14.680372ms to wait for pod list to return data ...
	I1204 21:17:26.301756   75746 node_conditions.go:102] verifying NodePressure condition ...
	I1204 21:17:26.305647   75746 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 21:17:26.305680   75746 node_conditions.go:123] node cpu capacity is 2
	I1204 21:17:26.305695   75746 node_conditions.go:105] duration metric: took 3.930691ms to run NodePressure ...
	I1204 21:17:26.305716   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:26.563972   75746 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1204 21:17:26.573253   75746 kubeadm.go:739] kubelet initialised
	I1204 21:17:26.573273   75746 kubeadm.go:740] duration metric: took 9.267719ms waiting for restarted kubelet to initialise ...
	I1204 21:17:26.573281   75746 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:17:26.577507   75746 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-8bn89" in "kube-system" namespace to be "Ready" ...
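The pod_ready wait above polls until the pod's Ready condition is True. A client-go sketch of the same idea, assuming the k8s.io/client-go module is available and using an illustrative kubeconfig path; pod name and namespace are taken from the log, and this is not minikube's pod_ready.go code:

// Sketch: wait for a pod's Ready condition using client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7c65d6cfc9-8bn89", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for Ready")
			return
		case <-time.After(2 * time.Second):
		}
	}
}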
	I1204 21:17:26.489765   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:26.989037   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:27.489507   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:27.989848   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:28.489237   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:28.989067   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:29.488963   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:29.989855   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:30.489905   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:30.989109   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:29.065212   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:31.065889   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:28.726031   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:28.726400   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:28.726452   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:28.726364   76934 retry.go:31] will retry after 3.566067517s: waiting for machine to come up
	I1204 21:17:28.585182   75746 pod_ready.go:103] pod "coredns-7c65d6cfc9-8bn89" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:31.084886   75746 pod_ready.go:103] pod "coredns-7c65d6cfc9-8bn89" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:32.294584   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.295040   75012 main.go:141] libmachine: (no-preload-534766) Found IP for machine: 192.168.61.174
	I1204 21:17:32.295074   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has current primary IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.295086   75012 main.go:141] libmachine: (no-preload-534766) Reserving static IP address...
	I1204 21:17:32.295538   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "no-preload-534766", mac: "52:54:00:85:f1:d6", ip: "192.168.61.174"} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.295572   75012 main.go:141] libmachine: (no-preload-534766) Reserved static IP address: 192.168.61.174
	I1204 21:17:32.295590   75012 main.go:141] libmachine: (no-preload-534766) DBG | skip adding static IP to network mk-no-preload-534766 - found existing host DHCP lease matching {name: "no-preload-534766", mac: "52:54:00:85:f1:d6", ip: "192.168.61.174"}
	I1204 21:17:32.295607   75012 main.go:141] libmachine: (no-preload-534766) DBG | Getting to WaitForSSH function...
	I1204 21:17:32.295621   75012 main.go:141] libmachine: (no-preload-534766) Waiting for SSH to be available...
	I1204 21:17:32.297607   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.298000   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.298039   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.298174   75012 main.go:141] libmachine: (no-preload-534766) DBG | Using SSH client type: external
	I1204 21:17:32.298220   75012 main.go:141] libmachine: (no-preload-534766) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa (-rw-------)
	I1204 21:17:32.298259   75012 main.go:141] libmachine: (no-preload-534766) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 21:17:32.298278   75012 main.go:141] libmachine: (no-preload-534766) DBG | About to run SSH command:
	I1204 21:17:32.298286   75012 main.go:141] libmachine: (no-preload-534766) DBG | exit 0
	I1204 21:17:32.423157   75012 main.go:141] libmachine: (no-preload-534766) DBG | SSH cmd err, output: <nil>: 
	I1204 21:17:32.423564   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetConfigRaw
	I1204 21:17:32.424162   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetIP
	I1204 21:17:32.426685   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.427056   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.427078   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.427325   75012 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/config.json ...
	I1204 21:17:32.427589   75012 machine.go:93] provisionDockerMachine start ...
	I1204 21:17:32.427610   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:17:32.427837   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:32.430261   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.430551   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.430580   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.430724   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:32.430893   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:32.431039   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:32.431148   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:32.431327   75012 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:32.431548   75012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I1204 21:17:32.431564   75012 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 21:17:32.539672   75012 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 21:17:32.539721   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetMachineName
	I1204 21:17:32.539983   75012 buildroot.go:166] provisioning hostname "no-preload-534766"
	I1204 21:17:32.540014   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetMachineName
	I1204 21:17:32.540234   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:32.543046   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.543438   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.543488   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.543664   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:32.543853   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:32.544035   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:32.544158   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:32.544331   75012 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:32.544547   75012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I1204 21:17:32.544567   75012 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-534766 && echo "no-preload-534766" | sudo tee /etc/hostname
	I1204 21:17:32.665569   75012 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-534766
	
	I1204 21:17:32.665609   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:32.668482   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.668881   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.668908   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.669081   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:32.669297   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:32.669479   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:32.669634   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:32.669788   75012 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:32.669945   75012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I1204 21:17:32.669961   75012 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-534766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-534766/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-534766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 21:17:32.789462   75012 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 21:17:32.789510   75012 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 21:17:32.789535   75012 buildroot.go:174] setting up certificates
	I1204 21:17:32.789551   75012 provision.go:84] configureAuth start
	I1204 21:17:32.789568   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetMachineName
	I1204 21:17:32.789878   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetIP
	I1204 21:17:32.792564   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.792886   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.792919   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.793108   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:32.795197   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.795534   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.795569   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.795751   75012 provision.go:143] copyHostCerts
	I1204 21:17:32.795821   75012 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 21:17:32.795835   75012 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 21:17:32.795931   75012 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 21:17:32.796102   75012 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 21:17:32.796118   75012 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 21:17:32.796182   75012 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 21:17:32.796269   75012 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 21:17:32.796278   75012 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 21:17:32.796300   75012 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 21:17:32.796361   75012 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.no-preload-534766 san=[127.0.0.1 192.168.61.174 localhost minikube no-preload-534766]
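The line above generates the machine's server certificate with the SAN list [127.0.0.1 192.168.61.174 localhost minikube no-preload-534766]. A self-signed sketch of issuing a server certificate with those SANs (minikube actually signs with its CA key rather than self-signing; names and validity here are illustrative):

// Sketch: issue a TLS server certificate carrying the SANs from the log.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-534766"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "no-preload-534766"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.174")},
	}
	// Self-signed for brevity: template doubles as parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}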
	I1204 21:17:32.933050   75012 provision.go:177] copyRemoteCerts
	I1204 21:17:32.933117   75012 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 21:17:32.933146   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:32.936027   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.936384   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.936415   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.936604   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:32.936796   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:32.936952   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:32.937127   75012 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:17:33.022226   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 21:17:33.045693   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1204 21:17:33.069396   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 21:17:33.094926   75012 provision.go:87] duration metric: took 305.358907ms to configureAuth
	I1204 21:17:33.094960   75012 buildroot.go:189] setting minikube options for container-runtime
	I1204 21:17:33.095150   75012 config.go:182] Loaded profile config "no-preload-534766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:17:33.095239   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:33.098446   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.098990   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:33.099019   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.099254   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:33.099504   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:33.099655   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:33.099789   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:33.099921   75012 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:33.100074   75012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I1204 21:17:33.100091   75012 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 21:17:33.323107   75012 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 21:17:33.323144   75012 machine.go:96] duration metric: took 895.535234ms to provisionDockerMachine
	I1204 21:17:33.323159   75012 start.go:293] postStartSetup for "no-preload-534766" (driver="kvm2")
	I1204 21:17:33.323169   75012 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 21:17:33.323185   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:17:33.323531   75012 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 21:17:33.323564   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:33.326678   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.327086   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:33.327119   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.327429   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:33.327661   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:33.327827   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:33.327994   75012 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:17:33.411005   75012 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 21:17:33.415701   75012 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 21:17:33.415730   75012 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 21:17:33.415806   75012 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 21:17:33.415879   75012 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 21:17:33.415968   75012 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 21:17:33.425560   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:17:33.450288   75012 start.go:296] duration metric: took 127.116826ms for postStartSetup
	I1204 21:17:33.450330   75012 fix.go:56] duration metric: took 21.394334199s for fixHost
	I1204 21:17:33.450351   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:33.453067   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.453416   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:33.453457   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.453641   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:33.453860   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:33.454049   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:33.454228   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:33.454423   75012 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:33.454621   75012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I1204 21:17:33.454634   75012 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 21:17:33.568277   75012 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733347053.524303417
	
	I1204 21:17:33.568303   75012 fix.go:216] guest clock: 1733347053.524303417
	I1204 21:17:33.568314   75012 fix.go:229] Guest: 2024-12-04 21:17:33.524303417 +0000 UTC Remote: 2024-12-04 21:17:33.450335419 +0000 UTC m=+361.455227272 (delta=73.967998ms)
	I1204 21:17:33.568360   75012 fix.go:200] guest clock delta is within tolerance: 73.967998ms
	I1204 21:17:33.568372   75012 start.go:83] releasing machines lock for "no-preload-534766", held for 21.512415434s
	I1204 21:17:33.568406   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:17:33.568691   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetIP
	I1204 21:17:33.571152   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.571565   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:33.571594   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.571744   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:17:33.572271   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:17:33.572456   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:17:33.572549   75012 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 21:17:33.572593   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:33.572689   75012 ssh_runner.go:195] Run: cat /version.json
	I1204 21:17:33.572717   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:33.575346   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.575691   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.575743   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:33.575773   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.575888   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:33.576065   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:33.576144   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:33.576173   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.576219   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:33.576323   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:33.576391   75012 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:17:33.576501   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:33.576650   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:33.576791   75012 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:17:33.683451   75012 ssh_runner.go:195] Run: systemctl --version
	I1204 21:17:33.689041   75012 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 21:17:33.833862   75012 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 21:17:33.839637   75012 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 21:17:33.839717   75012 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 21:17:33.858207   75012 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 21:17:33.858232   75012 start.go:495] detecting cgroup driver to use...
	I1204 21:17:33.858306   75012 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 21:17:33.876794   75012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 21:17:33.891207   75012 docker.go:217] disabling cri-docker service (if available) ...
	I1204 21:17:33.891280   75012 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 21:17:33.906769   75012 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 21:17:33.926433   75012 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 21:17:34.050681   75012 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 21:17:34.229329   75012 docker.go:233] disabling docker service ...
	I1204 21:17:34.229403   75012 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 21:17:34.243833   75012 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 21:17:34.256619   75012 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 21:17:34.387148   75012 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 21:17:34.522221   75012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 21:17:34.535505   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 21:17:34.553348   75012 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 21:17:34.553423   75012 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:34.564532   75012 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 21:17:34.564595   75012 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:34.574752   75012 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:34.584434   75012 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:34.594161   75012 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 21:17:34.604306   75012 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:34.615504   75012 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:34.633185   75012 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:34.643936   75012 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 21:17:34.653047   75012 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 21:17:34.653122   75012 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 21:17:34.666172   75012 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 21:17:34.675093   75012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:17:34.805178   75012 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 21:17:34.889962   75012 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 21:17:34.890037   75012 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 21:17:34.894648   75012 start.go:563] Will wait 60s for crictl version
	I1204 21:17:34.894699   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:34.898103   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 21:17:34.937886   75012 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 21:17:34.937962   75012 ssh_runner.go:195] Run: crio --version
	I1204 21:17:34.964363   75012 ssh_runner.go:195] Run: crio --version
	I1204 21:17:34.993490   75012 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 21:17:31.489534   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:31.989033   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:32.489372   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:32.989005   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:33.489869   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:33.989236   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:34.489170   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:34.989059   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:35.489909   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:35.989870   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:33.066070   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:35.066291   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:34.994846   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetIP
	I1204 21:17:34.998235   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:34.998720   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:34.998753   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:34.999035   75012 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1204 21:17:35.003082   75012 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:17:35.015163   75012 kubeadm.go:883] updating cluster {Name:no-preload-534766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-534766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.174 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 21:17:35.015286   75012 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 21:17:35.015331   75012 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:17:35.049054   75012 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1204 21:17:35.049081   75012 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1204 21:17:35.049156   75012 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:17:35.049214   75012 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1204 21:17:35.049239   75012 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1204 21:17:35.049291   75012 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:17:35.049172   75012 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:17:35.049217   75012 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:17:35.049159   75012 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:17:35.049220   75012 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:17:35.050579   75012 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:17:35.050648   75012 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1204 21:17:35.050659   75012 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:17:35.050667   75012 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:17:35.050676   75012 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1204 21:17:35.050741   75012 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:17:35.050757   75012 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:17:35.050874   75012 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:17:35.203766   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:17:35.211645   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1204 21:17:35.220184   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:17:35.223055   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:17:35.227332   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:17:35.232234   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1204 21:17:35.242447   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:17:35.298624   75012 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1204 21:17:35.298688   75012 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:17:35.298744   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:35.319397   75012 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1204 21:17:35.319447   75012 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1204 21:17:35.319501   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:35.390893   75012 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1204 21:17:35.390915   75012 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1204 21:17:35.390947   75012 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:17:35.390948   75012 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:17:35.390956   75012 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1204 21:17:35.390979   75012 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:17:35.390999   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:35.391022   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:35.390999   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:35.484125   75012 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1204 21:17:35.484169   75012 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:17:35.484201   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:17:35.484217   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:35.484271   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1204 21:17:35.484305   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:17:35.484330   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:17:35.484396   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:17:35.591277   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:17:35.591397   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:17:35.591450   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:17:35.595733   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1204 21:17:35.595762   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:17:35.595916   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:17:35.723710   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:17:35.723734   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:17:35.723780   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:17:35.723829   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1204 21:17:35.723876   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:17:35.726724   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:17:35.825238   75012 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1204 21:17:35.825353   75012 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1204 21:17:35.852024   75012 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1204 21:17:35.852035   75012 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1204 21:17:35.852146   75012 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1204 21:17:35.852173   75012 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1204 21:17:35.853696   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:17:35.853769   75012 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1204 21:17:35.853821   75012 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1204 21:17:35.853832   75012 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1204 21:17:35.853856   75012 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1204 21:17:35.853865   75012 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1204 21:17:35.853776   75012 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1204 21:17:35.853945   75012 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1204 21:17:35.857231   75012 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1204 21:17:35.858662   75012 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1204 21:17:36.032100   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:17:33.087169   75746 pod_ready.go:93] pod "coredns-7c65d6cfc9-8bn89" in "kube-system" namespace has status "Ready":"True"
	I1204 21:17:33.087197   75746 pod_ready.go:82] duration metric: took 6.509664084s for pod "coredns-7c65d6cfc9-8bn89" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:33.087211   75746 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:33.093283   75746 pod_ready.go:93] pod "etcd-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:17:33.093303   75746 pod_ready.go:82] duration metric: took 6.085079ms for pod "etcd-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:33.093312   75746 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:33.600666   75746 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:17:33.600693   75746 pod_ready.go:82] duration metric: took 507.373672ms for pod "kube-apiserver-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:33.600709   75746 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:35.607575   75746 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:37.608228   75746 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:36.489267   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:36.988973   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:37.489585   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:37.989309   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:38.489371   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:38.989360   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:39.489789   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:39.988900   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:40.489286   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:40.989034   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:37.564796   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:39.566599   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:38.344308   75012 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.490341001s)
	I1204 21:17:38.344349   75012 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1204 21:17:38.344365   75012 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.490487312s)
	I1204 21:17:38.344390   75012 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1204 21:17:38.344412   75012 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1204 21:17:38.344420   75012 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.490542246s)
	I1204 21:17:38.344448   75012 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1204 21:17:38.344455   75012 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1204 21:17:38.344374   75012 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2: (2.490653029s)
	I1204 21:17:38.344496   75012 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1204 21:17:38.344525   75012 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.312392686s)
	I1204 21:17:38.344565   75012 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1204 21:17:38.344602   75012 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:17:38.344638   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:38.344575   75012 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1204 21:17:38.350960   75012 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1204 21:17:40.219155   75012 ssh_runner.go:235] Completed: which crictl: (1.874490212s)
	I1204 21:17:40.219189   75012 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.874713743s)
	I1204 21:17:40.219214   75012 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1204 21:17:40.219246   75012 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1204 21:17:40.219318   75012 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1204 21:17:40.219273   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:17:40.254321   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:17:41.684466   75012 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.465119385s)
	I1204 21:17:41.684505   75012 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1204 21:17:41.684528   75012 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1204 21:17:41.684528   75012 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.430174579s)
	I1204 21:17:41.684583   75012 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1204 21:17:41.684591   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:17:41.722891   75012 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1204 21:17:41.723015   75012 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1204 21:17:39.608290   75746 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:40.107708   75746 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:17:40.107734   75746 pod_ready.go:82] duration metric: took 6.507016831s for pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:40.107748   75746 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-tn2xl" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:40.112808   75746 pod_ready.go:93] pod "kube-proxy-tn2xl" in "kube-system" namespace has status "Ready":"True"
	I1204 21:17:40.112828   75746 pod_ready.go:82] duration metric: took 5.070603ms for pod "kube-proxy-tn2xl" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:40.112839   75746 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:40.117288   75746 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:17:40.117310   75746 pod_ready.go:82] duration metric: took 4.462772ms for pod "kube-scheduler-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:40.117322   75746 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:42.124203   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:41.489491   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:41.989889   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:42.489098   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:42.988954   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:43.489592   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:43.989849   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:44.489924   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:44.989734   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:45.489097   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:45.988947   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:42.065722   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:44.564691   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:46.565747   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:45.306832   75012 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.583796373s)
	I1204 21:17:45.306872   75012 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1204 21:17:45.306945   75012 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.622338759s)
	I1204 21:17:45.306971   75012 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1204 21:17:45.307000   75012 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1204 21:17:45.307064   75012 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1204 21:17:44.624419   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:47.123760   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:46.489924   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:46.989100   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:47.489931   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:47.988925   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:48.489244   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:48.989937   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:49.489048   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:49.989699   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:50.489518   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:50.989032   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:49.065268   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:51.565541   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:47.163771   75012 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.856684542s)
	I1204 21:17:47.163798   75012 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1204 21:17:47.163823   75012 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1204 21:17:47.163885   75012 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1204 21:17:49.222699   75012 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.058784634s)
	I1204 21:17:49.222741   75012 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1204 21:17:49.222773   75012 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1204 21:17:49.222826   75012 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1204 21:17:49.870242   75012 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1204 21:17:49.870292   75012 cache_images.go:123] Successfully loaded all cached images
	I1204 21:17:49.870302   75012 cache_images.go:92] duration metric: took 14.821207564s to LoadCachedImages
	I1204 21:17:49.870320   75012 kubeadm.go:934] updating node { 192.168.61.174 8443 v1.31.2 crio true true} ...
	I1204 21:17:49.870483   75012 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-534766 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-534766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 21:17:49.870571   75012 ssh_runner.go:195] Run: crio config
	I1204 21:17:49.925276   75012 cni.go:84] Creating CNI manager for ""
	I1204 21:17:49.925298   75012 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:17:49.925308   75012 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 21:17:49.925326   75012 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.174 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-534766 NodeName:no-preload-534766 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 21:17:49.925440   75012 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-534766"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.174"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.174"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1204 21:17:49.925505   75012 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 21:17:49.934691   75012 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 21:17:49.934766   75012 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 21:17:49.942998   75012 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1204 21:17:49.958605   75012 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 21:17:49.973770   75012 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1204 21:17:49.989037   75012 ssh_runner.go:195] Run: grep 192.168.61.174	control-plane.minikube.internal$ /etc/hosts
	I1204 21:17:49.992788   75012 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:17:50.004011   75012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:17:50.118056   75012 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:17:50.136689   75012 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766 for IP: 192.168.61.174
	I1204 21:17:50.136717   75012 certs.go:194] generating shared ca certs ...
	I1204 21:17:50.136739   75012 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:17:50.136937   75012 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 21:17:50.136992   75012 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 21:17:50.137007   75012 certs.go:256] generating profile certs ...
	I1204 21:17:50.137129   75012 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/client.key
	I1204 21:17:50.137230   75012 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/apiserver.key.dbe51058
	I1204 21:17:50.137275   75012 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/proxy-client.key
	I1204 21:17:50.137393   75012 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 21:17:50.137422   75012 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 21:17:50.137433   75012 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 21:17:50.137463   75012 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 21:17:50.137484   75012 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 21:17:50.137505   75012 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 21:17:50.137548   75012 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:17:50.138146   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 21:17:50.168457   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 21:17:50.203050   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 21:17:50.227957   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 21:17:50.255463   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1204 21:17:50.283905   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1204 21:17:50.306300   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 21:17:50.328965   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 21:17:50.352366   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 21:17:50.373857   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 21:17:50.396406   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 21:17:50.417969   75012 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 21:17:50.433588   75012 ssh_runner.go:195] Run: openssl version
	I1204 21:17:50.438874   75012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 21:17:50.448896   75012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 21:17:50.453227   75012 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 21:17:50.453301   75012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 21:17:50.458793   75012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 21:17:50.468569   75012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 21:17:50.478055   75012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:17:50.482258   75012 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:17:50.482310   75012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:17:50.487402   75012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 21:17:50.500597   75012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 21:17:50.511367   75012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 21:17:50.516355   75012 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 21:17:50.516415   75012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 21:17:50.522233   75012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 21:17:50.532163   75012 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 21:17:50.536644   75012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 21:17:50.542343   75012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 21:17:50.547915   75012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 21:17:50.553464   75012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 21:17:50.559223   75012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 21:17:50.566119   75012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
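Each `openssl x509 -noout -in <cert> -checkend 86400` run above exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now, which is how the restart path decides the existing control-plane certs can be reused. An equivalent check written directly in Go (a sketch, not minikube's helper; the path in main is taken from the log lines above):

// certExpiringSoon reports whether the PEM certificate at path expires within
// the given window, mirroring `openssl x509 -checkend`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func certExpiringSoon(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := certExpiringSoon("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}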
	I1204 21:17:50.571988   75012 kubeadm.go:392] StartCluster: {Name:no-preload-534766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-534766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.174 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:17:50.572068   75012 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 21:17:50.572135   75012 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:17:50.608793   75012 cri.go:89] found id: ""
	I1204 21:17:50.608879   75012 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 21:17:50.620108   75012 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1204 21:17:50.620133   75012 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1204 21:17:50.620210   75012 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1204 21:17:50.629506   75012 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1204 21:17:50.630887   75012 kubeconfig.go:125] found "no-preload-534766" server: "https://192.168.61.174:8443"
	I1204 21:17:50.633122   75012 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1204 21:17:50.642414   75012 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.174
	I1204 21:17:50.642453   75012 kubeadm.go:1160] stopping kube-system containers ...
	I1204 21:17:50.642468   75012 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1204 21:17:50.642533   75012 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:17:50.681325   75012 cri.go:89] found id: ""
	I1204 21:17:50.681393   75012 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1204 21:17:50.699577   75012 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:17:50.709090   75012 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:17:50.709108   75012 kubeadm.go:157] found existing configuration files:
	
	I1204 21:17:50.709152   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:17:50.717901   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:17:50.717983   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:17:50.727175   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:17:50.735929   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:17:50.736002   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:17:50.744954   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:17:50.753257   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:17:50.753306   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:17:50.762163   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:17:50.770113   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:17:50.770163   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:17:50.778937   75012 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:17:50.787853   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:50.902775   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:51.481273   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:51.689126   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:51.770117   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:51.859903   75012 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:17:51.859993   75012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:49.623769   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:51.624431   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:51.489287   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:51.989952   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:52.489428   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:52.988991   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:53.489424   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:53.989785   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:54.488957   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:54.989777   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:55.489738   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:55.989144   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:52.360655   75012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:52.860583   75012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:52.877280   75012 api_server.go:72] duration metric: took 1.017376864s to wait for apiserver process to appear ...
	I1204 21:17:52.877337   75012 api_server.go:88] waiting for apiserver healthz status ...
	I1204 21:17:52.877365   75012 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I1204 21:17:55.649083   75012 api_server.go:279] https://192.168.61.174:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:17:55.649115   75012 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:17:55.649144   75012 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I1204 21:17:55.655316   75012 api_server.go:279] https://192.168.61.174:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:17:55.655347   75012 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:17:55.877569   75012 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I1204 21:17:55.882206   75012 api_server.go:279] https://192.168.61.174:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:17:55.882235   75012 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:17:56.377778   75012 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I1204 21:17:56.385077   75012 api_server.go:279] https://192.168.61.174:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:17:56.385106   75012 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:17:56.877526   75012 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I1204 21:17:56.882072   75012 api_server.go:279] https://192.168.61.174:8443/healthz returned 200:
	ok
	I1204 21:17:56.890468   75012 api_server.go:141] control plane version: v1.31.2
	I1204 21:17:56.890494   75012 api_server.go:131] duration metric: took 4.013149625s to wait for apiserver health ...
	I1204 21:17:56.890503   75012 cni.go:84] Creating CNI manager for ""
	I1204 21:17:56.890509   75012 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:17:56.892501   75012 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
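The healthz probes above show the usual restart pattern: /healthz returns 500 while post-start hooks such as rbac/bootstrap-roles and bootstrap-controller are still completing, and the caller simply re-polls on a short interval until it gets 200 "ok" (about four seconds in this run). A minimal Go sketch of such a loop, assuming TLS verification is skipped for brevity instead of trusting the cluster CA as minikube does:

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes. Illustrative only; not minikube's api_server.go.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reports healthy
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.61.174:8443/healthz", 4*time.Minute))
}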
	I1204 21:17:53.565824   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:56.064759   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:56.893859   75012 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 21:17:56.903947   75012 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1204 21:17:56.946638   75012 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 21:17:56.965137   75012 system_pods.go:59] 8 kube-system pods found
	I1204 21:17:56.965182   75012 system_pods.go:61] "coredns-7c65d6cfc9-kz2h6" [cf1cadfd-b230-48e0-8b3a-e082fed911a8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1204 21:17:56.965192   75012 system_pods.go:61] "etcd-no-preload-534766" [4150ee73-7ae8-40c0-a259-87375d6e809c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1204 21:17:56.965206   75012 system_pods.go:61] "kube-apiserver-no-preload-534766" [28c85f04-e634-48d2-a996-a1cb3ffb18cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1204 21:17:56.965215   75012 system_pods.go:61] "kube-controller-manager-no-preload-534766" [237872b9-1c2a-4c3e-b26a-d2581d08c936] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1204 21:17:56.965223   75012 system_pods.go:61] "kube-proxy-zb946" [871adaff-d1f6-4f8a-a7db-ec3f861bd9e3] Running
	I1204 21:17:56.965232   75012 system_pods.go:61] "kube-scheduler-no-preload-534766" [b00444c4-8f8e-4c76-a74f-9a57c91cb10d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1204 21:17:56.965240   75012 system_pods.go:61] "metrics-server-6867b74b74-wl8gw" [d7942614-93b1-4707-b471-a0dd38c96c54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:17:56.965246   75012 system_pods.go:61] "storage-provisioner" [062f6e56-6b2d-4ac4-acfd-881ff5171396] Running
	I1204 21:17:56.965254   75012 system_pods.go:74] duration metric: took 18.584748ms to wait for pod list to return data ...
	I1204 21:17:56.965269   75012 node_conditions.go:102] verifying NodePressure condition ...
	I1204 21:17:56.969187   75012 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 21:17:56.969221   75012 node_conditions.go:123] node cpu capacity is 2
	I1204 21:17:56.969232   75012 node_conditions.go:105] duration metric: took 3.958803ms to run NodePressure ...
	I1204 21:17:56.969248   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:53.625414   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:56.123857   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:56.489461   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:56.988952   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:57.489626   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:57.989474   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:58.489775   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:58.989218   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:59.489030   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:59.989163   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:00.489738   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:00.989048   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:00.989130   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:01.025049   75464 cri.go:89] found id: ""
	I1204 21:18:01.025100   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.025112   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:01.025124   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:01.025188   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:01.056420   75464 cri.go:89] found id: ""
	I1204 21:18:01.056444   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.056451   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:01.056456   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:01.056512   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:01.090847   75464 cri.go:89] found id: ""
	I1204 21:18:01.090872   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.090882   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:01.090889   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:01.090948   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:01.125984   75464 cri.go:89] found id: ""
	I1204 21:18:01.126013   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.126022   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:01.126030   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:01.126088   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:01.160828   75464 cri.go:89] found id: ""
	I1204 21:18:01.160856   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.160866   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:01.160873   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:01.160930   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:01.192601   75464 cri.go:89] found id: ""
	I1204 21:18:01.192629   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.192641   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:01.192649   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:01.192712   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:01.223093   75464 cri.go:89] found id: ""
	I1204 21:18:01.223119   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.223129   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:01.223136   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:01.223199   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:01.252668   75464 cri.go:89] found id: ""
	I1204 21:18:01.252692   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.252702   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:01.252713   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:01.252733   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 21:17:58.064895   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:00.065648   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:57.242821   75012 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1204 21:17:57.246805   75012 kubeadm.go:739] kubelet initialised
	I1204 21:17:57.246823   75012 kubeadm.go:740] duration metric: took 3.979496ms waiting for restarted kubelet to initialise ...
	I1204 21:17:57.246831   75012 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:17:57.250966   75012 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:57.254870   75012 pod_ready.go:98] node "no-preload-534766" hosting pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.254889   75012 pod_ready.go:82] duration metric: took 3.903445ms for pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace to be "Ready" ...
	E1204 21:17:57.254897   75012 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-534766" hosting pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.254903   75012 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:57.258465   75012 pod_ready.go:98] node "no-preload-534766" hosting pod "etcd-no-preload-534766" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.258484   75012 pod_ready.go:82] duration metric: took 3.574981ms for pod "etcd-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	E1204 21:17:57.258497   75012 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-534766" hosting pod "etcd-no-preload-534766" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.258503   75012 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:57.261881   75012 pod_ready.go:98] node "no-preload-534766" hosting pod "kube-apiserver-no-preload-534766" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.261896   75012 pod_ready.go:82] duration metric: took 3.388572ms for pod "kube-apiserver-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	E1204 21:17:57.261903   75012 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-534766" hosting pod "kube-apiserver-no-preload-534766" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.261908   75012 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:57.349579   75012 pod_ready.go:98] node "no-preload-534766" hosting pod "kube-controller-manager-no-preload-534766" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.349603   75012 pod_ready.go:82] duration metric: took 87.687706ms for pod "kube-controller-manager-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	E1204 21:17:57.349611   75012 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-534766" hosting pod "kube-controller-manager-no-preload-534766" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.349617   75012 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-zb946" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:57.751064   75012 pod_ready.go:93] pod "kube-proxy-zb946" in "kube-system" namespace has status "Ready":"True"
	I1204 21:17:57.751088   75012 pod_ready.go:82] duration metric: took 401.46314ms for pod "kube-proxy-zb946" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:57.751099   75012 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:59.756578   75012 pod_ready.go:103] pod "kube-scheduler-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:01.759056   75012 pod_ready.go:103] pod "kube-scheduler-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
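The pod_ready waits above poll each system-critical pod's Ready condition for up to 4m0s, skipping pods whose node is itself not yet Ready. A rough equivalent that shells out to kubectl (a sketch assuming kubectl and a kubeconfig are available; it does not reproduce the node-not-Ready short-circuit):

// waitPodReady polls a pod's Ready condition via kubectl's jsonpath output
// until it is "True" or the timeout expires. Illustrative only.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitPodReady(namespace, pod string, timeout time.Duration) error {
	jsonpath := `{.status.conditions[?(@.type=="Ready")].status}`
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", pod,
			"-o", "jsonpath="+jsonpath).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", namespace, pod, timeout)
}

func main() {
	fmt.Println(waitPodReady("kube-system", "kube-scheduler-no-preload-534766", 4*time.Minute))
}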
	I1204 21:17:58.125703   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:00.622314   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:02.624045   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	W1204 21:18:01.365301   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:01.365334   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:01.365348   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:01.440474   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:01.440503   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:01.475783   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:01.475815   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:01.525762   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:01.525791   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:04.038867   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:04.050789   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:04.050856   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:04.083319   75464 cri.go:89] found id: ""
	I1204 21:18:04.083345   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.083354   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:04.083360   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:04.083442   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:04.119555   75464 cri.go:89] found id: ""
	I1204 21:18:04.119584   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.119595   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:04.119602   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:04.119661   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:04.152499   75464 cri.go:89] found id: ""
	I1204 21:18:04.152529   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.152538   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:04.152544   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:04.152592   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:04.184678   75464 cri.go:89] found id: ""
	I1204 21:18:04.184705   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.184716   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:04.184724   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:04.184784   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:04.220006   75464 cri.go:89] found id: ""
	I1204 21:18:04.220038   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.220050   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:04.220058   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:04.220121   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:04.254841   75464 cri.go:89] found id: ""
	I1204 21:18:04.254871   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.254880   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:04.254887   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:04.254954   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:04.289126   75464 cri.go:89] found id: ""
	I1204 21:18:04.289163   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.289175   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:04.289189   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:04.289255   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:04.323036   75464 cri.go:89] found id: ""
	I1204 21:18:04.323067   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.323077   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:04.323089   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:04.323103   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:04.371548   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:04.371585   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:04.384651   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:04.384681   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:04.452247   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:04.452273   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:04.452288   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:04.527924   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:04.527965   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:02.564676   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:04.566721   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:04.260269   75012 pod_ready.go:103] pod "kube-scheduler-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:06.757334   75012 pod_ready.go:103] pod "kube-scheduler-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:05.123833   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:07.124130   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:07.100780   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:07.113549   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:07.113617   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:07.150930   75464 cri.go:89] found id: ""
	I1204 21:18:07.150964   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.150976   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:07.150984   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:07.151046   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:07.185223   75464 cri.go:89] found id: ""
	I1204 21:18:07.185254   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.185264   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:07.185271   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:07.185332   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:07.222423   75464 cri.go:89] found id: ""
	I1204 21:18:07.222449   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.222458   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:07.222463   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:07.222526   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:07.258926   75464 cri.go:89] found id: ""
	I1204 21:18:07.258952   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.258960   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:07.258966   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:07.259022   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:07.292424   75464 cri.go:89] found id: ""
	I1204 21:18:07.292467   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.292478   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:07.292505   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:07.292566   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:07.323354   75464 cri.go:89] found id: ""
	I1204 21:18:07.323397   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.323409   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:07.323416   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:07.323462   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:07.352085   75464 cri.go:89] found id: ""
	I1204 21:18:07.352106   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.352114   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:07.352121   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:07.352177   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:07.383335   75464 cri.go:89] found id: ""
	I1204 21:18:07.383364   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.383386   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:07.383397   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:07.383410   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:07.469409   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:07.469440   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:07.508442   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:07.508468   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:07.555103   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:07.555133   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:07.568938   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:07.568965   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:07.632515   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:10.133153   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:10.146482   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:10.146542   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:10.178660   75464 cri.go:89] found id: ""
	I1204 21:18:10.178694   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.178706   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:10.178714   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:10.178768   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:10.207815   75464 cri.go:89] found id: ""
	I1204 21:18:10.207836   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.207843   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:10.207849   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:10.207893   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:10.246253   75464 cri.go:89] found id: ""
	I1204 21:18:10.246283   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.246300   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:10.246307   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:10.246371   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:10.296820   75464 cri.go:89] found id: ""
	I1204 21:18:10.296862   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.296873   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:10.296881   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:10.296941   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:10.341855   75464 cri.go:89] found id: ""
	I1204 21:18:10.341885   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.341896   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:10.341904   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:10.341977   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:10.370283   75464 cri.go:89] found id: ""
	I1204 21:18:10.370311   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.370319   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:10.370324   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:10.370382   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:10.401149   75464 cri.go:89] found id: ""
	I1204 21:18:10.401177   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.401187   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:10.401195   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:10.401249   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:10.436026   75464 cri.go:89] found id: ""
	I1204 21:18:10.436058   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.436068   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:10.436082   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:10.436096   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:10.488499   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:10.488534   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:10.502316   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:10.502345   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:10.577694   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:10.577727   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:10.577754   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:10.657801   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:10.657835   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:07.064613   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:09.564473   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:09.257032   75012 pod_ready.go:103] pod "kube-scheduler-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:11.758214   75012 pod_ready.go:93] pod "kube-scheduler-no-preload-534766" in "kube-system" namespace has status "Ready":"True"
	I1204 21:18:11.758241   75012 pod_ready.go:82] duration metric: took 14.007134999s for pod "kube-scheduler-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:18:11.758255   75012 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace to be "Ready" ...
	I1204 21:18:09.623451   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:11.624433   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:13.195044   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:13.208486   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:13.208540   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:13.250608   75464 cri.go:89] found id: ""
	I1204 21:18:13.250632   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.250643   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:13.250650   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:13.250710   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:13.280897   75464 cri.go:89] found id: ""
	I1204 21:18:13.280922   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.280933   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:13.280940   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:13.281047   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:13.311664   75464 cri.go:89] found id: ""
	I1204 21:18:13.311686   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.311696   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:13.311702   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:13.311759   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:13.341158   75464 cri.go:89] found id: ""
	I1204 21:18:13.341187   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.341199   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:13.341206   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:13.341261   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:13.371887   75464 cri.go:89] found id: ""
	I1204 21:18:13.371908   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.371915   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:13.371922   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:13.371968   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:13.403036   75464 cri.go:89] found id: ""
	I1204 21:18:13.403064   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.403072   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:13.403077   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:13.403123   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:13.440657   75464 cri.go:89] found id: ""
	I1204 21:18:13.440682   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.440689   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:13.440694   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:13.440738   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:13.478384   75464 cri.go:89] found id: ""
	I1204 21:18:13.478413   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.478421   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:13.478430   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:13.478442   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:13.533364   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:13.533405   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:13.546299   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:13.546338   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:13.617067   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:13.617092   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:13.617108   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:13.697323   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:13.697355   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:16.235494   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:16.248551   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:16.248615   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:16.286875   75464 cri.go:89] found id: ""
	I1204 21:18:16.286904   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.286915   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:16.286922   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:16.286986   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:12.064198   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:14.565965   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:13.764062   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:15.764749   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:14.122381   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:16.123985   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:16.325441   75464 cri.go:89] found id: ""
	I1204 21:18:16.325469   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.325481   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:16.325486   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:16.325544   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:16.361896   75464 cri.go:89] found id: ""
	I1204 21:18:16.361919   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.361926   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:16.361932   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:16.361994   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:16.394290   75464 cri.go:89] found id: ""
	I1204 21:18:16.394315   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.394322   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:16.394328   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:16.394377   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:16.429685   75464 cri.go:89] found id: ""
	I1204 21:18:16.429713   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.429724   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:16.429731   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:16.429807   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:16.459942   75464 cri.go:89] found id: ""
	I1204 21:18:16.459982   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.459993   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:16.460000   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:16.460065   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:16.488957   75464 cri.go:89] found id: ""
	I1204 21:18:16.488982   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.488992   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:16.489005   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:16.489060   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:16.518311   75464 cri.go:89] found id: ""
	I1204 21:18:16.518346   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.518357   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:16.518369   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:16.518382   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:16.569753   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:16.569784   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:16.583689   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:16.583721   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:16.650086   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:16.650107   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:16.650120   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:16.732000   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:16.732046   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:19.270288   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:19.283231   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:19.283322   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:19.320680   75464 cri.go:89] found id: ""
	I1204 21:18:19.320712   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.320724   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:19.320732   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:19.320799   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:19.358318   75464 cri.go:89] found id: ""
	I1204 21:18:19.358352   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.358363   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:19.358370   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:19.358431   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:19.391181   75464 cri.go:89] found id: ""
	I1204 21:18:19.391208   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.391218   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:19.391224   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:19.391285   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:19.422319   75464 cri.go:89] found id: ""
	I1204 21:18:19.422345   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.422355   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:19.422362   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:19.422422   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:19.452909   75464 cri.go:89] found id: ""
	I1204 21:18:19.452941   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.452952   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:19.452960   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:19.453017   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:19.483548   75464 cri.go:89] found id: ""
	I1204 21:18:19.483582   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.483592   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:19.483600   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:19.483666   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:19.518776   75464 cri.go:89] found id: ""
	I1204 21:18:19.518810   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.518821   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:19.518828   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:19.518889   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:19.552455   75464 cri.go:89] found id: ""
	I1204 21:18:19.552487   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.552500   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:19.552513   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:19.552527   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:19.567348   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:19.567397   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:19.640782   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:19.640803   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:19.640815   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:19.721369   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:19.721400   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:19.765558   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:19.765590   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:17.065011   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:19.065236   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:21.565950   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:17.764887   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:19.766264   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:18.125223   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:20.623183   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:22.623901   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:22.315311   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:22.327974   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:22.328053   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:22.361960   75464 cri.go:89] found id: ""
	I1204 21:18:22.361984   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.361995   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:22.362002   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:22.362056   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:22.393481   75464 cri.go:89] found id: ""
	I1204 21:18:22.393506   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.393514   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:22.393520   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:22.393570   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:22.424233   75464 cri.go:89] found id: ""
	I1204 21:18:22.424261   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.424273   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:22.424280   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:22.424335   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:22.454307   75464 cri.go:89] found id: ""
	I1204 21:18:22.454335   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.454346   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:22.454354   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:22.454405   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:22.485880   75464 cri.go:89] found id: ""
	I1204 21:18:22.485905   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.485913   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:22.485918   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:22.485971   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:22.522382   75464 cri.go:89] found id: ""
	I1204 21:18:22.522408   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.522416   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:22.522421   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:22.522475   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:22.555179   75464 cri.go:89] found id: ""
	I1204 21:18:22.555202   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.555210   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:22.555215   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:22.555266   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:22.588587   75464 cri.go:89] found id: ""
	I1204 21:18:22.588608   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.588615   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:22.588622   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:22.588632   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:22.640369   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:22.640393   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:22.652322   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:22.652342   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:22.716150   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:22.716175   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:22.716195   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:22.792723   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:22.792749   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:25.329963   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:25.342514   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:25.342563   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:25.374518   75464 cri.go:89] found id: ""
	I1204 21:18:25.374543   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.374555   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:25.374562   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:25.374620   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:25.405479   75464 cri.go:89] found id: ""
	I1204 21:18:25.405520   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.405531   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:25.405538   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:25.405601   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:25.436844   75464 cri.go:89] found id: ""
	I1204 21:18:25.436867   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.436877   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:25.436884   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:25.436943   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:25.468887   75464 cri.go:89] found id: ""
	I1204 21:18:25.468910   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.468917   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:25.468923   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:25.468977   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:25.504326   75464 cri.go:89] found id: ""
	I1204 21:18:25.504348   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.504355   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:25.504361   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:25.504410   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:25.542531   75464 cri.go:89] found id: ""
	I1204 21:18:25.542552   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.542560   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:25.542566   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:25.542626   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:25.576293   75464 cri.go:89] found id: ""
	I1204 21:18:25.576316   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.576330   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:25.576338   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:25.576389   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:25.609662   75464 cri.go:89] found id: ""
	I1204 21:18:25.609692   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.609700   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:25.609708   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:25.609724   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:25.665411   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:25.665446   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:25.680149   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:25.680183   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:25.751100   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:25.751123   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:25.751140   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:25.838913   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:25.838952   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:24.065487   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:26.565568   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:22.264581   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:24.268000   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:26.764294   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:25.123981   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:27.125094   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:28.379209   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:28.392708   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:28.392771   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:28.426519   75464 cri.go:89] found id: ""
	I1204 21:18:28.426547   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.426555   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:28.426561   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:28.426608   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:28.459648   75464 cri.go:89] found id: ""
	I1204 21:18:28.459678   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.459689   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:28.459696   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:28.459757   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:28.489982   75464 cri.go:89] found id: ""
	I1204 21:18:28.490010   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.490021   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:28.490029   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:28.490101   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:28.525203   75464 cri.go:89] found id: ""
	I1204 21:18:28.525228   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.525235   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:28.525240   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:28.525285   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:28.554808   75464 cri.go:89] found id: ""
	I1204 21:18:28.554836   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.554845   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:28.554850   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:28.554911   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:28.586406   75464 cri.go:89] found id: ""
	I1204 21:18:28.586427   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.586434   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:28.586441   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:28.586484   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:28.622419   75464 cri.go:89] found id: ""
	I1204 21:18:28.622444   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.622455   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:28.622462   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:28.622520   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:28.651604   75464 cri.go:89] found id: ""
	I1204 21:18:28.651625   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.651632   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:28.651639   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:28.651654   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:28.714430   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:28.714458   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:28.714473   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:28.791444   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:28.791472   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:28.827808   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:28.827831   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:28.875308   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:28.875336   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:28.566277   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:30.566465   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:28.765108   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:30.765282   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:29.624139   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:31.624944   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:31.388578   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:31.401539   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:31.401598   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:31.443462   75464 cri.go:89] found id: ""
	I1204 21:18:31.443496   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.443504   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:31.443509   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:31.443557   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:31.482522   75464 cri.go:89] found id: ""
	I1204 21:18:31.482548   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.482559   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:31.482568   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:31.482623   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:31.520579   75464 cri.go:89] found id: ""
	I1204 21:18:31.520609   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.520618   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:31.520624   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:31.520684   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:31.559637   75464 cri.go:89] found id: ""
	I1204 21:18:31.559683   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.559692   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:31.559699   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:31.559761   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:31.592633   75464 cri.go:89] found id: ""
	I1204 21:18:31.592665   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.592677   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:31.592685   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:31.592748   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:31.627002   75464 cri.go:89] found id: ""
	I1204 21:18:31.627022   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.627029   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:31.627035   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:31.627083   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:31.663333   75464 cri.go:89] found id: ""
	I1204 21:18:31.663380   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.663392   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:31.663400   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:31.663465   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:31.697813   75464 cri.go:89] found id: ""
	I1204 21:18:31.697848   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.697860   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:31.697869   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:31.697882   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:31.747666   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:31.747701   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:31.761371   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:31.761402   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:31.831098   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:31.831123   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:31.831143   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:31.912161   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:31.912199   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:34.450322   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:34.463442   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:34.463503   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:34.497333   75464 cri.go:89] found id: ""
	I1204 21:18:34.497363   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.497371   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:34.497377   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:34.497449   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:34.531057   75464 cri.go:89] found id: ""
	I1204 21:18:34.531093   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.531105   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:34.531113   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:34.531180   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:34.566899   75464 cri.go:89] found id: ""
	I1204 21:18:34.566926   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.566934   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:34.566940   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:34.566989   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:34.600393   75464 cri.go:89] found id: ""
	I1204 21:18:34.600422   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.600430   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:34.600436   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:34.600503   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:34.636027   75464 cri.go:89] found id: ""
	I1204 21:18:34.636060   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.636072   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:34.636082   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:34.636159   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:34.670624   75464 cri.go:89] found id: ""
	I1204 21:18:34.670650   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.670658   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:34.670666   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:34.670727   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:34.702209   75464 cri.go:89] found id: ""
	I1204 21:18:34.702241   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.702253   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:34.702261   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:34.702330   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:34.733135   75464 cri.go:89] found id: ""
	I1204 21:18:34.733156   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.733174   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:34.733191   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:34.733207   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:34.768969   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:34.768993   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:34.816493   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:34.816531   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:34.829450   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:34.829476   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:34.897968   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:34.898000   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:34.898018   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:32.566614   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:35.064944   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:33.264871   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:35.265285   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:33.625223   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:36.123006   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:37.477937   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:37.491778   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:37.491856   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:37.529962   75464 cri.go:89] found id: ""
	I1204 21:18:37.529995   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.530005   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:37.530013   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:37.530081   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:37.564769   75464 cri.go:89] found id: ""
	I1204 21:18:37.564794   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.564805   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:37.564813   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:37.564879   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:37.601680   75464 cri.go:89] found id: ""
	I1204 21:18:37.601708   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.601720   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:37.601726   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:37.601796   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:37.637221   75464 cri.go:89] found id: ""
	I1204 21:18:37.637247   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.637255   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:37.637261   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:37.637326   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:37.673103   75464 cri.go:89] found id: ""
	I1204 21:18:37.673127   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.673135   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:37.673140   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:37.673200   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:37.710108   75464 cri.go:89] found id: ""
	I1204 21:18:37.710134   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.710147   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:37.710154   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:37.710216   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:37.741506   75464 cri.go:89] found id: ""
	I1204 21:18:37.741530   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.741538   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:37.741544   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:37.741596   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:37.775320   75464 cri.go:89] found id: ""
	I1204 21:18:37.775343   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.775350   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:37.775358   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:37.775389   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:37.839591   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:37.839610   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:37.839633   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:37.915174   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:37.915216   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:37.958900   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:37.958930   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:38.010383   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:38.010418   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:40.525306   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:40.537648   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:40.537706   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:40.573932   75464 cri.go:89] found id: ""
	I1204 21:18:40.573962   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.573973   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:40.573980   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:40.574041   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:40.603917   75464 cri.go:89] found id: ""
	I1204 21:18:40.603943   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.603952   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:40.603961   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:40.604018   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:40.636601   75464 cri.go:89] found id: ""
	I1204 21:18:40.636630   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.636641   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:40.636649   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:40.636710   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:40.673040   75464 cri.go:89] found id: ""
	I1204 21:18:40.673073   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.673085   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:40.673093   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:40.673158   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:40.705330   75464 cri.go:89] found id: ""
	I1204 21:18:40.705357   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.705364   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:40.705371   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:40.705434   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:40.738099   75464 cri.go:89] found id: ""
	I1204 21:18:40.738123   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.738130   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:40.738137   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:40.738184   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:40.770558   75464 cri.go:89] found id: ""
	I1204 21:18:40.770583   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.770590   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:40.770596   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:40.770656   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:40.803461   75464 cri.go:89] found id: ""
	I1204 21:18:40.803489   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.803501   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:40.803512   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:40.803529   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:40.852684   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:40.852726   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:40.865768   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:40.865795   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:40.932542   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:40.932569   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:40.932587   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:41.013378   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:41.013419   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:37.065100   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:39.565212   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:41.566163   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:37.765520   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:39.768005   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:38.623095   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:40.623359   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:43.552845   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:43.567081   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:43.567149   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:43.600562   75464 cri.go:89] found id: ""
	I1204 21:18:43.600595   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.600605   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:43.600618   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:43.600683   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:43.638922   75464 cri.go:89] found id: ""
	I1204 21:18:43.638955   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.638965   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:43.638972   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:43.639037   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:43.674473   75464 cri.go:89] found id: ""
	I1204 21:18:43.674501   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.674509   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:43.674516   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:43.674569   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:43.721312   75464 cri.go:89] found id: ""
	I1204 21:18:43.721339   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.721350   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:43.721357   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:43.721420   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:43.760113   75464 cri.go:89] found id: ""
	I1204 21:18:43.760150   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.760161   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:43.760169   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:43.760233   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:43.794383   75464 cri.go:89] found id: ""
	I1204 21:18:43.794410   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.794418   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:43.794423   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:43.794475   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:43.826611   75464 cri.go:89] found id: ""
	I1204 21:18:43.826646   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.826657   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:43.826666   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:43.826728   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:43.859459   75464 cri.go:89] found id: ""
	I1204 21:18:43.859489   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.859496   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:43.859505   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:43.859518   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:43.871740   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:43.871762   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:43.940838   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:43.940862   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:43.940874   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:44.018931   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:44.018967   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:44.054754   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:44.054786   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:44.066258   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:46.565764   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:42.264400   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:44.765338   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:43.124128   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:45.624394   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:46.614407   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:46.627953   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:46.628009   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:46.662223   75464 cri.go:89] found id: ""
	I1204 21:18:46.662254   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.662263   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:46.662268   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:46.662333   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:46.695931   75464 cri.go:89] found id: ""
	I1204 21:18:46.695955   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.695963   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:46.695969   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:46.696014   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:46.728731   75464 cri.go:89] found id: ""
	I1204 21:18:46.728761   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.728773   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:46.728780   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:46.728841   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:46.762466   75464 cri.go:89] found id: ""
	I1204 21:18:46.762491   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.762499   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:46.762544   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:46.762613   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:46.797253   75464 cri.go:89] found id: ""
	I1204 21:18:46.797279   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.797288   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:46.797295   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:46.797357   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:46.833757   75464 cri.go:89] found id: ""
	I1204 21:18:46.833783   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.833790   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:46.833797   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:46.833845   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:46.865105   75464 cri.go:89] found id: ""
	I1204 21:18:46.865135   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.865147   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:46.865154   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:46.865212   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:46.896358   75464 cri.go:89] found id: ""
	I1204 21:18:46.896385   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.896397   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:46.896408   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:46.896426   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:46.932507   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:46.932536   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:46.985490   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:46.985517   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:46.999509   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:46.999538   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:47.075096   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:47.075119   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:47.075133   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:49.654450   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:49.667708   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:49.667761   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:49.699864   75464 cri.go:89] found id: ""
	I1204 21:18:49.699885   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.699894   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:49.699902   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:49.699954   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:49.732972   75464 cri.go:89] found id: ""
	I1204 21:18:49.732996   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.733004   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:49.733009   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:49.733055   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:49.765103   75464 cri.go:89] found id: ""
	I1204 21:18:49.765124   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.765135   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:49.765142   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:49.765208   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:49.796309   75464 cri.go:89] found id: ""
	I1204 21:18:49.796330   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.796337   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:49.796343   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:49.796401   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:49.826818   75464 cri.go:89] found id: ""
	I1204 21:18:49.826844   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.826855   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:49.826863   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:49.826921   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:49.879437   75464 cri.go:89] found id: ""
	I1204 21:18:49.879463   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.879471   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:49.879477   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:49.879525   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:49.910837   75464 cri.go:89] found id: ""
	I1204 21:18:49.910862   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.910872   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:49.910878   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:49.910937   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:49.941894   75464 cri.go:89] found id: ""
	I1204 21:18:49.941918   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.941927   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:49.941937   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:49.941950   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:49.994300   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:49.994339   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:50.008171   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:50.008207   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:50.083770   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:50.083799   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:50.083815   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:50.161338   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:50.161371   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:49.064407   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:51.066565   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:47.264889   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:49.764731   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:48.123660   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:50.125339   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:52.624437   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:52.699023   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:52.711524   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:52.711599   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:52.744668   75464 cri.go:89] found id: ""
	I1204 21:18:52.744703   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.744715   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:52.744724   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:52.744794   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:52.780504   75464 cri.go:89] found id: ""
	I1204 21:18:52.780529   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.780537   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:52.780546   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:52.780596   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:52.811678   75464 cri.go:89] found id: ""
	I1204 21:18:52.811704   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.811721   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:52.811749   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:52.811815   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:52.849178   75464 cri.go:89] found id: ""
	I1204 21:18:52.849205   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.849216   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:52.849223   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:52.849285   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:52.881715   75464 cri.go:89] found id: ""
	I1204 21:18:52.881740   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.881748   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:52.881753   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:52.881801   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:52.912463   75464 cri.go:89] found id: ""
	I1204 21:18:52.912484   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.912493   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:52.912498   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:52.912541   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:52.941846   75464 cri.go:89] found id: ""
	I1204 21:18:52.941867   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.941874   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:52.941879   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:52.941933   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:52.972043   75464 cri.go:89] found id: ""
	I1204 21:18:52.972067   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.972075   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:52.972083   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:52.972092   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:53.022049   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:53.022078   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:53.034971   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:53.034998   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:53.105058   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:53.105080   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:53.105092   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:53.185050   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:53.185086   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:55.724189   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:55.737378   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:55.737439   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:55.772286   75464 cri.go:89] found id: ""
	I1204 21:18:55.772311   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.772319   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:55.772324   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:55.772375   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:55.805040   75464 cri.go:89] found id: ""
	I1204 21:18:55.805061   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.805070   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:55.805075   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:55.805124   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:55.836500   75464 cri.go:89] found id: ""
	I1204 21:18:55.836528   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.836539   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:55.836553   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:55.836624   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:55.869715   75464 cri.go:89] found id: ""
	I1204 21:18:55.869740   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.869749   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:55.869754   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:55.869810   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:55.901596   75464 cri.go:89] found id: ""
	I1204 21:18:55.901623   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.901634   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:55.901641   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:55.901705   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:55.931865   75464 cri.go:89] found id: ""
	I1204 21:18:55.931890   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.931900   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:55.931907   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:55.931971   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:55.962990   75464 cri.go:89] found id: ""
	I1204 21:18:55.963016   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.963025   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:55.963030   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:55.963081   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:55.992110   75464 cri.go:89] found id: ""
	I1204 21:18:55.992132   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.992141   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:55.992149   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:55.992159   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:56.027234   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:56.027271   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:56.080250   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:56.080300   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:56.095943   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:56.095972   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:56.166704   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:56.166732   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:56.166744   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:53.565002   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:55.565734   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:52.264986   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:54.764517   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:54.624734   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:57.123337   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:58.745119   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:58.758304   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:58.758365   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:58.797221   75464 cri.go:89] found id: ""
	I1204 21:18:58.797245   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.797256   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:58.797264   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:58.797325   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:58.833333   75464 cri.go:89] found id: ""
	I1204 21:18:58.833358   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.833368   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:58.833374   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:58.833431   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:58.867765   75464 cri.go:89] found id: ""
	I1204 21:18:58.867790   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.867802   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:58.867810   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:58.867874   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:58.900290   75464 cri.go:89] found id: ""
	I1204 21:18:58.900326   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.900335   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:58.900386   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:58.900441   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:58.934627   75464 cri.go:89] found id: ""
	I1204 21:18:58.934660   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.934672   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:58.934679   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:58.934743   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:58.967410   75464 cri.go:89] found id: ""
	I1204 21:18:58.967442   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.967455   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:58.967463   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:58.967534   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:58.997635   75464 cri.go:89] found id: ""
	I1204 21:18:58.997665   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.997678   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:58.997685   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:58.997742   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:59.032135   75464 cri.go:89] found id: ""
	I1204 21:18:59.032162   75464 logs.go:282] 0 containers: []
	W1204 21:18:59.032181   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:59.032190   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:59.032214   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:59.101453   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:59.101477   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:59.101490   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:59.182218   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:59.182266   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:59.218062   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:59.218088   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:59.269536   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:59.269567   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:58.063715   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:00.565067   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:57.264306   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:59.266030   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:01.765163   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:59.124120   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:01.623069   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:01.784237   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:01.797810   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:01.797888   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:01.833235   75464 cri.go:89] found id: ""
	I1204 21:19:01.833267   75464 logs.go:282] 0 containers: []
	W1204 21:19:01.833279   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:01.833287   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:01.833345   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:01.866869   75464 cri.go:89] found id: ""
	I1204 21:19:01.866898   75464 logs.go:282] 0 containers: []
	W1204 21:19:01.866906   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:01.866912   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:01.866962   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:01.905512   75464 cri.go:89] found id: ""
	I1204 21:19:01.905539   75464 logs.go:282] 0 containers: []
	W1204 21:19:01.905547   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:01.905552   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:01.905608   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:01.940519   75464 cri.go:89] found id: ""
	I1204 21:19:01.940540   75464 logs.go:282] 0 containers: []
	W1204 21:19:01.940548   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:01.940554   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:01.940599   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:01.968900   75464 cri.go:89] found id: ""
	I1204 21:19:01.968922   75464 logs.go:282] 0 containers: []
	W1204 21:19:01.968931   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:01.968938   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:01.968986   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:02.011007   75464 cri.go:89] found id: ""
	I1204 21:19:02.011032   75464 logs.go:282] 0 containers: []
	W1204 21:19:02.011039   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:02.011045   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:02.011097   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:02.069395   75464 cri.go:89] found id: ""
	I1204 21:19:02.069422   75464 logs.go:282] 0 containers: []
	W1204 21:19:02.069432   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:02.069438   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:02.069483   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:02.116103   75464 cri.go:89] found id: ""
	I1204 21:19:02.116129   75464 logs.go:282] 0 containers: []
	W1204 21:19:02.116141   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:02.116151   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:02.116162   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:02.152582   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:02.152617   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:02.207765   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:02.207796   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:02.221923   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:02.221946   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:02.286568   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:02.286593   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:02.286608   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:04.861905   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:04.875045   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:04.875106   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:04.907565   75464 cri.go:89] found id: ""
	I1204 21:19:04.907591   75464 logs.go:282] 0 containers: []
	W1204 21:19:04.907601   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:04.907609   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:04.907667   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:04.937783   75464 cri.go:89] found id: ""
	I1204 21:19:04.937801   75464 logs.go:282] 0 containers: []
	W1204 21:19:04.937808   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:04.937813   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:04.937855   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:04.974668   75464 cri.go:89] found id: ""
	I1204 21:19:04.974695   75464 logs.go:282] 0 containers: []
	W1204 21:19:04.974703   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:04.974708   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:04.974764   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:05.008970   75464 cri.go:89] found id: ""
	I1204 21:19:05.008996   75464 logs.go:282] 0 containers: []
	W1204 21:19:05.009008   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:05.009016   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:05.009078   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:05.044719   75464 cri.go:89] found id: ""
	I1204 21:19:05.044748   75464 logs.go:282] 0 containers: []
	W1204 21:19:05.044757   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:05.044765   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:05.044834   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:05.082492   75464 cri.go:89] found id: ""
	I1204 21:19:05.082518   75464 logs.go:282] 0 containers: []
	W1204 21:19:05.082527   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:05.082533   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:05.082594   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:05.115540   75464 cri.go:89] found id: ""
	I1204 21:19:05.115569   75464 logs.go:282] 0 containers: []
	W1204 21:19:05.115578   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:05.115584   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:05.115643   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:05.150064   75464 cri.go:89] found id: ""
	I1204 21:19:05.150088   75464 logs.go:282] 0 containers: []
	W1204 21:19:05.150096   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:05.150104   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:05.150116   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:05.220591   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:05.220619   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:05.220635   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:05.298237   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:05.298269   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:05.337286   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:05.337312   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:05.394282   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:05.394313   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:03.064580   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:05.065897   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:04.263946   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:06.264605   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:03.624413   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:06.124113   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:07.907153   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:07.923906   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:07.923967   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:07.969672   75464 cri.go:89] found id: ""
	I1204 21:19:07.969698   75464 logs.go:282] 0 containers: []
	W1204 21:19:07.969706   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:07.969712   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:07.969761   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:08.019452   75464 cri.go:89] found id: ""
	I1204 21:19:08.019488   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.019496   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:08.019502   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:08.019551   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:08.064730   75464 cri.go:89] found id: ""
	I1204 21:19:08.064757   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.064766   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:08.064771   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:08.064822   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:08.097390   75464 cri.go:89] found id: ""
	I1204 21:19:08.097415   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.097424   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:08.097430   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:08.097481   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:08.134612   75464 cri.go:89] found id: ""
	I1204 21:19:08.134640   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.134649   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:08.134655   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:08.134706   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:08.167328   75464 cri.go:89] found id: ""
	I1204 21:19:08.167355   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.167363   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:08.167380   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:08.167447   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:08.196379   75464 cri.go:89] found id: ""
	I1204 21:19:08.196401   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.196411   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:08.196419   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:08.196475   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:08.227953   75464 cri.go:89] found id: ""
	I1204 21:19:08.227983   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.227994   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:08.228007   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:08.228021   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:08.304644   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:08.304672   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:08.340803   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:08.340835   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:08.392000   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:08.392034   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:08.405498   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:08.405533   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:08.472505   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:10.972755   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:10.986250   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:10.986316   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:11.020562   75464 cri.go:89] found id: ""
	I1204 21:19:11.020590   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.020601   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:11.020609   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:11.020671   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:11.052966   75464 cri.go:89] found id: ""
	I1204 21:19:11.052989   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.052999   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:11.053006   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:11.053062   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:11.085999   75464 cri.go:89] found id: ""
	I1204 21:19:11.086025   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.086032   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:11.086038   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:11.086085   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:11.125104   75464 cri.go:89] found id: ""
	I1204 21:19:11.125134   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.125145   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:11.125152   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:11.125207   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:11.161373   75464 cri.go:89] found id: ""
	I1204 21:19:11.161406   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.161418   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:11.161426   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:11.161487   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:11.192514   75464 cri.go:89] found id: ""
	I1204 21:19:11.192541   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.192552   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:11.192559   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:11.192617   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:11.225497   75464 cri.go:89] found id: ""
	I1204 21:19:11.225514   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.225522   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:11.225528   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:11.225573   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:11.258695   75464 cri.go:89] found id: ""
	I1204 21:19:11.258718   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.258730   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:11.258740   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:11.258753   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:11.292427   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:11.292456   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:07.565769   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:10.064738   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:08.264914   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:10.765337   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:08.125281   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:10.623449   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:11.346115   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:11.346143   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:11.360086   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:11.360110   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:11.430194   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:11.430216   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:11.430228   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:14.011320   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:14.024214   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:14.024281   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:14.060155   75464 cri.go:89] found id: ""
	I1204 21:19:14.060184   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.060196   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:14.060204   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:14.060269   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:14.095483   75464 cri.go:89] found id: ""
	I1204 21:19:14.095524   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.095536   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:14.095544   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:14.095621   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:14.130533   75464 cri.go:89] found id: ""
	I1204 21:19:14.130565   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.130573   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:14.130579   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:14.130650   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:14.167349   75464 cri.go:89] found id: ""
	I1204 21:19:14.167386   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.167397   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:14.167405   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:14.167477   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:14.200197   75464 cri.go:89] found id: ""
	I1204 21:19:14.200229   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.200240   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:14.200247   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:14.200315   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:14.233664   75464 cri.go:89] found id: ""
	I1204 21:19:14.233696   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.233707   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:14.233715   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:14.233779   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:14.268193   75464 cri.go:89] found id: ""
	I1204 21:19:14.268232   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.268243   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:14.268250   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:14.268311   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:14.305771   75464 cri.go:89] found id: ""
	I1204 21:19:14.305804   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.305813   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:14.305822   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:14.305834   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:14.361227   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:14.361274   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:14.375013   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:14.375046   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:14.444904   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:14.444945   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:14.444958   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:14.523934   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:14.523969   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:12.565614   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:14.565696   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:13.265412   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:15.763989   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:13.122823   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:15.124232   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:17.622977   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:17.063306   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:17.076624   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:17.076675   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:17.110681   75464 cri.go:89] found id: ""
	I1204 21:19:17.110721   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.110744   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:17.110756   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:17.110816   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:17.150695   75464 cri.go:89] found id: ""
	I1204 21:19:17.150716   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.150724   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:17.150730   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:17.150777   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:17.187712   75464 cri.go:89] found id: ""
	I1204 21:19:17.187745   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.187757   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:17.187765   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:17.187826   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:17.220349   75464 cri.go:89] found id: ""
	I1204 21:19:17.220377   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.220388   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:17.220396   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:17.220463   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:17.254691   75464 cri.go:89] found id: ""
	I1204 21:19:17.254724   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.254736   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:17.254746   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:17.254869   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:17.287163   75464 cri.go:89] found id: ""
	I1204 21:19:17.287191   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.287200   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:17.287206   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:17.287264   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:17.318924   75464 cri.go:89] found id: ""
	I1204 21:19:17.318949   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.318957   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:17.318963   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:17.319011   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:17.351074   75464 cri.go:89] found id: ""
	I1204 21:19:17.351106   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.351119   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:17.351128   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:17.351143   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:17.404999   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:17.405037   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:17.419781   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:17.419814   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:17.485638   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:17.485659   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:17.485670   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:17.568851   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:17.568885   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:20.107005   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:20.120184   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:20.120257   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:20.153375   75464 cri.go:89] found id: ""
	I1204 21:19:20.153404   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.153413   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:20.153419   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:20.153475   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:20.192102   75464 cri.go:89] found id: ""
	I1204 21:19:20.192129   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.192141   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:20.192148   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:20.192213   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:20.235702   75464 cri.go:89] found id: ""
	I1204 21:19:20.235730   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.235740   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:20.235747   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:20.235823   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:20.272357   75464 cri.go:89] found id: ""
	I1204 21:19:20.272385   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.272397   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:20.272406   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:20.272477   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:20.307784   75464 cri.go:89] found id: ""
	I1204 21:19:20.307809   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.307820   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:20.307827   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:20.307889   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:20.339469   75464 cri.go:89] found id: ""
	I1204 21:19:20.339504   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.339514   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:20.339522   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:20.339586   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:20.369973   75464 cri.go:89] found id: ""
	I1204 21:19:20.369996   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.370003   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:20.370010   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:20.370081   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:20.400569   75464 cri.go:89] found id: ""
	I1204 21:19:20.400589   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.400596   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:20.400604   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:20.400618   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:20.449274   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:20.449316   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:20.463556   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:20.463589   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:20.534760   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:20.534779   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:20.534791   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:20.613205   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:20.613234   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:17.064355   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:19.566643   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:17.764939   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:20.265576   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:19.624775   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:22.124297   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:23.149411   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:23.163040   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:23.163104   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:23.198689   75464 cri.go:89] found id: ""
	I1204 21:19:23.198721   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.198730   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:23.198736   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:23.198789   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:23.229754   75464 cri.go:89] found id: ""
	I1204 21:19:23.229783   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.229792   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:23.229797   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:23.229867   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:23.263366   75464 cri.go:89] found id: ""
	I1204 21:19:23.263406   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.263418   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:23.263425   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:23.263523   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:23.308773   75464 cri.go:89] found id: ""
	I1204 21:19:23.308797   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.308805   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:23.308811   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:23.308858   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:23.344573   75464 cri.go:89] found id: ""
	I1204 21:19:23.344600   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.344613   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:23.344620   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:23.344689   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:23.375218   75464 cri.go:89] found id: ""
	I1204 21:19:23.375244   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.375253   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:23.375259   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:23.375321   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:23.405878   75464 cri.go:89] found id: ""
	I1204 21:19:23.405913   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.405923   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:23.405929   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:23.405979   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:23.442547   75464 cri.go:89] found id: ""
	I1204 21:19:23.442572   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.442580   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:23.442588   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:23.442599   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:23.457476   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:23.457503   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:23.526060   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:23.526088   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:23.526153   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:23.606683   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:23.606729   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:23.648224   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:23.648266   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:26.203216   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:26.215838   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:26.215886   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:26.248425   75464 cri.go:89] found id: ""
	I1204 21:19:26.248461   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.248474   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:26.248490   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:26.248558   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:26.282982   75464 cri.go:89] found id: ""
	I1204 21:19:26.283011   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.283022   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:26.283030   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:26.283094   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:22.064831   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:24.565123   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:22.763526   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:24.764364   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:26.764973   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:24.624174   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:26.624220   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:26.316656   75464 cri.go:89] found id: ""
	I1204 21:19:26.316690   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.316702   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:26.316710   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:26.316778   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:26.352730   75464 cri.go:89] found id: ""
	I1204 21:19:26.352758   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.352766   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:26.352772   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:26.352819   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:26.385955   75464 cri.go:89] found id: ""
	I1204 21:19:26.385981   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.385991   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:26.386000   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:26.386065   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:26.418814   75464 cri.go:89] found id: ""
	I1204 21:19:26.418838   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.418846   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:26.418852   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:26.418900   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:26.455442   75464 cri.go:89] found id: ""
	I1204 21:19:26.455471   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.455483   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:26.455491   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:26.455561   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:26.498287   75464 cri.go:89] found id: ""
	I1204 21:19:26.498314   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.498322   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:26.498331   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:26.498345   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:26.512282   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:26.512312   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:26.576340   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:26.576366   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:26.576383   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:26.656234   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:26.656272   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:26.692676   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:26.692705   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:29.246548   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:29.261241   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:29.261310   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:29.297940   75464 cri.go:89] found id: ""
	I1204 21:19:29.297975   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.297987   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:29.297995   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:29.298060   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:29.330887   75464 cri.go:89] found id: ""
	I1204 21:19:29.330918   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.330930   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:29.330937   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:29.331001   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:29.364114   75464 cri.go:89] found id: ""
	I1204 21:19:29.364145   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.364152   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:29.364158   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:29.364214   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:29.397320   75464 cri.go:89] found id: ""
	I1204 21:19:29.397349   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.397357   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:29.397363   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:29.397410   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:29.430850   75464 cri.go:89] found id: ""
	I1204 21:19:29.430880   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.430892   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:29.430900   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:29.430965   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:29.464447   75464 cri.go:89] found id: ""
	I1204 21:19:29.464475   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.464484   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:29.464498   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:29.464564   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:29.497112   75464 cri.go:89] found id: ""
	I1204 21:19:29.497146   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.497158   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:29.497166   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:29.497229   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:29.533048   75464 cri.go:89] found id: ""
	I1204 21:19:29.533071   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.533080   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:29.533088   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:29.533099   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:29.584390   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:29.584424   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:29.598341   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:29.598369   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:29.663240   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:29.663264   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:29.663278   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:29.744146   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:29.744184   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:27.064827   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:29.065174   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:31.565105   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:28.765480   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:31.265234   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:29.123831   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:31.623570   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:32.282931   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:32.296622   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:32.296683   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:32.330253   75464 cri.go:89] found id: ""
	I1204 21:19:32.330285   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.330297   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:32.330305   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:32.330370   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:32.363547   75464 cri.go:89] found id: ""
	I1204 21:19:32.363575   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.363588   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:32.363596   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:32.363661   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:32.396745   75464 cri.go:89] found id: ""
	I1204 21:19:32.396770   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.396781   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:32.396790   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:32.396851   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:32.432533   75464 cri.go:89] found id: ""
	I1204 21:19:32.432559   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.432569   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:32.432577   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:32.432640   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:32.470292   75464 cri.go:89] found id: ""
	I1204 21:19:32.470317   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.470327   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:32.470335   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:32.470401   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:32.502791   75464 cri.go:89] found id: ""
	I1204 21:19:32.502817   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.502824   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:32.502835   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:32.502900   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:32.536220   75464 cri.go:89] found id: ""
	I1204 21:19:32.536246   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.536254   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:32.536286   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:32.536344   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:32.570072   75464 cri.go:89] found id: ""
	I1204 21:19:32.570094   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.570102   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:32.570110   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:32.570127   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:32.624916   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:32.624964   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:32.638299   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:32.638328   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:32.704827   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:32.704855   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:32.704873   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:32.782324   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:32.782356   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:35.324136   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:35.337071   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:35.337132   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:35.368651   75464 cri.go:89] found id: ""
	I1204 21:19:35.368672   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.368679   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:35.368685   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:35.368731   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:35.402069   75464 cri.go:89] found id: ""
	I1204 21:19:35.402088   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.402099   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:35.402105   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:35.402156   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:35.432328   75464 cri.go:89] found id: ""
	I1204 21:19:35.432356   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.432367   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:35.432380   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:35.432440   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:35.465334   75464 cri.go:89] found id: ""
	I1204 21:19:35.465356   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.465363   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:35.465369   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:35.465440   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:35.497416   75464 cri.go:89] found id: ""
	I1204 21:19:35.497449   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.497462   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:35.497474   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:35.497535   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:35.533106   75464 cri.go:89] found id: ""
	I1204 21:19:35.533134   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.533145   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:35.533154   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:35.533216   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:35.570519   75464 cri.go:89] found id: ""
	I1204 21:19:35.570546   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.570555   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:35.570562   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:35.570628   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:35.601380   75464 cri.go:89] found id: ""
	I1204 21:19:35.601413   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.601424   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:35.601434   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:35.601455   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:35.656383   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:35.656420   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:35.671667   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:35.671696   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:35.737690   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:35.737716   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:35.737733   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:35.818129   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:35.818165   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:34.063889   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:36.064864   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:33.765136   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:35.765598   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:33.624840   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:35.624972   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:38.356596   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:38.369177   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:38.369235   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:38.401263   75464 cri.go:89] found id: ""
	I1204 21:19:38.401289   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.401301   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:38.401308   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:38.401379   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:38.432751   75464 cri.go:89] found id: ""
	I1204 21:19:38.432777   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.432786   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:38.432792   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:38.432853   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:38.465866   75464 cri.go:89] found id: ""
	I1204 21:19:38.465889   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.465898   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:38.465904   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:38.465954   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:38.508720   75464 cri.go:89] found id: ""
	I1204 21:19:38.508752   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.508763   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:38.508771   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:38.508827   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:38.543609   75464 cri.go:89] found id: ""
	I1204 21:19:38.543640   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.543649   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:38.543654   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:38.543728   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:38.579205   75464 cri.go:89] found id: ""
	I1204 21:19:38.579225   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.579233   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:38.579239   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:38.579286   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:38.616446   75464 cri.go:89] found id: ""
	I1204 21:19:38.616480   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.616492   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:38.616500   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:38.616563   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:38.651847   75464 cri.go:89] found id: ""
	I1204 21:19:38.651879   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.651893   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:38.651905   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:38.651920   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:38.730904   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:38.730940   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:38.768958   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:38.768987   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:38.818879   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:38.818917   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:38.832139   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:38.832168   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:38.904761   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:38.065085   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:40.066022   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:38.264497   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:40.264905   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:38.123324   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:40.123499   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:42.623457   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:41.405046   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:41.417497   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:41.417578   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:41.450609   75464 cri.go:89] found id: ""
	I1204 21:19:41.450638   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.450649   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:41.450657   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:41.450725   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:41.486098   75464 cri.go:89] found id: ""
	I1204 21:19:41.486127   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.486135   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:41.486146   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:41.486218   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:41.520182   75464 cri.go:89] found id: ""
	I1204 21:19:41.520212   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.520225   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:41.520233   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:41.520305   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:41.551840   75464 cri.go:89] found id: ""
	I1204 21:19:41.551862   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.551870   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:41.551876   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:41.551928   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:41.584411   75464 cri.go:89] found id: ""
	I1204 21:19:41.584441   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.584448   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:41.584453   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:41.584500   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:41.614161   75464 cri.go:89] found id: ""
	I1204 21:19:41.614184   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.614199   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:41.614208   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:41.614263   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:41.645608   75464 cri.go:89] found id: ""
	I1204 21:19:41.645630   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.645637   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:41.645642   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:41.645688   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:41.676521   75464 cri.go:89] found id: ""
	I1204 21:19:41.676544   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.676552   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:41.676559   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:41.676570   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:41.726608   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:41.726633   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:41.739110   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:41.739134   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:41.810706   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:41.810727   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:41.810742   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:41.895725   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:41.895757   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:44.435032   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:44.449155   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:44.449223   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:44.479366   75464 cri.go:89] found id: ""
	I1204 21:19:44.479415   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.479424   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:44.479430   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:44.479480   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:44.520338   75464 cri.go:89] found id: ""
	I1204 21:19:44.520365   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.520374   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:44.520379   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:44.520443   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:44.554736   75464 cri.go:89] found id: ""
	I1204 21:19:44.554765   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.554773   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:44.554779   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:44.554829   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:44.592957   75464 cri.go:89] found id: ""
	I1204 21:19:44.592980   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.592987   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:44.592993   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:44.593041   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:44.626514   75464 cri.go:89] found id: ""
	I1204 21:19:44.626542   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.626551   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:44.626558   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:44.626624   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:44.667868   75464 cri.go:89] found id: ""
	I1204 21:19:44.667901   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.667913   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:44.667919   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:44.667968   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:44.703653   75464 cri.go:89] found id: ""
	I1204 21:19:44.703688   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.703699   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:44.703706   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:44.703766   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:44.737474   75464 cri.go:89] found id: ""
	I1204 21:19:44.737511   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.737523   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:44.737534   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:44.737549   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:44.787115   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:44.787146   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:44.799735   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:44.799765   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:44.861160   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:44.861179   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:44.861200   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:44.937758   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:44.937792   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:42.564575   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:44.565307   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:42.269222   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:44.764730   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:44.624230   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:47.124252   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:47.474604   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:47.486621   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:47.486680   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:47.522827   75464 cri.go:89] found id: ""
	I1204 21:19:47.522856   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.522870   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:47.522877   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:47.522938   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:47.553741   75464 cri.go:89] found id: ""
	I1204 21:19:47.553763   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.553771   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:47.553777   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:47.553837   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:47.610696   75464 cri.go:89] found id: ""
	I1204 21:19:47.610719   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.610730   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:47.610737   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:47.610803   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:47.645330   75464 cri.go:89] found id: ""
	I1204 21:19:47.645357   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.645367   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:47.645374   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:47.645431   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:47.680410   75464 cri.go:89] found id: ""
	I1204 21:19:47.680436   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.680444   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:47.680450   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:47.680499   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:47.712333   75464 cri.go:89] found id: ""
	I1204 21:19:47.712365   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.712376   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:47.712384   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:47.712442   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:47.749995   75464 cri.go:89] found id: ""
	I1204 21:19:47.750027   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.750039   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:47.750047   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:47.750110   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:47.786953   75464 cri.go:89] found id: ""
	I1204 21:19:47.786978   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.786988   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:47.786996   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:47.787008   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:47.853534   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:47.853561   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:47.853576   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:47.934237   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:47.934273   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:47.976010   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:47.976046   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:48.027502   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:48.027537   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:50.541987   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:50.555163   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:50.555246   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:50.588513   75464 cri.go:89] found id: ""
	I1204 21:19:50.588545   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.588555   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:50.588563   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:50.588618   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:50.623124   75464 cri.go:89] found id: ""
	I1204 21:19:50.623155   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.623165   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:50.623175   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:50.623240   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:50.656302   75464 cri.go:89] found id: ""
	I1204 21:19:50.656334   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.656347   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:50.656353   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:50.656421   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:50.688580   75464 cri.go:89] found id: ""
	I1204 21:19:50.688609   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.688621   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:50.688629   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:50.688700   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:50.721955   75464 cri.go:89] found id: ""
	I1204 21:19:50.721979   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.721987   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:50.721993   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:50.722047   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:50.755531   75464 cri.go:89] found id: ""
	I1204 21:19:50.755560   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.755571   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:50.755579   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:50.755637   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:50.789773   75464 cri.go:89] found id: ""
	I1204 21:19:50.789805   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.789816   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:50.789823   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:50.789890   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:50.821168   75464 cri.go:89] found id: ""
	I1204 21:19:50.821196   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.821207   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:50.821216   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:50.821230   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:50.871378   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:50.871406   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:50.883349   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:50.883387   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:50.953103   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:50.953129   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:50.953143   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:51.032209   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:51.032240   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:47.065199   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:49.065498   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:51.565332   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:47.264727   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:49.765618   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:51.765674   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:49.623785   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:52.124390   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:53.569126   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:53.582100   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:53.582167   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:53.613919   75464 cri.go:89] found id: ""
	I1204 21:19:53.613947   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.613958   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:53.613965   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:53.614031   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:53.649057   75464 cri.go:89] found id: ""
	I1204 21:19:53.649083   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.649090   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:53.649096   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:53.649153   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:53.685867   75464 cri.go:89] found id: ""
	I1204 21:19:53.685903   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.685915   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:53.685924   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:53.685983   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:53.723661   75464 cri.go:89] found id: ""
	I1204 21:19:53.723690   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.723702   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:53.723710   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:53.723774   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:53.768252   75464 cri.go:89] found id: ""
	I1204 21:19:53.768274   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.768281   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:53.768286   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:53.768334   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:53.806460   75464 cri.go:89] found id: ""
	I1204 21:19:53.806503   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.806512   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:53.806522   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:53.806577   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:53.839334   75464 cri.go:89] found id: ""
	I1204 21:19:53.839362   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.839382   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:53.839391   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:53.839452   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:53.873985   75464 cri.go:89] found id: ""
	I1204 21:19:53.874013   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.874021   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:53.874029   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:53.874046   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:53.929061   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:53.929101   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:53.943156   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:53.943183   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:54.023885   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:54.023914   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:54.023927   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:54.126662   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:54.126691   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:53.566343   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:56.064417   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:54.263908   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:56.265412   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:54.623051   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:56.623438   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:56.664579   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:56.676785   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:56.676835   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:56.715929   75464 cri.go:89] found id: ""
	I1204 21:19:56.715953   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.715964   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:56.715971   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:56.716026   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:56.747118   75464 cri.go:89] found id: ""
	I1204 21:19:56.747139   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.747146   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:56.747175   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:56.747225   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:56.777600   75464 cri.go:89] found id: ""
	I1204 21:19:56.777622   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.777628   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:56.777634   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:56.777684   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:56.808759   75464 cri.go:89] found id: ""
	I1204 21:19:56.808780   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.808787   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:56.808792   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:56.808849   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:56.838236   75464 cri.go:89] found id: ""
	I1204 21:19:56.838263   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.838274   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:56.838280   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:56.838336   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:56.866838   75464 cri.go:89] found id: ""
	I1204 21:19:56.866865   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.866875   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:56.866883   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:56.866938   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:56.897474   75464 cri.go:89] found id: ""
	I1204 21:19:56.897496   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.897504   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:56.897509   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:56.897566   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:56.929263   75464 cri.go:89] found id: ""
	I1204 21:19:56.929286   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.929294   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:56.929302   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:56.929311   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:56.980231   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:56.980256   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:56.991901   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:56.991928   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:57.068154   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:57.068172   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:57.068183   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:57.147865   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:57.147903   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:59.686011   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:59.699101   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:59.699156   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:59.742522   75464 cri.go:89] found id: ""
	I1204 21:19:59.742554   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.742565   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:59.742573   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:59.742637   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:59.785313   75464 cri.go:89] found id: ""
	I1204 21:19:59.785345   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.785357   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:59.785364   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:59.785423   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:59.821473   75464 cri.go:89] found id: ""
	I1204 21:19:59.821508   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.821520   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:59.821527   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:59.821585   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:59.857990   75464 cri.go:89] found id: ""
	I1204 21:19:59.858012   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.858020   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:59.858025   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:59.858077   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:59.895434   75464 cri.go:89] found id: ""
	I1204 21:19:59.895465   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.895478   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:59.895486   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:59.895546   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:59.929076   75464 cri.go:89] found id: ""
	I1204 21:19:59.929099   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.929110   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:59.929118   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:59.929180   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:59.962121   75464 cri.go:89] found id: ""
	I1204 21:19:59.962161   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.962173   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:59.962181   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:59.962244   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:59.999074   75464 cri.go:89] found id: ""
	I1204 21:19:59.999103   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.999115   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:59.999126   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:59.999138   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:00.081841   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:00.081888   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:00.120537   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:00.120576   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:00.171472   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:00.171506   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:00.184739   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:00.184770   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:00.256589   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:58.563943   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:00.564520   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:58.764786   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:00.765286   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:59.122868   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:01.624133   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:02.757225   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:02.771088   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:02.771156   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:02.808742   75464 cri.go:89] found id: ""
	I1204 21:20:02.808770   75464 logs.go:282] 0 containers: []
	W1204 21:20:02.808781   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:02.808788   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:02.808851   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:02.846517   75464 cri.go:89] found id: ""
	I1204 21:20:02.846539   75464 logs.go:282] 0 containers: []
	W1204 21:20:02.846548   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:02.846553   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:02.846600   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:02.879903   75464 cri.go:89] found id: ""
	I1204 21:20:02.879934   75464 logs.go:282] 0 containers: []
	W1204 21:20:02.879943   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:02.879948   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:02.879995   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:02.910040   75464 cri.go:89] found id: ""
	I1204 21:20:02.910072   75464 logs.go:282] 0 containers: []
	W1204 21:20:02.910083   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:02.910091   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:02.910153   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:02.941525   75464 cri.go:89] found id: ""
	I1204 21:20:02.941552   75464 logs.go:282] 0 containers: []
	W1204 21:20:02.941562   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:02.941570   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:02.941637   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:02.977450   75464 cri.go:89] found id: ""
	I1204 21:20:02.977476   75464 logs.go:282] 0 containers: []
	W1204 21:20:02.977484   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:02.977490   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:02.977547   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:03.007386   75464 cri.go:89] found id: ""
	I1204 21:20:03.007422   75464 logs.go:282] 0 containers: []
	W1204 21:20:03.007433   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:03.007448   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:03.007508   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:03.040015   75464 cri.go:89] found id: ""
	I1204 21:20:03.040038   75464 logs.go:282] 0 containers: []
	W1204 21:20:03.040049   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:03.040058   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:03.040068   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:03.092371   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:03.092397   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:03.104747   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:03.104765   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:03.167760   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:03.167784   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:03.167799   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:03.242972   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:03.243010   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:05.783874   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:05.796340   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:05.796401   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:05.829068   75464 cri.go:89] found id: ""
	I1204 21:20:05.829094   75464 logs.go:282] 0 containers: []
	W1204 21:20:05.829105   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:05.829112   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:05.829169   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:05.863998   75464 cri.go:89] found id: ""
	I1204 21:20:05.864027   75464 logs.go:282] 0 containers: []
	W1204 21:20:05.864036   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:05.864042   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:05.864096   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:05.899645   75464 cri.go:89] found id: ""
	I1204 21:20:05.899669   75464 logs.go:282] 0 containers: []
	W1204 21:20:05.899677   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:05.899682   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:05.899727   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:05.935815   75464 cri.go:89] found id: ""
	I1204 21:20:05.935840   75464 logs.go:282] 0 containers: []
	W1204 21:20:05.935848   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:05.935854   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:05.935901   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:05.972284   75464 cri.go:89] found id: ""
	I1204 21:20:05.972308   75464 logs.go:282] 0 containers: []
	W1204 21:20:05.972321   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:05.972326   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:05.972372   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:06.007217   75464 cri.go:89] found id: ""
	I1204 21:20:06.007261   75464 logs.go:282] 0 containers: []
	W1204 21:20:06.007273   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:06.007280   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:06.007338   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:06.042158   75464 cri.go:89] found id: ""
	I1204 21:20:06.042190   75464 logs.go:282] 0 containers: []
	W1204 21:20:06.042201   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:06.042208   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:06.042280   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:06.075199   75464 cri.go:89] found id: ""
	I1204 21:20:06.075223   75464 logs.go:282] 0 containers: []
	W1204 21:20:06.075230   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:06.075237   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:06.075248   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:06.148255   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:06.148286   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:06.191454   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:06.191478   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:06.243952   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:06.243979   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:06.256355   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:06.256381   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 21:20:02.565050   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:05.064733   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:02.765643   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:05.263861   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:04.123109   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:06.123349   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	W1204 21:20:06.323958   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:08.824582   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:08.836724   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:08.836793   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:08.868526   75464 cri.go:89] found id: ""
	I1204 21:20:08.868596   75464 logs.go:282] 0 containers: []
	W1204 21:20:08.868611   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:08.868619   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:08.868679   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:08.899088   75464 cri.go:89] found id: ""
	I1204 21:20:08.899114   75464 logs.go:282] 0 containers: []
	W1204 21:20:08.899123   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:08.899128   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:08.899181   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:08.929116   75464 cri.go:89] found id: ""
	I1204 21:20:08.929145   75464 logs.go:282] 0 containers: []
	W1204 21:20:08.929156   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:08.929164   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:08.929229   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:08.970502   75464 cri.go:89] found id: ""
	I1204 21:20:08.970528   75464 logs.go:282] 0 containers: []
	W1204 21:20:08.970539   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:08.970547   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:08.970610   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:09.000619   75464 cri.go:89] found id: ""
	I1204 21:20:09.000644   75464 logs.go:282] 0 containers: []
	W1204 21:20:09.000652   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:09.000658   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:09.000715   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:09.031597   75464 cri.go:89] found id: ""
	I1204 21:20:09.031624   75464 logs.go:282] 0 containers: []
	W1204 21:20:09.031634   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:09.031641   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:09.031700   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:09.063615   75464 cri.go:89] found id: ""
	I1204 21:20:09.063639   75464 logs.go:282] 0 containers: []
	W1204 21:20:09.063646   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:09.063651   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:09.063708   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:09.096291   75464 cri.go:89] found id: ""
	I1204 21:20:09.096322   75464 logs.go:282] 0 containers: []
	W1204 21:20:09.096333   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:09.096343   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:09.096357   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:09.169976   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:09.170009   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:09.206514   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:09.206537   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:09.257587   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:09.257614   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:09.269939   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:09.269962   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:09.334350   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:07.563758   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:09.564014   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:11.564441   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:07.264169   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:09.265385   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:11.265607   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:08.622813   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:10.624747   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:11.835270   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:11.848192   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:11.848249   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:11.880377   75464 cri.go:89] found id: ""
	I1204 21:20:11.880409   75464 logs.go:282] 0 containers: []
	W1204 21:20:11.880422   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:11.880429   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:11.880495   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:11.914800   75464 cri.go:89] found id: ""
	I1204 21:20:11.914832   75464 logs.go:282] 0 containers: []
	W1204 21:20:11.914844   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:11.914852   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:11.914918   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:11.950520   75464 cri.go:89] found id: ""
	I1204 21:20:11.950545   75464 logs.go:282] 0 containers: []
	W1204 21:20:11.950553   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:11.950559   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:11.950611   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:11.983909   75464 cri.go:89] found id: ""
	I1204 21:20:11.983934   75464 logs.go:282] 0 containers: []
	W1204 21:20:11.983944   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:11.983953   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:11.984017   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:12.020457   75464 cri.go:89] found id: ""
	I1204 21:20:12.020488   75464 logs.go:282] 0 containers: []
	W1204 21:20:12.020505   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:12.020513   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:12.020581   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:12.054630   75464 cri.go:89] found id: ""
	I1204 21:20:12.054663   75464 logs.go:282] 0 containers: []
	W1204 21:20:12.054674   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:12.054682   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:12.054747   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:12.089172   75464 cri.go:89] found id: ""
	I1204 21:20:12.089195   75464 logs.go:282] 0 containers: []
	W1204 21:20:12.089202   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:12.089208   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:12.089267   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:12.123979   75464 cri.go:89] found id: ""
	I1204 21:20:12.124009   75464 logs.go:282] 0 containers: []
	W1204 21:20:12.124020   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:12.124039   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:12.124054   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:12.191368   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:12.191414   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:12.191432   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:12.272985   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:12.273029   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:12.310427   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:12.310459   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:12.363183   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:12.363225   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:14.876599   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:14.889708   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:14.889784   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:14.922789   75464 cri.go:89] found id: ""
	I1204 21:20:14.922819   75464 logs.go:282] 0 containers: []
	W1204 21:20:14.922829   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:14.922835   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:14.922882   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:14.953998   75464 cri.go:89] found id: ""
	I1204 21:20:14.954026   75464 logs.go:282] 0 containers: []
	W1204 21:20:14.954038   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:14.954044   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:14.954108   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:14.983608   75464 cri.go:89] found id: ""
	I1204 21:20:14.983635   75464 logs.go:282] 0 containers: []
	W1204 21:20:14.983646   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:14.983653   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:14.983707   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:15.016982   75464 cri.go:89] found id: ""
	I1204 21:20:15.017007   75464 logs.go:282] 0 containers: []
	W1204 21:20:15.017015   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:15.017020   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:15.017070   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:15.051642   75464 cri.go:89] found id: ""
	I1204 21:20:15.051672   75464 logs.go:282] 0 containers: []
	W1204 21:20:15.051683   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:15.051690   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:15.051792   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:15.084250   75464 cri.go:89] found id: ""
	I1204 21:20:15.084279   75464 logs.go:282] 0 containers: []
	W1204 21:20:15.084289   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:15.084297   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:15.084364   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:15.119910   75464 cri.go:89] found id: ""
	I1204 21:20:15.119943   75464 logs.go:282] 0 containers: []
	W1204 21:20:15.119953   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:15.119965   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:15.120025   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:15.154270   75464 cri.go:89] found id: ""
	I1204 21:20:15.154301   75464 logs.go:282] 0 containers: []
	W1204 21:20:15.154312   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:15.154322   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:15.154336   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:15.205075   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:15.205109   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:15.218104   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:15.218130   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:15.285162   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:15.285187   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:15.285209   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:15.367003   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:15.367040   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:13.566393   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:16.069318   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:13.266167   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:15.763670   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:13.122812   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:15.125830   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:17.623065   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:17.909835   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:17.921899   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:17.921954   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:17.954678   75464 cri.go:89] found id: ""
	I1204 21:20:17.954708   75464 logs.go:282] 0 containers: []
	W1204 21:20:17.954717   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:17.954723   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:17.954776   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:17.984522   75464 cri.go:89] found id: ""
	I1204 21:20:17.984545   75464 logs.go:282] 0 containers: []
	W1204 21:20:17.984555   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:17.984560   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:17.984607   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:18.016731   75464 cri.go:89] found id: ""
	I1204 21:20:18.016754   75464 logs.go:282] 0 containers: []
	W1204 21:20:18.016763   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:18.016768   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:18.016820   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:18.050104   75464 cri.go:89] found id: ""
	I1204 21:20:18.050136   75464 logs.go:282] 0 containers: []
	W1204 21:20:18.050147   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:18.050155   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:18.050221   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:18.083944   75464 cri.go:89] found id: ""
	I1204 21:20:18.083984   75464 logs.go:282] 0 containers: []
	W1204 21:20:18.084006   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:18.084015   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:18.084084   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:18.116170   75464 cri.go:89] found id: ""
	I1204 21:20:18.116203   75464 logs.go:282] 0 containers: []
	W1204 21:20:18.116215   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:18.116223   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:18.116292   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:18.147348   75464 cri.go:89] found id: ""
	I1204 21:20:18.147395   75464 logs.go:282] 0 containers: []
	W1204 21:20:18.147407   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:18.147415   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:18.147473   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:18.177782   75464 cri.go:89] found id: ""
	I1204 21:20:18.177805   75464 logs.go:282] 0 containers: []
	W1204 21:20:18.177816   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:18.177827   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:18.177840   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:18.227464   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:18.227494   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:18.239741   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:18.239772   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:18.310732   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:18.310752   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:18.310763   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:18.389626   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:18.389659   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:20.926749   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:20.939710   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:20.939797   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:20.972464   75464 cri.go:89] found id: ""
	I1204 21:20:20.972488   75464 logs.go:282] 0 containers: []
	W1204 21:20:20.972497   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:20.972506   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:20.972568   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:21.010568   75464 cri.go:89] found id: ""
	I1204 21:20:21.010597   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.010610   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:21.010618   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:21.010678   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:21.046145   75464 cri.go:89] found id: ""
	I1204 21:20:21.046172   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.046183   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:21.046191   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:21.046263   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:21.078460   75464 cri.go:89] found id: ""
	I1204 21:20:21.078488   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.078496   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:21.078502   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:21.078569   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:21.117274   75464 cri.go:89] found id: ""
	I1204 21:20:21.117303   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.117314   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:21.117320   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:21.117366   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:21.152375   75464 cri.go:89] found id: ""
	I1204 21:20:21.152408   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.152419   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:21.152427   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:21.152496   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:21.185933   75464 cri.go:89] found id: ""
	I1204 21:20:21.185966   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.185975   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:21.185981   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:21.186042   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:21.219289   75464 cri.go:89] found id: ""
	I1204 21:20:21.219325   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.219338   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:21.219350   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:21.219363   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:21.232385   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:21.232415   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:21.298766   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:21.298793   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:21.298808   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:18.565873   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:21.065819   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:17.763871   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:19.765846   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:19.623518   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:21.624117   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:21.376741   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:21.376777   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:21.414649   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:21.414682   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:23.963472   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:23.976644   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:23.976709   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:24.010598   75464 cri.go:89] found id: ""
	I1204 21:20:24.010626   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.010637   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:24.010645   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:24.010703   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:24.045479   75464 cri.go:89] found id: ""
	I1204 21:20:24.045509   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.045529   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:24.045537   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:24.045599   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:24.081181   75464 cri.go:89] found id: ""
	I1204 21:20:24.081215   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.081235   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:24.081243   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:24.081309   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:24.113823   75464 cri.go:89] found id: ""
	I1204 21:20:24.113847   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.113857   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:24.113864   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:24.113927   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:24.149178   75464 cri.go:89] found id: ""
	I1204 21:20:24.149205   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.149216   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:24.149224   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:24.149289   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:24.183304   75464 cri.go:89] found id: ""
	I1204 21:20:24.183339   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.183350   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:24.183359   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:24.183448   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:24.214999   75464 cri.go:89] found id: ""
	I1204 21:20:24.215023   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.215034   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:24.215042   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:24.215107   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:24.247278   75464 cri.go:89] found id: ""
	I1204 21:20:24.247312   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.247323   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:24.247354   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:24.247387   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:24.302879   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:24.302913   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:24.315674   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:24.315697   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:24.382394   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:24.382422   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:24.382436   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:24.462763   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:24.462796   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:23.564202   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:25.564917   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:22.265442   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:24.764901   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:24.124035   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:26.124661   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:27.002577   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:27.015256   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:27.015324   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:27.049626   75464 cri.go:89] found id: ""
	I1204 21:20:27.049657   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.049669   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:27.049677   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:27.049733   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:27.085312   75464 cri.go:89] found id: ""
	I1204 21:20:27.085341   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.085354   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:27.085362   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:27.085417   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:27.119898   75464 cri.go:89] found id: ""
	I1204 21:20:27.119928   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.119939   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:27.119947   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:27.120010   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:27.153605   75464 cri.go:89] found id: ""
	I1204 21:20:27.153642   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.153651   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:27.153657   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:27.153724   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:27.191002   75464 cri.go:89] found id: ""
	I1204 21:20:27.191027   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.191038   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:27.191045   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:27.191107   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:27.226469   75464 cri.go:89] found id: ""
	I1204 21:20:27.226495   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.226506   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:27.226515   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:27.226579   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:27.258586   75464 cri.go:89] found id: ""
	I1204 21:20:27.258613   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.258623   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:27.258630   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:27.258694   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:27.293119   75464 cri.go:89] found id: ""
	I1204 21:20:27.293156   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.293165   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:27.293174   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:27.293187   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:27.346870   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:27.346903   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:27.360448   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:27.360487   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:27.431571   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:27.431597   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:27.431613   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:27.509664   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:27.509698   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:30.049120   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:30.063294   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:30.063360   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:30.097334   75464 cri.go:89] found id: ""
	I1204 21:20:30.097364   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.097376   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:30.097383   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:30.097457   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:30.132734   75464 cri.go:89] found id: ""
	I1204 21:20:30.132757   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.132765   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:30.132771   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:30.132820   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:30.166539   75464 cri.go:89] found id: ""
	I1204 21:20:30.166565   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.166573   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:30.166579   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:30.166637   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:30.201953   75464 cri.go:89] found id: ""
	I1204 21:20:30.201993   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.202007   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:30.202016   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:30.202089   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:30.239062   75464 cri.go:89] found id: ""
	I1204 21:20:30.239102   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.239116   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:30.239132   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:30.239200   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:30.282344   75464 cri.go:89] found id: ""
	I1204 21:20:30.282374   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.282383   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:30.282389   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:30.282439   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:30.316615   75464 cri.go:89] found id: ""
	I1204 21:20:30.316642   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.316653   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:30.316661   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:30.316764   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:30.352333   75464 cri.go:89] found id: ""
	I1204 21:20:30.352358   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.352368   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:30.352380   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:30.352393   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:30.406022   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:30.406058   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:30.419790   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:30.419819   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:30.485693   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:30.485717   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:30.485738   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:30.569313   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:30.569357   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:27.565367   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:30.064552   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:27.266699   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:29.765109   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:28.623821   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:30.628815   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:33.107542   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:33.121934   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:33.122007   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:33.154672   75464 cri.go:89] found id: ""
	I1204 21:20:33.154698   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.154709   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:33.154717   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:33.154784   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:33.189186   75464 cri.go:89] found id: ""
	I1204 21:20:33.189218   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.189229   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:33.189236   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:33.189291   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:33.217618   75464 cri.go:89] found id: ""
	I1204 21:20:33.217637   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.217651   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:33.217657   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:33.217704   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:33.246895   75464 cri.go:89] found id: ""
	I1204 21:20:33.246916   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.246923   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:33.246928   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:33.246970   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:33.278698   75464 cri.go:89] found id: ""
	I1204 21:20:33.278718   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.278725   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:33.278731   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:33.278771   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:33.307671   75464 cri.go:89] found id: ""
	I1204 21:20:33.307703   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.307721   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:33.307729   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:33.307791   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:33.342929   75464 cri.go:89] found id: ""
	I1204 21:20:33.342950   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.342958   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:33.342963   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:33.343009   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:33.374686   75464 cri.go:89] found id: ""
	I1204 21:20:33.374718   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.374730   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:33.374741   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:33.374758   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:33.424117   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:33.424153   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:33.437691   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:33.437724   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:33.517172   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:33.517196   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:33.517209   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:33.597299   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:33.597341   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:36.137849   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:36.152485   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:36.152544   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:36.186867   75464 cri.go:89] found id: ""
	I1204 21:20:36.186895   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.186906   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:36.186920   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:36.186983   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:36.220628   75464 cri.go:89] found id: ""
	I1204 21:20:36.220658   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.220671   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:36.220679   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:36.220735   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:36.254264   75464 cri.go:89] found id: ""
	I1204 21:20:36.254298   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.254310   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:36.254318   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:36.254384   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:36.290929   75464 cri.go:89] found id: ""
	I1204 21:20:36.290956   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.290964   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:36.290970   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:36.291016   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:32.566714   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:35.064488   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:32.266257   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:34.764171   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:36.764331   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:33.123727   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:35.623512   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:37.623921   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:36.326967   75464 cri.go:89] found id: ""
	I1204 21:20:36.326991   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.326999   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:36.327004   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:36.327072   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:36.366892   75464 cri.go:89] found id: ""
	I1204 21:20:36.366916   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.366924   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:36.366930   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:36.366990   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:36.405671   75464 cri.go:89] found id: ""
	I1204 21:20:36.405696   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.405703   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:36.405709   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:36.405762   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:36.439591   75464 cri.go:89] found id: ""
	I1204 21:20:36.439621   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.439628   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:36.439637   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:36.439650   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:36.505710   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:36.505737   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:36.505751   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:36.586111   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:36.586155   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:36.628086   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:36.628121   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:36.680152   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:36.680183   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:39.194223   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:39.207153   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:39.207230   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:39.240867   75464 cri.go:89] found id: ""
	I1204 21:20:39.240895   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.240903   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:39.240908   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:39.240959   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:39.274704   75464 cri.go:89] found id: ""
	I1204 21:20:39.274735   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.274742   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:39.274748   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:39.274800   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:39.307559   75464 cri.go:89] found id: ""
	I1204 21:20:39.307591   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.307601   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:39.307609   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:39.307671   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:39.355489   75464 cri.go:89] found id: ""
	I1204 21:20:39.355524   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.355536   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:39.355543   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:39.355610   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:39.395885   75464 cri.go:89] found id: ""
	I1204 21:20:39.395909   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.395917   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:39.395923   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:39.395976   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:39.428817   75464 cri.go:89] found id: ""
	I1204 21:20:39.428848   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.428858   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:39.428864   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:39.428929   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:39.463827   75464 cri.go:89] found id: ""
	I1204 21:20:39.463857   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.463870   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:39.463877   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:39.463926   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:39.496677   75464 cri.go:89] found id: ""
	I1204 21:20:39.496710   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.496721   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:39.496732   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:39.496755   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:39.533759   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:39.533787   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:39.586373   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:39.586409   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:39.599533   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:39.599568   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:39.670139   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:39.670164   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:39.670176   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:37.065197   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:39.065863   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:41.566053   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:38.765226   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:40.765268   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:39.624452   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:42.123452   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:42.245896   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:42.260604   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:42.260676   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:42.294051   75464 cri.go:89] found id: ""
	I1204 21:20:42.294078   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.294085   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:42.294094   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:42.294160   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:42.327361   75464 cri.go:89] found id: ""
	I1204 21:20:42.327408   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.327421   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:42.327428   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:42.327482   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:42.358701   75464 cri.go:89] found id: ""
	I1204 21:20:42.358731   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.358740   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:42.358746   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:42.358795   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:42.389837   75464 cri.go:89] found id: ""
	I1204 21:20:42.389863   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.389871   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:42.389877   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:42.389926   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:42.430495   75464 cri.go:89] found id: ""
	I1204 21:20:42.430522   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.430534   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:42.430541   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:42.430590   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:42.462918   75464 cri.go:89] found id: ""
	I1204 21:20:42.462949   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.462958   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:42.462963   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:42.463031   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:42.500726   75464 cri.go:89] found id: ""
	I1204 21:20:42.500754   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.500769   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:42.500776   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:42.500842   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:42.538601   75464 cri.go:89] found id: ""
	I1204 21:20:42.538628   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.538635   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:42.538644   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:42.538655   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:42.591308   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:42.591344   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:42.604221   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:42.604244   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:42.679954   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:42.679982   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:42.679999   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:42.768383   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:42.768422   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:45.312054   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:45.325206   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:45.325304   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:45.358781   75464 cri.go:89] found id: ""
	I1204 21:20:45.358809   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.358817   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:45.358824   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:45.358874   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:45.391920   75464 cri.go:89] found id: ""
	I1204 21:20:45.391945   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.391957   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:45.391964   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:45.392030   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:45.426546   75464 cri.go:89] found id: ""
	I1204 21:20:45.426570   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.426578   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:45.426583   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:45.426633   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:45.459432   75464 cri.go:89] found id: ""
	I1204 21:20:45.459462   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.459472   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:45.459479   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:45.459547   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:45.494217   75464 cri.go:89] found id: ""
	I1204 21:20:45.494256   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.494268   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:45.494276   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:45.494352   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:45.531417   75464 cri.go:89] found id: ""
	I1204 21:20:45.531446   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.531458   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:45.531473   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:45.531547   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:45.564973   75464 cri.go:89] found id: ""
	I1204 21:20:45.565005   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.565016   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:45.565024   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:45.565088   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:45.601285   75464 cri.go:89] found id: ""
	I1204 21:20:45.601315   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.601324   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:45.601333   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:45.601344   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:45.656229   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:45.656267   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:45.669851   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:45.669876   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:45.740674   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:45.740704   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:45.740720   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:45.845612   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:45.845657   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:44.065401   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:46.565091   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:42.765303   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:44.765539   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:44.123533   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:46.123595   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:48.389508   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:48.401989   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:48.402052   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:48.438477   75464 cri.go:89] found id: ""
	I1204 21:20:48.438502   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.438514   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:48.438521   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:48.438579   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:48.476096   75464 cri.go:89] found id: ""
	I1204 21:20:48.476129   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.476142   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:48.476151   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:48.476219   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:48.514085   75464 cri.go:89] found id: ""
	I1204 21:20:48.514112   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.514124   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:48.514132   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:48.514208   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:48.551360   75464 cri.go:89] found id: ""
	I1204 21:20:48.551409   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.551420   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:48.551428   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:48.551500   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:48.588424   75464 cri.go:89] found id: ""
	I1204 21:20:48.588463   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.588475   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:48.588483   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:48.588552   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:48.622842   75464 cri.go:89] found id: ""
	I1204 21:20:48.622868   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.622876   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:48.622881   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:48.622942   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:48.665525   75464 cri.go:89] found id: ""
	I1204 21:20:48.665575   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.665585   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:48.665592   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:48.665659   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:48.706554   75464 cri.go:89] found id: ""
	I1204 21:20:48.706581   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.706591   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:48.706602   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:48.706617   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:48.757835   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:48.757870   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:48.771967   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:48.772003   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:48.843093   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:48.843123   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:48.843140   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:48.919637   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:48.919681   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:49.064435   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:51.565505   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:47.265612   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:49.764186   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:51.766867   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:48.637538   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:51.123581   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:51.457865   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:51.472751   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:51.472827   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:51.514777   75464 cri.go:89] found id: ""
	I1204 21:20:51.514814   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.514827   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:51.514835   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:51.514904   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:51.563932   75464 cri.go:89] found id: ""
	I1204 21:20:51.563957   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.563968   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:51.563976   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:51.564042   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:51.606714   75464 cri.go:89] found id: ""
	I1204 21:20:51.606752   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.606765   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:51.606773   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:51.606837   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:51.641391   75464 cri.go:89] found id: ""
	I1204 21:20:51.641427   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.641438   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:51.641446   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:51.641502   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:51.674971   75464 cri.go:89] found id: ""
	I1204 21:20:51.675000   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.675011   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:51.675019   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:51.675082   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:51.709211   75464 cri.go:89] found id: ""
	I1204 21:20:51.709242   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.709250   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:51.709257   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:51.709306   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:51.742425   75464 cri.go:89] found id: ""
	I1204 21:20:51.742460   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.742472   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:51.742480   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:51.742534   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:51.782292   75464 cri.go:89] found id: ""
	I1204 21:20:51.782339   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.782351   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:51.782361   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:51.782380   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:51.833009   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:51.833040   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:51.846862   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:51.846905   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:51.911100   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:51.911129   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:51.911147   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:51.987841   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:51.987879   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:54.527097   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:54.541248   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:54.541344   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:54.582747   75464 cri.go:89] found id: ""
	I1204 21:20:54.582772   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.582780   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:54.582785   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:54.582844   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:54.615891   75464 cri.go:89] found id: ""
	I1204 21:20:54.615914   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.615922   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:54.615927   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:54.615983   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:54.648994   75464 cri.go:89] found id: ""
	I1204 21:20:54.649021   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.649031   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:54.649037   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:54.649095   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:54.683000   75464 cri.go:89] found id: ""
	I1204 21:20:54.683026   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.683034   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:54.683040   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:54.683100   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:54.715182   75464 cri.go:89] found id: ""
	I1204 21:20:54.715211   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.715221   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:54.715228   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:54.715290   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:54.752620   75464 cri.go:89] found id: ""
	I1204 21:20:54.752655   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.752667   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:54.752674   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:54.752740   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:54.790879   75464 cri.go:89] found id: ""
	I1204 21:20:54.790907   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.790919   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:54.790926   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:54.790994   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:54.824340   75464 cri.go:89] found id: ""
	I1204 21:20:54.824380   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.824393   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:54.824405   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:54.824428   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:54.874330   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:54.874365   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:54.887537   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:54.887565   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:54.958675   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:54.958697   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:54.958709   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:55.036909   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:55.036946   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:54.064786   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:56.066189   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:54.264177   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:56.264283   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:53.622703   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:55.623495   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:57.625197   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:57.576603   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:57.590013   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:57.590080   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:57.624654   75464 cri.go:89] found id: ""
	I1204 21:20:57.624690   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.624701   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:57.624710   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:57.624774   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:57.660404   75464 cri.go:89] found id: ""
	I1204 21:20:57.660445   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.660457   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:57.660464   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:57.660528   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:57.693444   75464 cri.go:89] found id: ""
	I1204 21:20:57.693472   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.693483   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:57.693491   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:57.693558   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:57.729361   75464 cri.go:89] found id: ""
	I1204 21:20:57.729387   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.729397   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:57.729403   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:57.729454   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:57.760508   75464 cri.go:89] found id: ""
	I1204 21:20:57.760535   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.760546   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:57.760554   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:57.760608   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:57.794110   75464 cri.go:89] found id: ""
	I1204 21:20:57.794133   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.794142   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:57.794151   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:57.794214   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:57.827907   75464 cri.go:89] found id: ""
	I1204 21:20:57.827936   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.827947   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:57.827954   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:57.828014   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:57.860714   75464 cri.go:89] found id: ""
	I1204 21:20:57.860742   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.860753   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:57.860763   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:57.860778   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:57.926898   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:57.926926   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:57.926943   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:58.000298   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:58.000328   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:58.035675   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:58.035708   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:58.086663   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:58.086698   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:21:00.600646   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:21:00.613485   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:21:00.613550   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:21:00.646324   75464 cri.go:89] found id: ""
	I1204 21:21:00.646349   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.646357   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:21:00.646362   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:21:00.646417   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:21:00.675779   75464 cri.go:89] found id: ""
	I1204 21:21:00.675802   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.675814   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:21:00.675821   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:21:00.675874   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:21:00.706244   75464 cri.go:89] found id: ""
	I1204 21:21:00.706264   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.706272   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:21:00.706278   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:21:00.706334   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:21:00.738086   75464 cri.go:89] found id: ""
	I1204 21:21:00.738114   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.738126   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:21:00.738134   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:21:00.738195   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:21:00.768646   75464 cri.go:89] found id: ""
	I1204 21:21:00.768671   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.768682   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:21:00.768690   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:21:00.768750   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:21:00.797939   75464 cri.go:89] found id: ""
	I1204 21:21:00.797960   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.797968   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:21:00.797973   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:21:00.798016   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:21:00.831928   75464 cri.go:89] found id: ""
	I1204 21:21:00.831959   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.831969   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:21:00.831977   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:21:00.832042   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:21:00.868462   75464 cri.go:89] found id: ""
	I1204 21:21:00.868489   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.868498   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:21:00.868506   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:21:00.868518   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:21:00.881721   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:21:00.881745   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:21:00.949263   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:21:00.949290   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:21:00.949307   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:21:01.031940   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:21:01.031990   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:21:01.070545   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:21:01.070577   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:58.565420   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:59.064856   75137 pod_ready.go:82] duration metric: took 4m0.006397932s for pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace to be "Ready" ...
	E1204 21:20:59.064881   75137 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1204 21:20:59.064889   75137 pod_ready.go:39] duration metric: took 4m8.671233417s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:20:59.064904   75137 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:20:59.064929   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:59.064974   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:59.119318   75137 cri.go:89] found id: "8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78"
	I1204 21:20:59.119340   75137 cri.go:89] found id: ""
	I1204 21:20:59.119347   75137 logs.go:282] 1 containers: [8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78]
	I1204 21:20:59.119421   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.125106   75137 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:59.125184   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:59.159498   75137 cri.go:89] found id: "e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98"
	I1204 21:20:59.159519   75137 cri.go:89] found id: ""
	I1204 21:20:59.159526   75137 logs.go:282] 1 containers: [e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98]
	I1204 21:20:59.159572   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.163228   75137 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:59.163302   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:59.198005   75137 cri.go:89] found id: "58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78"
	I1204 21:20:59.198031   75137 cri.go:89] found id: ""
	I1204 21:20:59.198039   75137 logs.go:282] 1 containers: [58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78]
	I1204 21:20:59.198083   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.202213   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:59.202280   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:59.236775   75137 cri.go:89] found id: "e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df"
	I1204 21:20:59.236796   75137 cri.go:89] found id: ""
	I1204 21:20:59.236803   75137 logs.go:282] 1 containers: [e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df]
	I1204 21:20:59.236852   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.241518   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:59.241600   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:59.279894   75137 cri.go:89] found id: "a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5"
	I1204 21:20:59.279924   75137 cri.go:89] found id: ""
	I1204 21:20:59.279934   75137 logs.go:282] 1 containers: [a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5]
	I1204 21:20:59.279990   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.284325   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:59.284394   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:59.328082   75137 cri.go:89] found id: "982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9"
	I1204 21:20:59.328107   75137 cri.go:89] found id: ""
	I1204 21:20:59.328117   75137 logs.go:282] 1 containers: [982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9]
	I1204 21:20:59.328178   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.332337   75137 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:59.332415   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:59.368110   75137 cri.go:89] found id: ""
	I1204 21:20:59.368135   75137 logs.go:282] 0 containers: []
	W1204 21:20:59.368144   75137 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:59.368149   75137 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1204 21:20:59.368193   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1204 21:20:59.404941   75137 cri.go:89] found id: "07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317"
	I1204 21:20:59.404966   75137 cri.go:89] found id: "05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4"
	I1204 21:20:59.404972   75137 cri.go:89] found id: ""
	I1204 21:20:59.404980   75137 logs.go:282] 2 containers: [07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317 05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4]
	I1204 21:20:59.405041   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.409016   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.412752   75137 logs.go:123] Gathering logs for etcd [e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98] ...
	I1204 21:20:59.412783   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98"
	I1204 21:20:59.463143   75137 logs.go:123] Gathering logs for kube-scheduler [e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df] ...
	I1204 21:20:59.463178   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df"
	I1204 21:20:59.498782   75137 logs.go:123] Gathering logs for kube-controller-manager [982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9] ...
	I1204 21:20:59.498812   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9"
	I1204 21:20:59.555339   75137 logs.go:123] Gathering logs for storage-provisioner [07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317] ...
	I1204 21:20:59.555393   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317"
	I1204 21:20:59.591238   75137 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:59.591267   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:21:00.084121   75137 logs.go:123] Gathering logs for kubelet ...
	I1204 21:21:00.084161   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:21:00.154228   75137 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:21:00.154265   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 21:21:00.284768   75137 logs.go:123] Gathering logs for kube-apiserver [8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78] ...
	I1204 21:21:00.284802   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78"
	I1204 21:21:00.328421   75137 logs.go:123] Gathering logs for storage-provisioner [05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4] ...
	I1204 21:21:00.328452   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4"
	I1204 21:21:00.363327   75137 logs.go:123] Gathering logs for container status ...
	I1204 21:21:00.363352   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:21:00.402072   75137 logs.go:123] Gathering logs for dmesg ...
	I1204 21:21:00.402101   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:21:00.414448   75137 logs.go:123] Gathering logs for coredns [58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78] ...
	I1204 21:21:00.414471   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78"
	I1204 21:21:00.446721   75137 logs.go:123] Gathering logs for kube-proxy [a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5] ...
	I1204 21:21:00.446747   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5"
	I1204 21:20:58.265181   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:00.266303   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:00.124482   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:02.623096   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:03.620358   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:21:03.634415   75464 kubeadm.go:597] duration metric: took 4m4.247057397s to restartPrimaryControlPlane
	W1204 21:21:03.634499   75464 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1204 21:21:03.634530   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1204 21:21:02.985608   75137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:21:03.002352   75137 api_server.go:72] duration metric: took 4m20.333935611s to wait for apiserver process to appear ...
	I1204 21:21:03.002379   75137 api_server.go:88] waiting for apiserver healthz status ...
	I1204 21:21:03.002420   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:21:03.002475   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:21:03.043343   75137 cri.go:89] found id: "8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78"
	I1204 21:21:03.043387   75137 cri.go:89] found id: ""
	I1204 21:21:03.043398   75137 logs.go:282] 1 containers: [8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78]
	I1204 21:21:03.043451   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.047523   75137 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:21:03.047591   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:21:03.085843   75137 cri.go:89] found id: "e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98"
	I1204 21:21:03.085868   75137 cri.go:89] found id: ""
	I1204 21:21:03.085878   75137 logs.go:282] 1 containers: [e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98]
	I1204 21:21:03.085936   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.089957   75137 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:21:03.090008   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:21:03.124571   75137 cri.go:89] found id: "58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78"
	I1204 21:21:03.124590   75137 cri.go:89] found id: ""
	I1204 21:21:03.124597   75137 logs.go:282] 1 containers: [58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78]
	I1204 21:21:03.124633   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.128183   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:21:03.128241   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:21:03.159912   75137 cri.go:89] found id: "e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df"
	I1204 21:21:03.159935   75137 cri.go:89] found id: ""
	I1204 21:21:03.159942   75137 logs.go:282] 1 containers: [e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df]
	I1204 21:21:03.159991   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.163882   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:21:03.163934   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:21:03.202966   75137 cri.go:89] found id: "a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5"
	I1204 21:21:03.202983   75137 cri.go:89] found id: ""
	I1204 21:21:03.202990   75137 logs.go:282] 1 containers: [a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5]
	I1204 21:21:03.203028   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.206601   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:21:03.206656   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:21:03.239436   75137 cri.go:89] found id: "982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9"
	I1204 21:21:03.239461   75137 cri.go:89] found id: ""
	I1204 21:21:03.239471   75137 logs.go:282] 1 containers: [982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9]
	I1204 21:21:03.239522   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.243345   75137 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:21:03.243409   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:21:03.284225   75137 cri.go:89] found id: ""
	I1204 21:21:03.284260   75137 logs.go:282] 0 containers: []
	W1204 21:21:03.284269   75137 logs.go:284] No container was found matching "kindnet"
	I1204 21:21:03.284275   75137 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1204 21:21:03.284329   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1204 21:21:03.320487   75137 cri.go:89] found id: "07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317"
	I1204 21:21:03.320510   75137 cri.go:89] found id: "05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4"
	I1204 21:21:03.320514   75137 cri.go:89] found id: ""
	I1204 21:21:03.320520   75137 logs.go:282] 2 containers: [07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317 05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4]
	I1204 21:21:03.320572   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.324553   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.328284   75137 logs.go:123] Gathering logs for kubelet ...
	I1204 21:21:03.328307   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:21:03.398873   75137 logs.go:123] Gathering logs for kube-apiserver [8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78] ...
	I1204 21:21:03.398914   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78"
	I1204 21:21:03.452146   75137 logs.go:123] Gathering logs for kube-proxy [a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5] ...
	I1204 21:21:03.452175   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5"
	I1204 21:21:03.489830   75137 logs.go:123] Gathering logs for storage-provisioner [05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4] ...
	I1204 21:21:03.489860   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4"
	I1204 21:21:03.525086   75137 logs.go:123] Gathering logs for container status ...
	I1204 21:21:03.525115   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:21:03.569090   75137 logs.go:123] Gathering logs for kube-controller-manager [982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9] ...
	I1204 21:21:03.569123   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9"
	I1204 21:21:03.634685   75137 logs.go:123] Gathering logs for storage-provisioner [07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317] ...
	I1204 21:21:03.634714   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317"
	I1204 21:21:03.670229   75137 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:21:03.670258   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:21:04.127440   75137 logs.go:123] Gathering logs for dmesg ...
	I1204 21:21:04.127483   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:21:04.143058   75137 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:21:04.143102   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 21:21:04.254811   75137 logs.go:123] Gathering logs for etcd [e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98] ...
	I1204 21:21:04.254847   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98"
	I1204 21:21:04.310269   75137 logs.go:123] Gathering logs for coredns [58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78] ...
	I1204 21:21:04.310303   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78"
	I1204 21:21:04.344331   75137 logs.go:123] Gathering logs for kube-scheduler [e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df] ...
	I1204 21:21:04.344365   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df"
	I1204 21:21:06.883632   75137 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I1204 21:21:06.887845   75137 api_server.go:279] https://192.168.39.82:8443/healthz returned 200:
	ok
	I1204 21:21:06.888685   75137 api_server.go:141] control plane version: v1.31.2
	I1204 21:21:06.888701   75137 api_server.go:131] duration metric: took 3.886315455s to wait for apiserver health ...
	I1204 21:21:06.888708   75137 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 21:21:06.888730   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:21:06.888774   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:21:06.930295   75137 cri.go:89] found id: "8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78"
	I1204 21:21:06.930316   75137 cri.go:89] found id: ""
	I1204 21:21:06.930324   75137 logs.go:282] 1 containers: [8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78]
	I1204 21:21:06.930372   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:06.934529   75137 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:21:06.934620   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:21:06.970613   75137 cri.go:89] found id: "e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98"
	I1204 21:21:06.970641   75137 cri.go:89] found id: ""
	I1204 21:21:06.970651   75137 logs.go:282] 1 containers: [e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98]
	I1204 21:21:06.970696   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:06.974756   75137 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:21:06.974824   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:21:07.010285   75137 cri.go:89] found id: "58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78"
	I1204 21:21:07.010310   75137 cri.go:89] found id: ""
	I1204 21:21:07.010319   75137 logs.go:282] 1 containers: [58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78]
	I1204 21:21:07.010362   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:02.764114   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:04.764230   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:06.764928   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:04.623324   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:06.624331   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:08.140159   75464 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.505600399s)
	I1204 21:21:08.140254   75464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:21:08.159450   75464 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:21:08.169756   75464 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:21:08.179705   75464 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:21:08.179729   75464 kubeadm.go:157] found existing configuration files:
	
	I1204 21:21:08.179783   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:21:08.188796   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:21:08.188871   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:21:08.197758   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:21:08.206347   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:21:08.206409   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:21:08.215431   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:21:08.224674   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:21:08.224737   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:21:08.234337   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:21:08.243774   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:21:08.243833   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
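(The grep/rm sequence above is minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is checked for the expected control-plane endpoint and removed if it is missing or does not reference it, so kubeadm can regenerate it. A minimal Go sketch of that check-and-remove logic is below; the function name is hypothetical and the real minikube code runs these steps over SSH rather than locally.)

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// removeStaleKubeconfigs mirrors the grep/rm sequence in the log above: any
// config that does not mention the expected control-plane endpoint is treated
// as stale and removed. Illustrative sketch only, not minikube's actual code.
func removeStaleKubeconfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: remove it so kubeadm can rewrite it.
			if rmErr := os.Remove(p); rmErr != nil && !os.IsNotExist(rmErr) {
				fmt.Printf("could not remove %s: %v\n", p, rmErr)
				continue
			}
			fmt.Printf("removed stale config %s\n", p)
		}
	}
}

func main() {
	// Endpoint and file list taken from the log lines above.
	removeStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
```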
	I1204 21:21:08.253498   75464 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 21:21:08.321237   75464 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1204 21:21:08.321370   75464 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 21:21:08.458714   75464 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 21:21:08.458866   75464 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 21:21:08.459026   75464 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1204 21:21:08.639536   75464 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 21:21:08.641635   75464 out.go:235]   - Generating certificates and keys ...
	I1204 21:21:08.641739   75464 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 21:21:08.641826   75464 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 21:21:08.641935   75464 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1204 21:21:08.642068   75464 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1204 21:21:08.642175   75464 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1204 21:21:08.642223   75464 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1204 21:21:08.642498   75464 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1204 21:21:08.642914   75464 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1204 21:21:08.643567   75464 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1204 21:21:08.644276   75464 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1204 21:21:08.644502   75464 kubeadm.go:310] [certs] Using the existing "sa" key
	I1204 21:21:08.644553   75464 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 21:21:08.800107   75464 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 21:21:08.920050   75464 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 21:21:09.376869   75464 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 21:21:09.463826   75464 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 21:21:09.479167   75464 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 21:21:09.479321   75464 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 21:21:09.479434   75464 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 21:21:09.606736   75464 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 21:21:07.014564   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:21:07.014628   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:21:07.054654   75137 cri.go:89] found id: "e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df"
	I1204 21:21:07.054678   75137 cri.go:89] found id: ""
	I1204 21:21:07.054686   75137 logs.go:282] 1 containers: [e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df]
	I1204 21:21:07.054734   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:07.058625   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:21:07.058683   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:21:07.094238   75137 cri.go:89] found id: "a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5"
	I1204 21:21:07.094280   75137 cri.go:89] found id: ""
	I1204 21:21:07.094291   75137 logs.go:282] 1 containers: [a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5]
	I1204 21:21:07.094359   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:07.098427   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:21:07.098484   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:21:07.135055   75137 cri.go:89] found id: "982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9"
	I1204 21:21:07.135079   75137 cri.go:89] found id: ""
	I1204 21:21:07.135088   75137 logs.go:282] 1 containers: [982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9]
	I1204 21:21:07.135145   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:07.139488   75137 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:21:07.139564   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:21:07.175963   75137 cri.go:89] found id: ""
	I1204 21:21:07.175989   75137 logs.go:282] 0 containers: []
	W1204 21:21:07.176002   75137 logs.go:284] No container was found matching "kindnet"
	I1204 21:21:07.176009   75137 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1204 21:21:07.176069   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1204 21:21:07.212003   75137 cri.go:89] found id: "07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317"
	I1204 21:21:07.212034   75137 cri.go:89] found id: "05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4"
	I1204 21:21:07.212040   75137 cri.go:89] found id: ""
	I1204 21:21:07.212050   75137 logs.go:282] 2 containers: [07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317 05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4]
	I1204 21:21:07.212115   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:07.216184   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:07.219773   75137 logs.go:123] Gathering logs for dmesg ...
	I1204 21:21:07.219803   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:21:07.233282   75137 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:21:07.233307   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 21:21:07.341593   75137 logs.go:123] Gathering logs for etcd [e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98] ...
	I1204 21:21:07.341626   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98"
	I1204 21:21:07.393994   75137 logs.go:123] Gathering logs for kube-scheduler [e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df] ...
	I1204 21:21:07.394024   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df"
	I1204 21:21:07.437177   75137 logs.go:123] Gathering logs for storage-provisioner [07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317] ...
	I1204 21:21:07.437205   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317"
	I1204 21:21:07.469913   75137 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:21:07.469952   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:21:07.822608   75137 logs.go:123] Gathering logs for container status ...
	I1204 21:21:07.822652   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:21:07.861671   75137 logs.go:123] Gathering logs for kubelet ...
	I1204 21:21:07.861703   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:21:07.933833   75137 logs.go:123] Gathering logs for kube-apiserver [8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78] ...
	I1204 21:21:07.933876   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78"
	I1204 21:21:07.976184   75137 logs.go:123] Gathering logs for coredns [58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78] ...
	I1204 21:21:07.976215   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78"
	I1204 21:21:08.011181   75137 logs.go:123] Gathering logs for kube-proxy [a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5] ...
	I1204 21:21:08.011206   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5"
	I1204 21:21:08.053404   75137 logs.go:123] Gathering logs for kube-controller-manager [982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9] ...
	I1204 21:21:08.053430   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9"
	I1204 21:21:08.113301   75137 logs.go:123] Gathering logs for storage-provisioner [05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4] ...
	I1204 21:21:08.113402   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4"
	I1204 21:21:10.665164   75137 system_pods.go:59] 8 kube-system pods found
	I1204 21:21:10.665195   75137 system_pods.go:61] "coredns-7c65d6cfc9-ct5xn" [be113b96-b21f-4fd5-8cd9-11b149a0a838] Running
	I1204 21:21:10.665200   75137 system_pods.go:61] "etcd-embed-certs-566991" [23603883-2c42-48ff-95f5-d58f04bab630] Running
	I1204 21:21:10.665204   75137 system_pods.go:61] "kube-apiserver-embed-certs-566991" [880279d0-9c57-44b1-b223-cea07fc8552e] Running
	I1204 21:21:10.665208   75137 system_pods.go:61] "kube-controller-manager-embed-certs-566991" [1512be05-cbf1-48ca-a0a5-db1e320040e0] Running
	I1204 21:21:10.665211   75137 system_pods.go:61] "kube-proxy-4fv72" [22b84591-6767-4414-9869-9d89206a03f2] Running
	I1204 21:21:10.665215   75137 system_pods.go:61] "kube-scheduler-embed-certs-566991" [1eca2a77-0f2a-4d94-992e-22acf8f54649] Running
	I1204 21:21:10.665220   75137 system_pods.go:61] "metrics-server-6867b74b74-9vlcd" [1acb08f3-e403-458d-b3e2-e32c07da6afb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:21:10.665225   75137 system_pods.go:61] "storage-provisioner" [f8acdb07-16e7-457f-81b8-85416b849890] Running
	I1204 21:21:10.665234   75137 system_pods.go:74] duration metric: took 3.776519738s to wait for pod list to return data ...
	I1204 21:21:10.665240   75137 default_sa.go:34] waiting for default service account to be created ...
	I1204 21:21:10.667483   75137 default_sa.go:45] found service account: "default"
	I1204 21:21:10.667501   75137 default_sa.go:55] duration metric: took 2.252763ms for default service account to be created ...
	I1204 21:21:10.667508   75137 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 21:21:10.671331   75137 system_pods.go:86] 8 kube-system pods found
	I1204 21:21:10.671351   75137 system_pods.go:89] "coredns-7c65d6cfc9-ct5xn" [be113b96-b21f-4fd5-8cd9-11b149a0a838] Running
	I1204 21:21:10.671356   75137 system_pods.go:89] "etcd-embed-certs-566991" [23603883-2c42-48ff-95f5-d58f04bab630] Running
	I1204 21:21:10.671360   75137 system_pods.go:89] "kube-apiserver-embed-certs-566991" [880279d0-9c57-44b1-b223-cea07fc8552e] Running
	I1204 21:21:10.671363   75137 system_pods.go:89] "kube-controller-manager-embed-certs-566991" [1512be05-cbf1-48ca-a0a5-db1e320040e0] Running
	I1204 21:21:10.671366   75137 system_pods.go:89] "kube-proxy-4fv72" [22b84591-6767-4414-9869-9d89206a03f2] Running
	I1204 21:21:10.671386   75137 system_pods.go:89] "kube-scheduler-embed-certs-566991" [1eca2a77-0f2a-4d94-992e-22acf8f54649] Running
	I1204 21:21:10.671396   75137 system_pods.go:89] "metrics-server-6867b74b74-9vlcd" [1acb08f3-e403-458d-b3e2-e32c07da6afb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:21:10.671402   75137 system_pods.go:89] "storage-provisioner" [f8acdb07-16e7-457f-81b8-85416b849890] Running
	I1204 21:21:10.671414   75137 system_pods.go:126] duration metric: took 3.900254ms to wait for k8s-apps to be running ...
	I1204 21:21:10.671426   75137 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 21:21:10.671467   75137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:21:10.687086   75137 system_svc.go:56] duration metric: took 15.655514ms WaitForService to wait for kubelet
	I1204 21:21:10.687105   75137 kubeadm.go:582] duration metric: took 4m28.018694904s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 21:21:10.687123   75137 node_conditions.go:102] verifying NodePressure condition ...
	I1204 21:21:10.689250   75137 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 21:21:10.689267   75137 node_conditions.go:123] node cpu capacity is 2
	I1204 21:21:10.689277   75137 node_conditions.go:105] duration metric: took 2.149506ms to run NodePressure ...
	I1204 21:21:10.689287   75137 start.go:241] waiting for startup goroutines ...
	I1204 21:21:10.689296   75137 start.go:246] waiting for cluster config update ...
	I1204 21:21:10.689306   75137 start.go:255] writing updated cluster config ...
	I1204 21:21:10.689547   75137 ssh_runner.go:195] Run: rm -f paused
	I1204 21:21:10.738387   75137 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1204 21:21:10.740254   75137 out.go:177] * Done! kubectl is now configured to use "embed-certs-566991" cluster and "default" namespace by default
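(The embed-certs-566991 start above completes only after the apiserver health probe succeeds, i.e. the earlier "Checking apiserver healthz at https://192.168.39.82:8443/healthz ... returned 200" lines. The Go sketch below shows that kind of polling probe; the endpoint URL, the poll interval, and skipping TLS verification are assumptions for illustration, not minikube's actual implementation.)

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz polls an apiserver /healthz endpoint until it returns 200 "ok"
// or the timeout expires, mirroring the "Checking apiserver healthz" lines in
// the log above. Illustrative sketch only.
func probeHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: the apiserver cert is not trusted by this host, so
		// verification is skipped for the probe.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, string(body))
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
}

func main() {
	// Example endpoint taken from the log; adjust for your cluster.
	if err := probeHealthz("https://192.168.39.82:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```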
	I1204 21:21:09.608599   75464 out.go:235]   - Booting up control plane ...
	I1204 21:21:09.608729   75464 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 21:21:09.613477   75464 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 21:21:09.614444   75464 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 21:21:09.623091   75464 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 21:21:09.626249   75464 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1204 21:21:08.765095   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:10.765470   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:09.125585   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:11.624603   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:13.264238   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:15.265563   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:13.624873   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:16.123483   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:17.764078   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:19.765682   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:18.626401   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:21.125606   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:22.264711   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:24.265632   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:26.764992   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:23.623351   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:25.623547   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:27.624579   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:28.765133   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:31.264203   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:30.123937   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:32.623876   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:33.264732   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:35.765165   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:35.123685   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:37.123863   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:38.264907   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:40.265233   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:39.124651   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:40.117461   75746 pod_ready.go:82] duration metric: took 4m0.000125257s for pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace to be "Ready" ...
	E1204 21:21:40.117486   75746 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace to be "Ready" (will not retry!)
	I1204 21:21:40.117508   75746 pod_ready.go:39] duration metric: took 4m13.544219225s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:21:40.117564   75746 kubeadm.go:597] duration metric: took 4m22.244889794s to restartPrimaryControlPlane
	W1204 21:21:40.117617   75746 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1204 21:21:40.117646   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1204 21:21:42.764614   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:44.765642   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:49.627118   75464 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1204 21:21:49.627744   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:21:49.627940   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:21:47.264873   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:49.765483   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:54.628283   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:21:54.628526   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:21:52.264073   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:54.264333   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:56.267410   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:58.764653   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:00.765653   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:04.628774   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:22:04.629010   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:22:06.288530   75746 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.170858751s)
	I1204 21:22:06.288613   75746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:22:06.309458   75746 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:22:06.322805   75746 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:22:06.336482   75746 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:22:06.336508   75746 kubeadm.go:157] found existing configuration files:
	
	I1204 21:22:06.336558   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1204 21:22:06.348599   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:22:06.348656   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:22:06.362232   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1204 21:22:06.379259   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:22:06.379348   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:22:06.411281   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1204 21:22:06.422033   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:22:06.422108   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:22:06.432505   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1204 21:22:06.441734   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:22:06.441789   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:22:06.451237   75746 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 21:22:06.498732   75746 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1204 21:22:06.498852   75746 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 21:22:06.614368   75746 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 21:22:06.614469   75746 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 21:22:06.614599   75746 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1204 21:22:06.623454   75746 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 21:22:03.264992   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:05.765395   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:06.625133   75746 out.go:235]   - Generating certificates and keys ...
	I1204 21:22:06.625245   75746 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 21:22:06.625364   75746 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 21:22:06.625491   75746 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1204 21:22:06.625594   75746 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1204 21:22:06.625712   75746 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1204 21:22:06.625792   75746 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1204 21:22:06.625889   75746 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1204 21:22:06.625984   75746 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1204 21:22:06.626100   75746 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1204 21:22:06.626210   75746 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1204 21:22:06.626277   75746 kubeadm.go:310] [certs] Using the existing "sa" key
	I1204 21:22:06.626348   75746 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 21:22:06.726450   75746 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 21:22:06.873790   75746 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1204 21:22:07.175994   75746 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 21:22:07.250702   75746 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 21:22:07.320319   75746 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 21:22:07.320901   75746 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 21:22:07.323434   75746 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 21:22:07.325316   75746 out.go:235]   - Booting up control plane ...
	I1204 21:22:07.325446   75746 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 21:22:07.325543   75746 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 21:22:07.326549   75746 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 21:22:07.347127   75746 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 21:22:07.353453   75746 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 21:22:07.353587   75746 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 21:22:07.488768   75746 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1204 21:22:07.488952   75746 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1204 21:22:07.765784   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:10.265661   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:11.758507   75012 pod_ready.go:82] duration metric: took 4m0.000236813s for pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace to be "Ready" ...
	E1204 21:22:11.758550   75012 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace to be "Ready" (will not retry!)
	I1204 21:22:11.758567   75012 pod_ready.go:39] duration metric: took 4m14.511728433s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:22:11.758593   75012 kubeadm.go:597] duration metric: took 4m21.138454983s to restartPrimaryControlPlane
	W1204 21:22:11.758643   75012 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1204 21:22:11.758668   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1204 21:22:07.993325   75746 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 504.943417ms
	I1204 21:22:07.993405   75746 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1204 21:22:12.997741   75746 kubeadm.go:310] [api-check] The API server is healthy after 5.001906934s
	I1204 21:22:13.012187   75746 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1204 21:22:13.029586   75746 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1204 21:22:13.062375   75746 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1204 21:22:13.062633   75746 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-439360 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1204 21:22:13.077941   75746 kubeadm.go:310] [bootstrap-token] Using token: 5mut2g.pz4sir8q7093cs2b
	I1204 21:22:13.079394   75746 out.go:235]   - Configuring RBAC rules ...
	I1204 21:22:13.079556   75746 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1204 21:22:13.088458   75746 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1204 21:22:13.095952   75746 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1204 21:22:13.103530   75746 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1204 21:22:13.106875   75746 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1204 21:22:13.110658   75746 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1204 21:22:13.404565   75746 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1204 21:22:13.831997   75746 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1204 21:22:14.404650   75746 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1204 21:22:14.404678   75746 kubeadm.go:310] 
	I1204 21:22:14.404764   75746 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1204 21:22:14.404789   75746 kubeadm.go:310] 
	I1204 21:22:14.404894   75746 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1204 21:22:14.404903   75746 kubeadm.go:310] 
	I1204 21:22:14.404930   75746 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1204 21:22:14.404981   75746 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1204 21:22:14.405060   75746 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1204 21:22:14.405088   75746 kubeadm.go:310] 
	I1204 21:22:14.405203   75746 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1204 21:22:14.405216   75746 kubeadm.go:310] 
	I1204 21:22:14.405286   75746 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1204 21:22:14.405296   75746 kubeadm.go:310] 
	I1204 21:22:14.405370   75746 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1204 21:22:14.405487   75746 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1204 21:22:14.405604   75746 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1204 21:22:14.405621   75746 kubeadm.go:310] 
	I1204 21:22:14.405701   75746 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1204 21:22:14.405772   75746 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1204 21:22:14.405781   75746 kubeadm.go:310] 
	I1204 21:22:14.405853   75746 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 5mut2g.pz4sir8q7093cs2b \
	I1204 21:22:14.406000   75746 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 \
	I1204 21:22:14.406034   75746 kubeadm.go:310] 	--control-plane 
	I1204 21:22:14.406043   75746 kubeadm.go:310] 
	I1204 21:22:14.406112   75746 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1204 21:22:14.406119   75746 kubeadm.go:310] 
	I1204 21:22:14.406241   75746 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 5mut2g.pz4sir8q7093cs2b \
	I1204 21:22:14.406397   75746 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 
	I1204 21:22:14.407013   75746 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 21:22:14.407049   75746 cni.go:84] Creating CNI manager for ""
	I1204 21:22:14.407060   75746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:22:14.408949   75746 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 21:22:14.410361   75746 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 21:22:14.420749   75746 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1204 21:22:14.439214   75746 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 21:22:14.439295   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:14.439322   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-439360 minikube.k8s.io/updated_at=2024_12_04T21_22_14_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59 minikube.k8s.io/name=default-k8s-diff-port-439360 minikube.k8s.io/primary=true
	I1204 21:22:14.459582   75746 ops.go:34] apiserver oom_adj: -16
	I1204 21:22:14.637938   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:15.138980   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:15.638942   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:16.138381   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:16.638528   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:17.138320   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:17.637995   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:18.138540   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:18.638754   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:19.138113   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:19.246385   75746 kubeadm.go:1113] duration metric: took 4.807160948s to wait for elevateKubeSystemPrivileges
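(The repeated "kubectl get sa default" runs above are minikube waiting for the default service account to exist before finishing cluster bring-up. The Go sketch below reproduces that wait by shelling out to the same command; it is a sketch under stated assumptions, namely that kubectl is on the local PATH and the kubeconfig path is readable, whereas minikube runs the command over SSH inside the node.)

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultServiceAccount re-runs "kubectl get sa default" until it
// succeeds or the timeout expires, mirroring the repeated get-sa lines in the
// log above. Illustrative sketch only.
func waitForDefaultServiceAccount(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // the default service account exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not created within %s", timeout)
}

func main() {
	// Kubeconfig path taken from the log; adjust for your environment.
	if err := waitForDefaultServiceAccount("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```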
	I1204 21:22:19.246430   75746 kubeadm.go:394] duration metric: took 5m1.419721853s to StartCluster
	I1204 21:22:19.246455   75746 settings.go:142] acquiring lock: {Name:mk51df5708ef0b8fe125ead566b8d3e857234e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:22:19.246556   75746 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:22:19.249082   75746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/kubeconfig: {Name:mk338cb7deb77a607d0c199d94a556bdfd19bef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:22:19.249393   75746 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.171 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 21:22:19.249684   75746 config.go:182] Loaded profile config "default-k8s-diff-port-439360": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:22:19.249745   75746 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 21:22:19.249861   75746 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-439360"
	I1204 21:22:19.249884   75746 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-439360"
	W1204 21:22:19.249896   75746 addons.go:243] addon storage-provisioner should already be in state true
	I1204 21:22:19.249928   75746 host.go:66] Checking if "default-k8s-diff-port-439360" exists ...
	I1204 21:22:19.250440   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:19.250479   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:19.250557   75746 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-439360"
	I1204 21:22:19.250580   75746 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-439360"
	I1204 21:22:19.250737   75746 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-439360"
	I1204 21:22:19.250757   75746 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-439360"
	W1204 21:22:19.250765   75746 addons.go:243] addon metrics-server should already be in state true
	I1204 21:22:19.250798   75746 host.go:66] Checking if "default-k8s-diff-port-439360" exists ...
	I1204 21:22:19.251048   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:19.251091   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:19.251249   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:19.251294   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:19.251622   75746 out.go:177] * Verifying Kubernetes components...
	I1204 21:22:19.252993   75746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:22:19.269179   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44783
	I1204 21:22:19.269441   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35391
	I1204 21:22:19.269740   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:19.269833   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:19.270300   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:22:19.270324   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:19.270400   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:22:19.270418   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:19.270418   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34247
	I1204 21:22:19.270725   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:19.270832   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:19.270866   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:19.270904   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetState
	I1204 21:22:19.271326   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:22:19.271337   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:19.271415   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:19.271463   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:19.271686   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:19.272330   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:19.272388   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:19.274803   75746 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-439360"
	W1204 21:22:19.274824   75746 addons.go:243] addon default-storageclass should already be in state true
	I1204 21:22:19.274853   75746 host.go:66] Checking if "default-k8s-diff-port-439360" exists ...
	I1204 21:22:19.275234   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:19.275267   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:19.291309   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40009
	I1204 21:22:19.291961   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:19.291985   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41279
	I1204 21:22:19.292400   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:22:19.292420   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:19.292783   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:19.292833   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:19.293039   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetState
	I1204 21:22:19.293113   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36479
	I1204 21:22:19.293349   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:22:19.293362   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:19.293726   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:19.294210   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:19.294239   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:19.294431   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:19.294890   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:22:19.294908   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:19.295400   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:19.295584   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetState
	I1204 21:22:19.295720   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:22:19.297304   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:22:19.297592   75746 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:22:19.298747   75746 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1204 21:22:19.299871   75746 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:22:19.299895   75746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 21:22:19.299916   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:22:19.301582   75746 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1204 21:22:19.301598   75746 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1204 21:22:19.301612   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:22:19.303499   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:22:19.305018   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:22:19.305367   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:22:19.305393   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:22:19.305566   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:22:19.305775   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:22:19.305848   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:22:19.305869   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:22:19.305912   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:22:19.306121   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:22:19.306313   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:22:19.306389   75746 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa Username:docker}
	I1204 21:22:19.306691   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:22:19.306872   75746 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa Username:docker}
	I1204 21:22:19.314163   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42045
	I1204 21:22:19.314569   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:19.315106   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:22:19.315134   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:19.315690   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:19.315993   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetState
	I1204 21:22:19.317928   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:22:19.318171   75746 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 21:22:19.318182   75746 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 21:22:19.318195   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:22:19.321203   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:22:19.321582   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:22:19.321599   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:22:19.321855   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:22:19.322059   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:22:19.322226   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:22:19.322367   75746 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa Username:docker}
	I1204 21:22:19.522886   75746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:22:19.577656   75746 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-439360" to be "Ready" ...
	I1204 21:22:19.586712   75746 node_ready.go:49] node "default-k8s-diff-port-439360" has status "Ready":"True"
	I1204 21:22:19.586737   75746 node_ready.go:38] duration metric: took 9.034653ms for node "default-k8s-diff-port-439360" to be "Ready" ...
	I1204 21:22:19.586745   75746 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:22:19.595683   75746 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4jmcl" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:19.650177   75746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:22:19.708333   75746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 21:22:19.721106   75746 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1204 21:22:19.721151   75746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1204 21:22:19.793058   75746 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1204 21:22:19.793105   75746 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1204 21:22:19.926884   75746 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 21:22:19.926911   75746 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1204 21:22:20.028322   75746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 21:22:20.668142   75746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.017919983s)
	I1204 21:22:20.668197   75746 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:20.668200   75746 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:20.668223   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Close
	I1204 21:22:20.668211   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Close
	I1204 21:22:20.668613   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Closing plugin on server side
	I1204 21:22:20.668627   75746 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:20.668640   75746 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:20.668660   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Closing plugin on server side
	I1204 21:22:20.668687   75746 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:20.668701   75746 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:20.668710   75746 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:20.668729   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Close
	I1204 21:22:20.668663   75746 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:20.668789   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Close
	I1204 21:22:20.668936   75746 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:20.668981   75746 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:20.670242   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Closing plugin on server side
	I1204 21:22:20.670255   75746 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:20.670276   75746 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:20.713659   75746 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:20.713680   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Close
	I1204 21:22:20.714056   75746 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:20.714107   75746 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:20.714076   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Closing plugin on server side
	I1204 21:22:21.064703   75746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.03633998s)
	I1204 21:22:21.064768   75746 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:21.064783   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Close
	I1204 21:22:21.065188   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Closing plugin on server side
	I1204 21:22:21.065197   75746 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:21.065212   75746 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:21.065220   75746 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:21.065233   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Close
	I1204 21:22:21.065472   75746 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:21.065490   75746 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:21.065502   75746 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-439360"
	I1204 21:22:21.067198   75746 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1204 21:22:21.068410   75746 addons.go:510] duration metric: took 1.818663539s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1204 21:22:21.602398   75746 pod_ready.go:93] pod "coredns-7c65d6cfc9-4jmcl" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:21.602428   75746 pod_ready.go:82] duration metric: took 2.006718822s for pod "coredns-7c65d6cfc9-4jmcl" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:21.602442   75746 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tzhgh" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:24.629623   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:22:24.629860   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:22:23.610993   75746 pod_ready.go:103] pod "coredns-7c65d6cfc9-tzhgh" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:24.117785   75746 pod_ready.go:93] pod "coredns-7c65d6cfc9-tzhgh" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:24.117813   75746 pod_ready.go:82] duration metric: took 2.51536279s for pod "coredns-7c65d6cfc9-tzhgh" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:24.117824   75746 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:24.124800   75746 pod_ready.go:93] pod "etcd-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:24.124823   75746 pod_ready.go:82] duration metric: took 6.990353ms for pod "etcd-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:24.124832   75746 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:24.131040   75746 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:24.131061   75746 pod_ready.go:82] duration metric: took 6.222286ms for pod "kube-apiserver-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:24.131070   75746 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:26.137404   75746 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:26.637414   75746 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:26.637440   75746 pod_ready.go:82] duration metric: took 2.506362827s for pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:26.637452   75746 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hclwt" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:26.641759   75746 pod_ready.go:93] pod "kube-proxy-hclwt" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:26.641781   75746 pod_ready.go:82] duration metric: took 4.323262ms for pod "kube-proxy-hclwt" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:26.641793   75746 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:28.148731   75746 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:28.148753   75746 pod_ready.go:82] duration metric: took 1.50695195s for pod "kube-scheduler-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:28.148761   75746 pod_ready.go:39] duration metric: took 8.562005978s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:22:28.148776   75746 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:22:28.148825   75746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:22:28.165983   75746 api_server.go:72] duration metric: took 8.916515972s to wait for apiserver process to appear ...
	I1204 21:22:28.166013   75746 api_server.go:88] waiting for apiserver healthz status ...
	I1204 21:22:28.166034   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:22:28.170244   75746 api_server.go:279] https://192.168.50.171:8444/healthz returned 200:
	ok
	I1204 21:22:28.171215   75746 api_server.go:141] control plane version: v1.31.2
	I1204 21:22:28.171245   75746 api_server.go:131] duration metric: took 5.223023ms to wait for apiserver health ...
	I1204 21:22:28.171257   75746 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 21:22:28.177524   75746 system_pods.go:59] 9 kube-system pods found
	I1204 21:22:28.177548   75746 system_pods.go:61] "coredns-7c65d6cfc9-4jmcl" [e8d193d2-0374-43a5-addd-96cdee963cc9] Running
	I1204 21:22:28.177553   75746 system_pods.go:61] "coredns-7c65d6cfc9-tzhgh" [aafae17b-5a47-4a70-bc80-94cbbca8fe38] Running
	I1204 21:22:28.177557   75746 system_pods.go:61] "etcd-default-k8s-diff-port-439360" [e4293118-8718-4722-b6b6-722896a605e9] Running
	I1204 21:22:28.177560   75746 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-439360" [71be94bb-bd89-4f40-85eb-0a672f29d959] Running
	I1204 21:22:28.177563   75746 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-439360" [85946631-ff2a-4203-800d-00a23a3c3408] Running
	I1204 21:22:28.177567   75746 system_pods.go:61] "kube-proxy-hclwt" [eef6c093-2186-437b-9a13-c8bafbcb4f78] Running
	I1204 21:22:28.177570   75746 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-439360" [0ed74c15-2c48-4a62-8bbf-0f2a272bb119] Running
	I1204 21:22:28.177577   75746 system_pods.go:61] "metrics-server-6867b74b74-v88hj" [9b6c696c-e110-4d53-98c9-41069407b45b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:22:28.177582   75746 system_pods.go:61] "storage-provisioner" [aac88490-a422-4889-bff4-b180638846cf] Running
	I1204 21:22:28.177592   75746 system_pods.go:74] duration metric: took 6.322477ms to wait for pod list to return data ...
	I1204 21:22:28.177605   75746 default_sa.go:34] waiting for default service account to be created ...
	I1204 21:22:28.180243   75746 default_sa.go:45] found service account: "default"
	I1204 21:22:28.180262   75746 default_sa.go:55] duration metric: took 2.648929ms for default service account to be created ...
	I1204 21:22:28.180270   75746 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 21:22:28.309199   75746 system_pods.go:86] 9 kube-system pods found
	I1204 21:22:28.309229   75746 system_pods.go:89] "coredns-7c65d6cfc9-4jmcl" [e8d193d2-0374-43a5-addd-96cdee963cc9] Running
	I1204 21:22:28.309237   75746 system_pods.go:89] "coredns-7c65d6cfc9-tzhgh" [aafae17b-5a47-4a70-bc80-94cbbca8fe38] Running
	I1204 21:22:28.309244   75746 system_pods.go:89] "etcd-default-k8s-diff-port-439360" [e4293118-8718-4722-b6b6-722896a605e9] Running
	I1204 21:22:28.309251   75746 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-439360" [71be94bb-bd89-4f40-85eb-0a672f29d959] Running
	I1204 21:22:28.309257   75746 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-439360" [85946631-ff2a-4203-800d-00a23a3c3408] Running
	I1204 21:22:28.309263   75746 system_pods.go:89] "kube-proxy-hclwt" [eef6c093-2186-437b-9a13-c8bafbcb4f78] Running
	I1204 21:22:28.309269   75746 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-439360" [0ed74c15-2c48-4a62-8bbf-0f2a272bb119] Running
	I1204 21:22:28.309283   75746 system_pods.go:89] "metrics-server-6867b74b74-v88hj" [9b6c696c-e110-4d53-98c9-41069407b45b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:22:28.309295   75746 system_pods.go:89] "storage-provisioner" [aac88490-a422-4889-bff4-b180638846cf] Running
	I1204 21:22:28.309307   75746 system_pods.go:126] duration metric: took 129.030872ms to wait for k8s-apps to be running ...
	I1204 21:22:28.309320   75746 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 21:22:28.309379   75746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:22:28.324307   75746 system_svc.go:56] duration metric: took 14.979432ms WaitForService to wait for kubelet
	I1204 21:22:28.324336   75746 kubeadm.go:582] duration metric: took 9.074873675s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 21:22:28.324353   75746 node_conditions.go:102] verifying NodePressure condition ...
	I1204 21:22:28.507218   75746 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 21:22:28.507245   75746 node_conditions.go:123] node cpu capacity is 2
	I1204 21:22:28.507256   75746 node_conditions.go:105] duration metric: took 182.898538ms to run NodePressure ...
	I1204 21:22:28.507268   75746 start.go:241] waiting for startup goroutines ...
	I1204 21:22:28.507277   75746 start.go:246] waiting for cluster config update ...
	I1204 21:22:28.507291   75746 start.go:255] writing updated cluster config ...
	I1204 21:22:28.507595   75746 ssh_runner.go:195] Run: rm -f paused
	I1204 21:22:28.556033   75746 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1204 21:22:28.557819   75746 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-439360" cluster and "default" namespace by default
	I1204 21:22:37.891653   75012 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.132950428s)
	I1204 21:22:37.891741   75012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:22:37.906656   75012 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:22:37.915649   75012 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:22:37.925588   75012 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:22:37.925609   75012 kubeadm.go:157] found existing configuration files:
	
	I1204 21:22:37.925655   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:22:37.934524   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:22:37.934575   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:22:37.943390   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:22:37.951745   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:22:37.951797   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:22:37.960501   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:22:37.969208   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:22:37.969254   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:22:37.978350   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:22:37.986861   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:22:37.986930   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:22:37.995584   75012 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 21:22:38.047149   75012 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1204 21:22:38.047224   75012 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 21:22:38.155964   75012 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 21:22:38.156086   75012 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 21:22:38.156215   75012 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1204 21:22:38.164743   75012 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 21:22:38.166662   75012 out.go:235]   - Generating certificates and keys ...
	I1204 21:22:38.166755   75012 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 21:22:38.166837   75012 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 21:22:38.166935   75012 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1204 21:22:38.167045   75012 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1204 21:22:38.167154   75012 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1204 21:22:38.167230   75012 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1204 21:22:38.167325   75012 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1204 21:22:38.167446   75012 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1204 21:22:38.169398   75012 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1204 21:22:38.169495   75012 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1204 21:22:38.169530   75012 kubeadm.go:310] [certs] Using the existing "sa" key
	I1204 21:22:38.169602   75012 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 21:22:38.350215   75012 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 21:22:38.469586   75012 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1204 21:22:38.636991   75012 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 21:22:38.883785   75012 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 21:22:39.014632   75012 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 21:22:39.015041   75012 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 21:22:39.017806   75012 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 21:22:39.019631   75012 out.go:235]   - Booting up control plane ...
	I1204 21:22:39.019760   75012 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 21:22:39.019831   75012 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 21:22:39.019895   75012 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 21:22:39.037352   75012 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 21:22:39.044419   75012 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 21:22:39.044489   75012 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 21:22:39.166636   75012 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1204 21:22:39.166782   75012 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1204 21:22:39.667748   75012 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.068181ms
	I1204 21:22:39.667876   75012 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1204 21:22:44.669497   75012 kubeadm.go:310] [api-check] The API server is healthy after 5.001931003s
	I1204 21:22:44.682282   75012 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1204 21:22:44.700056   75012 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1204 21:22:44.745563   75012 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1204 21:22:44.745769   75012 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-534766 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1204 21:22:44.761584   75012 kubeadm.go:310] [bootstrap-token] Using token: 5m2kn8.vv0jgg4evfqo8hls
	I1204 21:22:44.762802   75012 out.go:235]   - Configuring RBAC rules ...
	I1204 21:22:44.762937   75012 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1204 21:22:44.770305   75012 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1204 21:22:44.787448   75012 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1204 21:22:44.799071   75012 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1204 21:22:44.809995   75012 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1204 21:22:44.818871   75012 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1204 21:22:45.078465   75012 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1204 21:22:45.505737   75012 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1204 21:22:46.080197   75012 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1204 21:22:46.082632   75012 kubeadm.go:310] 
	I1204 21:22:46.082728   75012 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1204 21:22:46.082738   75012 kubeadm.go:310] 
	I1204 21:22:46.082852   75012 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1204 21:22:46.082877   75012 kubeadm.go:310] 
	I1204 21:22:46.082913   75012 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1204 21:22:46.083002   75012 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1204 21:22:46.083084   75012 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1204 21:22:46.083094   75012 kubeadm.go:310] 
	I1204 21:22:46.083188   75012 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1204 21:22:46.083198   75012 kubeadm.go:310] 
	I1204 21:22:46.083270   75012 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1204 21:22:46.083280   75012 kubeadm.go:310] 
	I1204 21:22:46.083365   75012 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1204 21:22:46.083505   75012 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1204 21:22:46.083603   75012 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1204 21:22:46.083612   75012 kubeadm.go:310] 
	I1204 21:22:46.083722   75012 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1204 21:22:46.083831   75012 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1204 21:22:46.083844   75012 kubeadm.go:310] 
	I1204 21:22:46.083955   75012 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5m2kn8.vv0jgg4evfqo8hls \
	I1204 21:22:46.084090   75012 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 \
	I1204 21:22:46.084132   75012 kubeadm.go:310] 	--control-plane 
	I1204 21:22:46.084143   75012 kubeadm.go:310] 
	I1204 21:22:46.084271   75012 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1204 21:22:46.084285   75012 kubeadm.go:310] 
	I1204 21:22:46.084381   75012 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5m2kn8.vv0jgg4evfqo8hls \
	I1204 21:22:46.084540   75012 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 
	I1204 21:22:46.085547   75012 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 21:22:46.085585   75012 cni.go:84] Creating CNI manager for ""
	I1204 21:22:46.085601   75012 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:22:46.087147   75012 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 21:22:46.088445   75012 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 21:22:46.099655   75012 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1204 21:22:46.118054   75012 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 21:22:46.118167   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:46.118199   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-534766 minikube.k8s.io/updated_at=2024_12_04T21_22_46_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59 minikube.k8s.io/name=no-preload-534766 minikube.k8s.io/primary=true
	I1204 21:22:46.314262   75012 ops.go:34] apiserver oom_adj: -16
	I1204 21:22:46.314459   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:46.814509   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:47.315367   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:47.814575   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:48.314571   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:48.815342   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:49.315465   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:49.814618   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:49.924235   75012 kubeadm.go:1113] duration metric: took 3.806131818s to wait for elevateKubeSystemPrivileges
	I1204 21:22:49.924281   75012 kubeadm.go:394] duration metric: took 4m59.352297592s to StartCluster
	I1204 21:22:49.924304   75012 settings.go:142] acquiring lock: {Name:mk51df5708ef0b8fe125ead566b8d3e857234e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:22:49.924410   75012 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:22:49.926022   75012 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/kubeconfig: {Name:mk338cb7deb77a607d0c199d94a556bdfd19bef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:22:49.926265   75012 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.174 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 21:22:49.926337   75012 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 21:22:49.926474   75012 addons.go:69] Setting storage-provisioner=true in profile "no-preload-534766"
	I1204 21:22:49.926483   75012 config.go:182] Loaded profile config "no-preload-534766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:22:49.926496   75012 addons.go:234] Setting addon storage-provisioner=true in "no-preload-534766"
	W1204 21:22:49.926508   75012 addons.go:243] addon storage-provisioner should already be in state true
	I1204 21:22:49.926505   75012 addons.go:69] Setting default-storageclass=true in profile "no-preload-534766"
	I1204 21:22:49.926531   75012 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-534766"
	I1204 21:22:49.926546   75012 host.go:66] Checking if "no-preload-534766" exists ...
	I1204 21:22:49.926541   75012 addons.go:69] Setting metrics-server=true in profile "no-preload-534766"
	I1204 21:22:49.926576   75012 addons.go:234] Setting addon metrics-server=true in "no-preload-534766"
	W1204 21:22:49.926590   75012 addons.go:243] addon metrics-server should already be in state true
	I1204 21:22:49.926625   75012 host.go:66] Checking if "no-preload-534766" exists ...
	I1204 21:22:49.926930   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:49.926954   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:49.926970   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:49.926955   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:49.926987   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:49.927051   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:49.927780   75012 out.go:177] * Verifying Kubernetes components...
	I1204 21:22:49.929162   75012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:22:49.942741   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46577
	I1204 21:22:49.943289   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:49.943868   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:22:49.943895   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:49.944251   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:49.944864   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:49.944913   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:49.946622   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34645
	I1204 21:22:49.946621   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40019
	I1204 21:22:49.947114   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:49.947241   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:49.947744   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:22:49.947765   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:49.947882   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:22:49.947906   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:49.948103   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:49.948432   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:49.948645   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetState
	I1204 21:22:49.948791   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:49.948837   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:49.952327   75012 addons.go:234] Setting addon default-storageclass=true in "no-preload-534766"
	W1204 21:22:49.952346   75012 addons.go:243] addon default-storageclass should already be in state true
	I1204 21:22:49.952369   75012 host.go:66] Checking if "no-preload-534766" exists ...
	I1204 21:22:49.952601   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:49.952630   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:49.961451   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46229
	I1204 21:22:49.961850   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:49.962443   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:22:49.962464   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:49.962850   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:49.963027   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetState
	I1204 21:22:49.964897   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:22:49.968079   75012 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1204 21:22:49.968412   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34167
	I1204 21:22:49.968752   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34915
	I1204 21:22:49.968941   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:49.969158   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:49.969388   75012 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1204 21:22:49.969407   75012 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1204 21:22:49.969427   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:22:49.969542   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:22:49.969565   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:49.969628   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:22:49.969642   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:49.969957   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:49.970113   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetState
	I1204 21:22:49.970170   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:49.970694   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:49.970730   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:49.972032   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:22:49.973317   75012 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:22:49.973481   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:22:49.973907   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:22:49.973928   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:22:49.974221   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:22:49.974387   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:22:49.974545   75012 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:22:49.974560   75012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 21:22:49.974577   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:22:49.974673   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:22:49.974849   75012 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:22:49.977139   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:22:49.977453   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:22:49.977472   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:22:49.977620   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:22:49.977765   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:22:49.977906   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:22:49.978085   75012 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:22:50.003630   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33713
	I1204 21:22:50.004065   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:50.004600   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:22:50.004624   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:50.004954   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:50.005133   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetState
	I1204 21:22:50.006743   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:22:50.006952   75012 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 21:22:50.006969   75012 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 21:22:50.006986   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:22:50.009741   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:22:50.010114   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:22:50.010169   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:22:50.010347   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:22:50.010522   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:22:50.010699   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:22:50.010868   75012 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:22:50.114285   75012 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:22:50.136173   75012 node_ready.go:35] waiting up to 6m0s for node "no-preload-534766" to be "Ready" ...
	I1204 21:22:50.146304   75012 node_ready.go:49] node "no-preload-534766" has status "Ready":"True"
	I1204 21:22:50.146333   75012 node_ready.go:38] duration metric: took 10.115051ms for node "no-preload-534766" to be "Ready" ...
	I1204 21:22:50.146344   75012 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:22:50.156660   75012 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:50.205793   75012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:22:50.222880   75012 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1204 21:22:50.222904   75012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1204 21:22:50.259999   75012 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1204 21:22:50.260022   75012 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1204 21:22:50.271653   75012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 21:22:50.295271   75012 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 21:22:50.295301   75012 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1204 21:22:50.371390   75012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 21:22:50.923825   75012 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:50.923850   75012 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:22:50.923889   75012 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:50.923916   75012 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:22:50.924309   75012 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:50.924319   75012 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:50.924327   75012 main.go:141] libmachine: (no-preload-534766) DBG | Closing plugin on server side
	I1204 21:22:50.924328   75012 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:50.924335   75012 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:50.924347   75012 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:50.924354   75012 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:22:50.924357   75012 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:50.924367   75012 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:22:50.924574   75012 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:50.924590   75012 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:50.926209   75012 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:50.926224   75012 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:50.926254   75012 main.go:141] libmachine: (no-preload-534766) DBG | Closing plugin on server side
	I1204 21:22:50.943266   75012 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:50.943283   75012 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:22:50.943613   75012 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:50.943626   75012 main.go:141] libmachine: (no-preload-534766) DBG | Closing plugin on server side
	I1204 21:22:50.943633   75012 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:51.434449   75012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.063018778s)
	I1204 21:22:51.434501   75012 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:51.434516   75012 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:22:51.434935   75012 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:51.434961   75012 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:51.434973   75012 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:51.434982   75012 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:22:51.434989   75012 main.go:141] libmachine: (no-preload-534766) DBG | Closing plugin on server side
	I1204 21:22:51.435279   75012 main.go:141] libmachine: (no-preload-534766) DBG | Closing plugin on server side
	I1204 21:22:51.435314   75012 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:51.435327   75012 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:51.435338   75012 addons.go:475] Verifying addon metrics-server=true in "no-preload-534766"
	I1204 21:22:51.437110   75012 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1204 21:22:51.438430   75012 addons.go:510] duration metric: took 1.51209932s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1204 21:22:52.163208   75012 pod_ready.go:103] pod "etcd-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:54.166268   75012 pod_ready.go:103] pod "etcd-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:55.663847   75012 pod_ready.go:93] pod "etcd-no-preload-534766" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:55.663873   75012 pod_ready.go:82] duration metric: took 5.507184169s for pod "etcd-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:55.663883   75012 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:57.669991   75012 pod_ready.go:103] pod "kube-apiserver-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:58.669891   75012 pod_ready.go:93] pod "kube-apiserver-no-preload-534766" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:58.669913   75012 pod_ready.go:82] duration metric: took 3.006024495s for pod "kube-apiserver-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:58.669923   75012 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:58.674408   75012 pod_ready.go:93] pod "kube-controller-manager-no-preload-534766" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:58.674431   75012 pod_ready.go:82] duration metric: took 4.502433ms for pod "kube-controller-manager-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:58.674441   75012 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:58.678736   75012 pod_ready.go:93] pod "kube-scheduler-no-preload-534766" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:58.678761   75012 pod_ready.go:82] duration metric: took 4.313122ms for pod "kube-scheduler-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:58.678771   75012 pod_ready.go:39] duration metric: took 8.532413995s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:22:58.678791   75012 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:22:58.678847   75012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:22:58.695623   75012 api_server.go:72] duration metric: took 8.769328765s to wait for apiserver process to appear ...
	I1204 21:22:58.695654   75012 api_server.go:88] waiting for apiserver healthz status ...
	I1204 21:22:58.695675   75012 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I1204 21:22:58.699892   75012 api_server.go:279] https://192.168.61.174:8443/healthz returned 200:
	ok
	I1204 21:22:58.700759   75012 api_server.go:141] control plane version: v1.31.2
	I1204 21:22:58.700776   75012 api_server.go:131] duration metric: took 5.115741ms to wait for apiserver health ...
	I1204 21:22:58.700783   75012 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 21:22:58.705822   75012 system_pods.go:59] 9 kube-system pods found
	I1204 21:22:58.705845   75012 system_pods.go:61] "coredns-7c65d6cfc9-9llkt" [adc8b2dd-be84-4314-ae3c-cfe94cc78489] Running
	I1204 21:22:58.705850   75012 system_pods.go:61] "coredns-7c65d6cfc9-zq88f" [b4b818bf-71d4-4522-8d3f-15c878eb7e37] Running
	I1204 21:22:58.705854   75012 system_pods.go:61] "etcd-no-preload-534766" [dfebd8ce-bf78-4219-a860-7e0275651a27] Running
	I1204 21:22:58.705858   75012 system_pods.go:61] "kube-apiserver-no-preload-534766" [6d8632fe-4a7d-48f0-9de5-bbc8efa027cd] Running
	I1204 21:22:58.705862   75012 system_pods.go:61] "kube-controller-manager-no-preload-534766" [1fcb311c-17ee-40ab-8126-3f9aeb565c23] Running
	I1204 21:22:58.705865   75012 system_pods.go:61] "kube-proxy-z2n69" [ea030ab5-1808-4037-b153-e751d66f3882] Running
	I1204 21:22:58.705870   75012 system_pods.go:61] "kube-scheduler-no-preload-534766" [ee51023a-795d-49f9-ae03-535038decf43] Running
	I1204 21:22:58.705876   75012 system_pods.go:61] "metrics-server-6867b74b74-24lj8" [1e4467c4-301a-4820-ab89-e1f0ba78f62d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:22:58.705883   75012 system_pods.go:61] "storage-provisioner" [38fa420a-4372-41b4-9853-64796baa65d9] Running
	I1204 21:22:58.705888   75012 system_pods.go:74] duration metric: took 5.100414ms to wait for pod list to return data ...
	I1204 21:22:58.705897   75012 default_sa.go:34] waiting for default service account to be created ...
	I1204 21:22:58.708729   75012 default_sa.go:45] found service account: "default"
	I1204 21:22:58.708746   75012 default_sa.go:55] duration metric: took 2.844325ms for default service account to be created ...
	I1204 21:22:58.708753   75012 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 21:22:58.713584   75012 system_pods.go:86] 9 kube-system pods found
	I1204 21:22:58.713605   75012 system_pods.go:89] "coredns-7c65d6cfc9-9llkt" [adc8b2dd-be84-4314-ae3c-cfe94cc78489] Running
	I1204 21:22:58.713610   75012 system_pods.go:89] "coredns-7c65d6cfc9-zq88f" [b4b818bf-71d4-4522-8d3f-15c878eb7e37] Running
	I1204 21:22:58.713614   75012 system_pods.go:89] "etcd-no-preload-534766" [dfebd8ce-bf78-4219-a860-7e0275651a27] Running
	I1204 21:22:58.713617   75012 system_pods.go:89] "kube-apiserver-no-preload-534766" [6d8632fe-4a7d-48f0-9de5-bbc8efa027cd] Running
	I1204 21:22:58.713623   75012 system_pods.go:89] "kube-controller-manager-no-preload-534766" [1fcb311c-17ee-40ab-8126-3f9aeb565c23] Running
	I1204 21:22:58.713627   75012 system_pods.go:89] "kube-proxy-z2n69" [ea030ab5-1808-4037-b153-e751d66f3882] Running
	I1204 21:22:58.713630   75012 system_pods.go:89] "kube-scheduler-no-preload-534766" [ee51023a-795d-49f9-ae03-535038decf43] Running
	I1204 21:22:58.713636   75012 system_pods.go:89] "metrics-server-6867b74b74-24lj8" [1e4467c4-301a-4820-ab89-e1f0ba78f62d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:22:58.713640   75012 system_pods.go:89] "storage-provisioner" [38fa420a-4372-41b4-9853-64796baa65d9] Running
	I1204 21:22:58.713649   75012 system_pods.go:126] duration metric: took 4.892413ms to wait for k8s-apps to be running ...
	I1204 21:22:58.713655   75012 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 21:22:58.713694   75012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:22:58.727642   75012 system_svc.go:56] duration metric: took 13.980011ms WaitForService to wait for kubelet
	I1204 21:22:58.727667   75012 kubeadm.go:582] duration metric: took 8.80137456s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 21:22:58.727683   75012 node_conditions.go:102] verifying NodePressure condition ...
	I1204 21:22:58.730401   75012 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 21:22:58.730424   75012 node_conditions.go:123] node cpu capacity is 2
	I1204 21:22:58.730437   75012 node_conditions.go:105] duration metric: took 2.748662ms to run NodePressure ...
	I1204 21:22:58.730450   75012 start.go:241] waiting for startup goroutines ...
	I1204 21:22:58.730460   75012 start.go:246] waiting for cluster config update ...
	I1204 21:22:58.730472   75012 start.go:255] writing updated cluster config ...
	I1204 21:22:58.730773   75012 ssh_runner.go:195] Run: rm -f paused
	I1204 21:22:58.776977   75012 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1204 21:22:58.778544   75012 out.go:177] * Done! kubectl is now configured to use "no-preload-534766" cluster and "default" namespace by default
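	The successful "no-preload-534766" start above can be spot-checked by hand; a minimal sketch, assuming the same profile name and a kubeconfig already pointing at that cluster (these commands are illustrative and were not run as part of the test):
	
		kubectl get nodes                            # node should report Ready, as the node_ready lines above show
		kubectl -n kube-system get pods              # coredns, etcd, kube-apiserver, kube-proxy, metrics-server, storage-provisioner
		minikube -p no-preload-534766 addons list    # storage-provisioner, default-storageclass, metrics-server enabled per the log
	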
	I1204 21:23:04.631416   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:23:04.631710   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:23:04.631725   75464 kubeadm.go:310] 
	I1204 21:23:04.631799   75464 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1204 21:23:04.631878   75464 kubeadm.go:310] 		timed out waiting for the condition
	I1204 21:23:04.631890   75464 kubeadm.go:310] 
	I1204 21:23:04.631961   75464 kubeadm.go:310] 	This error is likely caused by:
	I1204 21:23:04.632036   75464 kubeadm.go:310] 		- The kubelet is not running
	I1204 21:23:04.632198   75464 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1204 21:23:04.632215   75464 kubeadm.go:310] 
	I1204 21:23:04.632383   75464 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1204 21:23:04.632461   75464 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1204 21:23:04.632516   75464 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1204 21:23:04.632528   75464 kubeadm.go:310] 
	I1204 21:23:04.632675   75464 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1204 21:23:04.632796   75464 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1204 21:23:04.632815   75464 kubeadm.go:310] 
	I1204 21:23:04.632974   75464 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1204 21:23:04.633074   75464 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1204 21:23:04.633176   75464 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1204 21:23:04.633304   75464 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1204 21:23:04.633322   75464 kubeadm.go:310] 
	I1204 21:23:04.634981   75464 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 21:23:04.635061   75464 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1204 21:23:04.635118   75464 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1204 21:23:04.635222   75464 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1204 21:23:04.635272   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1204 21:23:05.103010   75464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:23:05.116784   75464 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:23:05.126269   75464 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:23:05.126290   75464 kubeadm.go:157] found existing configuration files:
	
	I1204 21:23:05.126331   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:23:05.134867   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:23:05.134919   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:23:05.143682   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:23:05.151701   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:23:05.151766   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:23:05.160033   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:23:05.168125   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:23:05.168175   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:23:05.176976   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:23:05.185549   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:23:05.185592   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:23:05.194156   75464 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 21:23:05.394966   75464 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 21:25:01.433781   75464 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1204 21:25:01.433941   75464 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1204 21:25:01.434011   75464 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1204 21:25:01.434069   75464 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 21:25:01.434170   75464 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 21:25:01.434315   75464 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 21:25:01.434431   75464 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1204 21:25:01.434514   75464 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 21:25:01.436334   75464 out.go:235]   - Generating certificates and keys ...
	I1204 21:25:01.436408   75464 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 21:25:01.436482   75464 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 21:25:01.436550   75464 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1204 21:25:01.436644   75464 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1204 21:25:01.436745   75464 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1204 21:25:01.436819   75464 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1204 21:25:01.436885   75464 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1204 21:25:01.436942   75464 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1204 21:25:01.437004   75464 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1204 21:25:01.437068   75464 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1204 21:25:01.437101   75464 kubeadm.go:310] [certs] Using the existing "sa" key
	I1204 21:25:01.437150   75464 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 21:25:01.437193   75464 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 21:25:01.437239   75464 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 21:25:01.437309   75464 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 21:25:01.437370   75464 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 21:25:01.437458   75464 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 21:25:01.437568   75464 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 21:25:01.437636   75464 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 21:25:01.437701   75464 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 21:25:01.439149   75464 out.go:235]   - Booting up control plane ...
	I1204 21:25:01.439251   75464 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 21:25:01.439347   75464 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 21:25:01.439457   75464 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 21:25:01.439531   75464 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 21:25:01.439672   75464 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1204 21:25:01.439736   75464 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1204 21:25:01.439798   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:25:01.439966   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:25:01.440044   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:25:01.440205   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:25:01.440259   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:25:01.440487   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:25:01.440578   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:25:01.440768   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:25:01.440835   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:25:01.440991   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:25:01.441006   75464 kubeadm.go:310] 
	I1204 21:25:01.441043   75464 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1204 21:25:01.441078   75464 kubeadm.go:310] 		timed out waiting for the condition
	I1204 21:25:01.441084   75464 kubeadm.go:310] 
	I1204 21:25:01.441114   75464 kubeadm.go:310] 	This error is likely caused by:
	I1204 21:25:01.441143   75464 kubeadm.go:310] 		- The kubelet is not running
	I1204 21:25:01.441233   75464 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1204 21:25:01.441242   75464 kubeadm.go:310] 
	I1204 21:25:01.441335   75464 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1204 21:25:01.441369   75464 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1204 21:25:01.441403   75464 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1204 21:25:01.441410   75464 kubeadm.go:310] 
	I1204 21:25:01.441503   75464 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1204 21:25:01.441602   75464 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1204 21:25:01.441610   75464 kubeadm.go:310] 
	I1204 21:25:01.441705   75464 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1204 21:25:01.441779   75464 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1204 21:25:01.441857   75464 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1204 21:25:01.441934   75464 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1204 21:25:01.441961   75464 kubeadm.go:310] 
	I1204 21:25:01.442011   75464 kubeadm.go:394] duration metric: took 8m2.105750462s to StartCluster
	I1204 21:25:01.442050   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:25:01.442119   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:25:01.484552   75464 cri.go:89] found id: ""
	I1204 21:25:01.484582   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.484606   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:25:01.484614   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:25:01.484681   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:25:01.517972   75464 cri.go:89] found id: ""
	I1204 21:25:01.517999   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.518007   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:25:01.518013   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:25:01.518078   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:25:01.555068   75464 cri.go:89] found id: ""
	I1204 21:25:01.555096   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.555104   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:25:01.555110   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:25:01.555163   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:25:01.595425   75464 cri.go:89] found id: ""
	I1204 21:25:01.595456   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.595478   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:25:01.595486   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:25:01.595553   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:25:01.634608   75464 cri.go:89] found id: ""
	I1204 21:25:01.634638   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.634648   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:25:01.634656   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:25:01.634721   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:25:01.668685   75464 cri.go:89] found id: ""
	I1204 21:25:01.668724   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.668737   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:25:01.668746   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:25:01.668810   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:25:01.701497   75464 cri.go:89] found id: ""
	I1204 21:25:01.701531   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.701543   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:25:01.701550   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:25:01.701612   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:25:01.735347   75464 cri.go:89] found id: ""
	I1204 21:25:01.735401   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.735413   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:25:01.735429   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:25:01.735448   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:25:01.785951   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:25:01.785994   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:25:01.800795   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:25:01.800822   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:25:01.878636   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:25:01.878663   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:25:01.878675   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:25:01.982526   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:25:01.982563   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1204 21:25:02.037006   75464 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1204 21:25:02.037075   75464 out.go:270] * 
	W1204 21:25:02.037160   75464 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1204 21:25:02.037181   75464 out.go:270] * 
	W1204 21:25:02.038380   75464 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 21:25:02.041871   75464 out.go:201] 
	W1204 21:25:02.042973   75464 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1204 21:25:02.043035   75464 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1204 21:25:02.043065   75464 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1204 21:25:02.044498   75464 out.go:201] 
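The suggestion above amounts to re-running the affected profile with the kubelet cgroup driver pinned to systemd. A minimal sketch of that invocation, assuming the profile name (no-preload-534766) and the CRI-O runtime reported elsewhere in this log, followed by the kubelet checks the kubeadm output already recommends; these commands are illustrative and were not part of the captured run:

	minikube start -p no-preload-534766 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd
	systemctl status kubelet      # confirm the kubelet service is active
	journalctl -xeu kubelet       # inspect recent kubelet errors if it is not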
	
	
	==> CRI-O <==
	Dec 04 21:32:00 no-preload-534766 crio[714]: time="2024-12-04 21:32:00.603122733Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347920603104609,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=02d0044b-606e-4d85-96de-3bf419c80862 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:32:00 no-preload-534766 crio[714]: time="2024-12-04 21:32:00.603586739Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3cb713db-39c7-41b8-a4b5-a3ba07b3188e name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:32:00 no-preload-534766 crio[714]: time="2024-12-04 21:32:00.603636631Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3cb713db-39c7-41b8-a4b5-a3ba07b3188e name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:32:00 no-preload-534766 crio[714]: time="2024-12-04 21:32:00.603868725Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:76b3bd9ced1a719020189538a52bb5d0e0dd96bc909668dce4de1f9559f8b177,PodSandboxId:078786f92c1479654e245d21f17c4ed6585bfbbda77bff979f7ae9d326fb4f00,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733347371820032913,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38fa420a-4372-41b4-9853-64796baa65d9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3e6bc78060dc3d235fa2f136687007c8e923f81ac9457d1754a5faa54454a7e,PodSandboxId:ce2f25300d1826c51f021d1f55de433604c1ad3c83aee87be4a2fbf1d59af16f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733347371728068753,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zq88f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b818bf-71d4-4522-8d3f-15c878eb7e37,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64f833f4d007b1c57865048e2a12c28847749174860f85404dbb41db81394275,PodSandboxId:0ab18f470092e23df8cf385175c799a1b2b79a2324c1410600d8690a81238c48,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733347371687565212,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9llkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad
c8b2dd-be84-4314-ae3c-cfe94cc78489,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1063f60c44f77fe8422fb7b8c58af808cd51326cadbfa1dec9788a3e7485f6f3,PodSandboxId:83993fb0701cfa12e721b4ce3605387d10703cc038b9ba1afbcb7c3b8425ad26,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1733347371117010395,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z2n69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea030ab5-1808-4037-b153-e751d66f3882,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aad3bddff8032b4261c542d604f9fe24e2117ff0269a46d40cf665831b023c72,PodSandboxId:9c05ec903ba442eeda86096944b3cf7505edb03d755538f91b2f5373c2a31f5d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733347360246392094,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-534766,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c31d01d4c56c390227f2a5f70b72c51e,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58643fa31271933e70c61aa2c5f670ae8f2a6dc3a78ad9895cdca533a42e6fb2,PodSandboxId:8cf25439a775f5bf77fafa1511165707deae689d8c2b0e51224dfaf22cf659c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:173334736020862
0083,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-534766,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5066dfdfdc05d0bda5ea458e76e9e5,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cc79ab1f098441f66f0be96a5499595f2a4be05949a69b5e1eb2ebb797a679b,PodSandboxId:7c8e73432a85d16d1953a189c84a88c891e809a432d9ee21df058c91a50f3587,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733347360195193083,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-534766,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deb222f98e84815c8d0a8723a7bc263d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6131d95d46bd41cbaa97e7c6785d42c3edbd005b6afc99136c97d800c6f3f04e,PodSandboxId:a28b243929a7cc77c82693665dd501adfe6ce9cb410ff98b3039d8e9f122e08a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733347360186546923,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-534766,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096b1d9d76854415439286e3bb547dee,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3b4418ff9e994158450ed38887a6f43e999c88bab9970ab59f29c971431055d,PodSandboxId:c7ebe613bada24d0565bbfce662e0df572978e50bfc3cfc3e6a9a2f2178f2446,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733347072448285682,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-534766,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deb222f98e84815c8d0a8723a7bc263d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3cb713db-39c7-41b8-a4b5-a3ba07b3188e name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:32:00 no-preload-534766 crio[714]: time="2024-12-04 21:32:00.636248611Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1e138b16-e725-4750-91f1-dbd3d7fd6fbc name=/runtime.v1.RuntimeService/Version
	Dec 04 21:32:00 no-preload-534766 crio[714]: time="2024-12-04 21:32:00.636323633Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1e138b16-e725-4750-91f1-dbd3d7fd6fbc name=/runtime.v1.RuntimeService/Version
	Dec 04 21:32:00 no-preload-534766 crio[714]: time="2024-12-04 21:32:00.637381642Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e5437ef7-278c-441d-8284-53dae3f9102f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:32:00 no-preload-534766 crio[714]: time="2024-12-04 21:32:00.637692479Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347920637673607,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e5437ef7-278c-441d-8284-53dae3f9102f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:32:00 no-preload-534766 crio[714]: time="2024-12-04 21:32:00.638270734Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dec66294-baa5-4bae-bbfb-39529680c7d7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:32:00 no-preload-534766 crio[714]: time="2024-12-04 21:32:00.638326051Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dec66294-baa5-4bae-bbfb-39529680c7d7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:32:00 no-preload-534766 crio[714]: time="2024-12-04 21:32:00.638528622Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:76b3bd9ced1a719020189538a52bb5d0e0dd96bc909668dce4de1f9559f8b177,PodSandboxId:078786f92c1479654e245d21f17c4ed6585bfbbda77bff979f7ae9d326fb4f00,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733347371820032913,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38fa420a-4372-41b4-9853-64796baa65d9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3e6bc78060dc3d235fa2f136687007c8e923f81ac9457d1754a5faa54454a7e,PodSandboxId:ce2f25300d1826c51f021d1f55de433604c1ad3c83aee87be4a2fbf1d59af16f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733347371728068753,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zq88f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b818bf-71d4-4522-8d3f-15c878eb7e37,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64f833f4d007b1c57865048e2a12c28847749174860f85404dbb41db81394275,PodSandboxId:0ab18f470092e23df8cf385175c799a1b2b79a2324c1410600d8690a81238c48,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733347371687565212,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9llkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad
c8b2dd-be84-4314-ae3c-cfe94cc78489,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1063f60c44f77fe8422fb7b8c58af808cd51326cadbfa1dec9788a3e7485f6f3,PodSandboxId:83993fb0701cfa12e721b4ce3605387d10703cc038b9ba1afbcb7c3b8425ad26,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1733347371117010395,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z2n69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea030ab5-1808-4037-b153-e751d66f3882,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aad3bddff8032b4261c542d604f9fe24e2117ff0269a46d40cf665831b023c72,PodSandboxId:9c05ec903ba442eeda86096944b3cf7505edb03d755538f91b2f5373c2a31f5d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733347360246392094,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-534766,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c31d01d4c56c390227f2a5f70b72c51e,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58643fa31271933e70c61aa2c5f670ae8f2a6dc3a78ad9895cdca533a42e6fb2,PodSandboxId:8cf25439a775f5bf77fafa1511165707deae689d8c2b0e51224dfaf22cf659c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:173334736020862
0083,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-534766,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5066dfdfdc05d0bda5ea458e76e9e5,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cc79ab1f098441f66f0be96a5499595f2a4be05949a69b5e1eb2ebb797a679b,PodSandboxId:7c8e73432a85d16d1953a189c84a88c891e809a432d9ee21df058c91a50f3587,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733347360195193083,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-534766,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deb222f98e84815c8d0a8723a7bc263d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6131d95d46bd41cbaa97e7c6785d42c3edbd005b6afc99136c97d800c6f3f04e,PodSandboxId:a28b243929a7cc77c82693665dd501adfe6ce9cb410ff98b3039d8e9f122e08a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733347360186546923,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-534766,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096b1d9d76854415439286e3bb547dee,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3b4418ff9e994158450ed38887a6f43e999c88bab9970ab59f29c971431055d,PodSandboxId:c7ebe613bada24d0565bbfce662e0df572978e50bfc3cfc3e6a9a2f2178f2446,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733347072448285682,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-534766,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deb222f98e84815c8d0a8723a7bc263d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dec66294-baa5-4bae-bbfb-39529680c7d7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:32:00 no-preload-534766 crio[714]: time="2024-12-04 21:32:00.674792038Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bade5856-b548-42c7-8dd1-e052166f76a0 name=/runtime.v1.RuntimeService/Version
	Dec 04 21:32:00 no-preload-534766 crio[714]: time="2024-12-04 21:32:00.674862218Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bade5856-b548-42c7-8dd1-e052166f76a0 name=/runtime.v1.RuntimeService/Version
	Dec 04 21:32:00 no-preload-534766 crio[714]: time="2024-12-04 21:32:00.676030712Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4d9de5b3-12cd-4a41-bcb7-4ee3a8b7528f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:32:00 no-preload-534766 crio[714]: time="2024-12-04 21:32:00.676384398Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347920676354456,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4d9de5b3-12cd-4a41-bcb7-4ee3a8b7528f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:32:00 no-preload-534766 crio[714]: time="2024-12-04 21:32:00.676900574Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8e2e3f73-72a2-45bd-8574-bd7641471c52 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:32:00 no-preload-534766 crio[714]: time="2024-12-04 21:32:00.676950686Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8e2e3f73-72a2-45bd-8574-bd7641471c52 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:32:00 no-preload-534766 crio[714]: time="2024-12-04 21:32:00.677133275Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:76b3bd9ced1a719020189538a52bb5d0e0dd96bc909668dce4de1f9559f8b177,PodSandboxId:078786f92c1479654e245d21f17c4ed6585bfbbda77bff979f7ae9d326fb4f00,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733347371820032913,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38fa420a-4372-41b4-9853-64796baa65d9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3e6bc78060dc3d235fa2f136687007c8e923f81ac9457d1754a5faa54454a7e,PodSandboxId:ce2f25300d1826c51f021d1f55de433604c1ad3c83aee87be4a2fbf1d59af16f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733347371728068753,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zq88f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b818bf-71d4-4522-8d3f-15c878eb7e37,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64f833f4d007b1c57865048e2a12c28847749174860f85404dbb41db81394275,PodSandboxId:0ab18f470092e23df8cf385175c799a1b2b79a2324c1410600d8690a81238c48,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733347371687565212,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9llkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad
c8b2dd-be84-4314-ae3c-cfe94cc78489,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1063f60c44f77fe8422fb7b8c58af808cd51326cadbfa1dec9788a3e7485f6f3,PodSandboxId:83993fb0701cfa12e721b4ce3605387d10703cc038b9ba1afbcb7c3b8425ad26,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1733347371117010395,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z2n69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea030ab5-1808-4037-b153-e751d66f3882,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aad3bddff8032b4261c542d604f9fe24e2117ff0269a46d40cf665831b023c72,PodSandboxId:9c05ec903ba442eeda86096944b3cf7505edb03d755538f91b2f5373c2a31f5d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733347360246392094,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-534766,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c31d01d4c56c390227f2a5f70b72c51e,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58643fa31271933e70c61aa2c5f670ae8f2a6dc3a78ad9895cdca533a42e6fb2,PodSandboxId:8cf25439a775f5bf77fafa1511165707deae689d8c2b0e51224dfaf22cf659c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:173334736020862
0083,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-534766,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5066dfdfdc05d0bda5ea458e76e9e5,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cc79ab1f098441f66f0be96a5499595f2a4be05949a69b5e1eb2ebb797a679b,PodSandboxId:7c8e73432a85d16d1953a189c84a88c891e809a432d9ee21df058c91a50f3587,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733347360195193083,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-534766,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deb222f98e84815c8d0a8723a7bc263d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6131d95d46bd41cbaa97e7c6785d42c3edbd005b6afc99136c97d800c6f3f04e,PodSandboxId:a28b243929a7cc77c82693665dd501adfe6ce9cb410ff98b3039d8e9f122e08a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733347360186546923,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-534766,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096b1d9d76854415439286e3bb547dee,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3b4418ff9e994158450ed38887a6f43e999c88bab9970ab59f29c971431055d,PodSandboxId:c7ebe613bada24d0565bbfce662e0df572978e50bfc3cfc3e6a9a2f2178f2446,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733347072448285682,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-534766,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deb222f98e84815c8d0a8723a7bc263d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8e2e3f73-72a2-45bd-8574-bd7641471c52 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:32:00 no-preload-534766 crio[714]: time="2024-12-04 21:32:00.706224331Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=34f2aafe-b6fb-4dc0-856d-e8c4abdc7ff9 name=/runtime.v1.RuntimeService/Version
	Dec 04 21:32:00 no-preload-534766 crio[714]: time="2024-12-04 21:32:00.706285645Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=34f2aafe-b6fb-4dc0-856d-e8c4abdc7ff9 name=/runtime.v1.RuntimeService/Version
	Dec 04 21:32:00 no-preload-534766 crio[714]: time="2024-12-04 21:32:00.707478818Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f7115943-93d8-4940-bbfd-180178250818 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:32:00 no-preload-534766 crio[714]: time="2024-12-04 21:32:00.707832824Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347920707808285,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f7115943-93d8-4940-bbfd-180178250818 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:32:00 no-preload-534766 crio[714]: time="2024-12-04 21:32:00.708571830Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c039d2bd-6463-44da-a97b-e3e2c6f43e46 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:32:00 no-preload-534766 crio[714]: time="2024-12-04 21:32:00.708758169Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c039d2bd-6463-44da-a97b-e3e2c6f43e46 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:32:00 no-preload-534766 crio[714]: time="2024-12-04 21:32:00.709048336Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:76b3bd9ced1a719020189538a52bb5d0e0dd96bc909668dce4de1f9559f8b177,PodSandboxId:078786f92c1479654e245d21f17c4ed6585bfbbda77bff979f7ae9d326fb4f00,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733347371820032913,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38fa420a-4372-41b4-9853-64796baa65d9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3e6bc78060dc3d235fa2f136687007c8e923f81ac9457d1754a5faa54454a7e,PodSandboxId:ce2f25300d1826c51f021d1f55de433604c1ad3c83aee87be4a2fbf1d59af16f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733347371728068753,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zq88f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b818bf-71d4-4522-8d3f-15c878eb7e37,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64f833f4d007b1c57865048e2a12c28847749174860f85404dbb41db81394275,PodSandboxId:0ab18f470092e23df8cf385175c799a1b2b79a2324c1410600d8690a81238c48,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733347371687565212,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9llkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad
c8b2dd-be84-4314-ae3c-cfe94cc78489,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1063f60c44f77fe8422fb7b8c58af808cd51326cadbfa1dec9788a3e7485f6f3,PodSandboxId:83993fb0701cfa12e721b4ce3605387d10703cc038b9ba1afbcb7c3b8425ad26,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1733347371117010395,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z2n69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea030ab5-1808-4037-b153-e751d66f3882,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aad3bddff8032b4261c542d604f9fe24e2117ff0269a46d40cf665831b023c72,PodSandboxId:9c05ec903ba442eeda86096944b3cf7505edb03d755538f91b2f5373c2a31f5d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733347360246392094,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-534766,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c31d01d4c56c390227f2a5f70b72c51e,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58643fa31271933e70c61aa2c5f670ae8f2a6dc3a78ad9895cdca533a42e6fb2,PodSandboxId:8cf25439a775f5bf77fafa1511165707deae689d8c2b0e51224dfaf22cf659c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:173334736020862
0083,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-534766,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5066dfdfdc05d0bda5ea458e76e9e5,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cc79ab1f098441f66f0be96a5499595f2a4be05949a69b5e1eb2ebb797a679b,PodSandboxId:7c8e73432a85d16d1953a189c84a88c891e809a432d9ee21df058c91a50f3587,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733347360195193083,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-534766,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deb222f98e84815c8d0a8723a7bc263d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6131d95d46bd41cbaa97e7c6785d42c3edbd005b6afc99136c97d800c6f3f04e,PodSandboxId:a28b243929a7cc77c82693665dd501adfe6ce9cb410ff98b3039d8e9f122e08a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733347360186546923,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-534766,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096b1d9d76854415439286e3bb547dee,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3b4418ff9e994158450ed38887a6f43e999c88bab9970ab59f29c971431055d,PodSandboxId:c7ebe613bada24d0565bbfce662e0df572978e50bfc3cfc3e6a9a2f2178f2446,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733347072448285682,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-534766,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deb222f98e84815c8d0a8723a7bc263d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c039d2bd-6463-44da-a97b-e3e2c6f43e46 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	76b3bd9ced1a7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   078786f92c147       storage-provisioner
	b3e6bc78060dc       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   ce2f25300d182       coredns-7c65d6cfc9-zq88f
	64f833f4d007b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   0ab18f470092e       coredns-7c65d6cfc9-9llkt
	1063f60c44f77       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   9 minutes ago       Running             kube-proxy                0                   83993fb0701cf       kube-proxy-z2n69
	aad3bddff8032       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   9 minutes ago       Running             kube-controller-manager   2                   9c05ec903ba44       kube-controller-manager-no-preload-534766
	58643fa312719       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   9 minutes ago       Running             kube-scheduler            2                   8cf25439a775f       kube-scheduler-no-preload-534766
	6cc79ab1f0984       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   9 minutes ago       Running             kube-apiserver            2                   7c8e73432a85d       kube-apiserver-no-preload-534766
	6131d95d46bd4       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   a28b243929a7c       etcd-no-preload-534766
	b3b4418ff9e99       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   14 minutes ago      Exited              kube-apiserver            1                   c7ebe613bada2       kube-apiserver-no-preload-534766
	
	
	==> coredns [64f833f4d007b1c57865048e2a12c28847749174860f85404dbb41db81394275] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [b3e6bc78060dc3d235fa2f136687007c8e923f81ac9457d1754a5faa54454a7e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               no-preload-534766
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-534766
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59
	                    minikube.k8s.io/name=no-preload-534766
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_04T21_22_46_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 21:22:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-534766
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 21:31:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 21:28:02 +0000   Wed, 04 Dec 2024 21:22:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 21:28:02 +0000   Wed, 04 Dec 2024 21:22:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 21:28:02 +0000   Wed, 04 Dec 2024 21:22:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 21:28:02 +0000   Wed, 04 Dec 2024 21:22:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.174
	  Hostname:    no-preload-534766
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3d48f9a54064422eb8005869b2034bb5
	  System UUID:                3d48f9a5-4064-422e-b800-5869b2034bb5
	  Boot ID:                    80129728-9a7d-44f2-b7ef-36ede7cef093
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-9llkt                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m10s
	  kube-system                 coredns-7c65d6cfc9-zq88f                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m10s
	  kube-system                 etcd-no-preload-534766                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m15s
	  kube-system                 kube-apiserver-no-preload-534766             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-controller-manager-no-preload-534766    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-proxy-z2n69                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	  kube-system                 kube-scheduler-no-preload-534766             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 metrics-server-6867b74b74-24lj8              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m9s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m8s   kube-proxy       
	  Normal  Starting                 9m15s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m15s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m15s  kubelet          Node no-preload-534766 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m15s  kubelet          Node no-preload-534766 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m15s  kubelet          Node no-preload-534766 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m11s  node-controller  Node no-preload-534766 event: Registered Node no-preload-534766 in Controller
	
	
	==> dmesg <==
	[  +0.057234] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.046080] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.030899] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.024714] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.621995] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.150754] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.068103] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065168] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.212692] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +0.132880] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +0.285285] systemd-fstab-generator[704]: Ignoring "noauto" option for root device
	[ +15.316815] systemd-fstab-generator[1311]: Ignoring "noauto" option for root device
	[  +0.058393] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.498156] systemd-fstab-generator[1429]: Ignoring "noauto" option for root device
	[  +4.584693] kauditd_printk_skb: 100 callbacks suppressed
	[Dec 4 21:18] kauditd_printk_skb: 85 callbacks suppressed
	[Dec 4 21:22] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.255384] systemd-fstab-generator[3121]: Ignoring "noauto" option for root device
	[  +4.585208] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.481448] systemd-fstab-generator[3446]: Ignoring "noauto" option for root device
	[  +4.857162] systemd-fstab-generator[3552]: Ignoring "noauto" option for root device
	[  +0.097052] kauditd_printk_skb: 14 callbacks suppressed
	[Dec 4 21:24] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [6131d95d46bd41cbaa97e7c6785d42c3edbd005b6afc99136c97d800c6f3f04e] <==
	{"level":"info","ts":"2024-12-04T21:22:40.543704Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.174:2380"}
	{"level":"info","ts":"2024-12-04T21:22:40.543773Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.174:2380"}
	{"level":"info","ts":"2024-12-04T21:22:40.543845Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-04T21:22:40.544173Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"2d81e878ac6904a4","initial-advertise-peer-urls":["https://192.168.61.174:2380"],"listen-peer-urls":["https://192.168.61.174:2380"],"advertise-client-urls":["https://192.168.61.174:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.174:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-04T21:22:40.544345Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-04T21:22:40.684816Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2d81e878ac6904a4 is starting a new election at term 1"}
	{"level":"info","ts":"2024-12-04T21:22:40.684868Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2d81e878ac6904a4 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-12-04T21:22:40.684894Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2d81e878ac6904a4 received MsgPreVoteResp from 2d81e878ac6904a4 at term 1"}
	{"level":"info","ts":"2024-12-04T21:22:40.684905Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2d81e878ac6904a4 became candidate at term 2"}
	{"level":"info","ts":"2024-12-04T21:22:40.684911Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2d81e878ac6904a4 received MsgVoteResp from 2d81e878ac6904a4 at term 2"}
	{"level":"info","ts":"2024-12-04T21:22:40.684920Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2d81e878ac6904a4 became leader at term 2"}
	{"level":"info","ts":"2024-12-04T21:22:40.684927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2d81e878ac6904a4 elected leader 2d81e878ac6904a4 at term 2"}
	{"level":"info","ts":"2024-12-04T21:22:40.689093Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-04T21:22:40.692054Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"2d81e878ac6904a4","local-member-attributes":"{Name:no-preload-534766 ClientURLs:[https://192.168.61.174:2379]}","request-path":"/0/members/2d81e878ac6904a4/attributes","cluster-id":"98a332d8ef0073ef","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-04T21:22:40.692105Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-04T21:22:40.692444Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-04T21:22:40.693453Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-04T21:22:40.694231Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.174:2379"}
	{"level":"info","ts":"2024-12-04T21:22:40.694876Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"98a332d8ef0073ef","local-member-id":"2d81e878ac6904a4","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-04T21:22:40.694964Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-04T21:22:40.695007Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-04T21:22:40.695269Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-04T21:22:40.695291Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-04T21:22:40.698901Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-04T21:22:40.700602Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 21:32:01 up 14 min,  0 users,  load average: 0.08, 0.24, 0.17
	Linux no-preload-534766 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6cc79ab1f098441f66f0be96a5499595f2a4be05949a69b5e1eb2ebb797a679b] <==
	W1204 21:27:43.624327       1 handler_proxy.go:99] no RequestInfo found in the context
	E1204 21:27:43.624408       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1204 21:27:43.625383       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1204 21:27:43.625448       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1204 21:28:43.625492       1 handler_proxy.go:99] no RequestInfo found in the context
	W1204 21:28:43.625844       1 handler_proxy.go:99] no RequestInfo found in the context
	E1204 21:28:43.625933       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1204 21:28:43.625952       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1204 21:28:43.627310       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1204 21:28:43.627387       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1204 21:30:43.628436       1 handler_proxy.go:99] no RequestInfo found in the context
	E1204 21:30:43.628583       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1204 21:30:43.628647       1 handler_proxy.go:99] no RequestInfo found in the context
	E1204 21:30:43.628794       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1204 21:30:43.629927       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1204 21:30:43.629981       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [b3b4418ff9e994158450ed38887a6f43e999c88bab9970ab59f29c971431055d] <==
	W1204 21:22:32.662449       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:32.697477       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:32.707208       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:32.735048       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:32.888521       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:32.900112       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:32.910812       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:32.917452       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:32.948866       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:32.967444       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:33.029951       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:33.050607       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:33.092421       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:33.125369       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:33.139231       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:33.160049       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:33.247254       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:33.288385       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:33.367164       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:34.375157       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:35.376590       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:36.923090       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:37.156071       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:37.397993       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:37.423062       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [aad3bddff8032b4261c542d604f9fe24e2117ff0269a46d40cf665831b023c72] <==
	E1204 21:26:49.601380       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:26:50.150624       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:27:19.611657       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:27:20.159286       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:27:49.617997       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:27:50.177091       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1204 21:28:02.683499       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-534766"
	E1204 21:28:19.624665       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:28:20.185078       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1204 21:28:41.412560       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="374.867µs"
	E1204 21:28:49.632239       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:28:50.193377       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1204 21:28:54.400475       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="112.578µs"
	E1204 21:29:19.639560       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:29:20.201460       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:29:49.646790       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:29:50.215660       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:30:19.653289       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:30:20.224638       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:30:49.660611       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:30:50.233043       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:31:19.668484       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:31:20.240552       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:31:49.676406       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:31:50.260999       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [1063f60c44f77fe8422fb7b8c58af808cd51326cadbfa1dec9788a3e7485f6f3] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1204 21:22:52.025892       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1204 21:22:52.040052       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.174"]
	E1204 21:22:52.040146       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1204 21:22:52.095497       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1204 21:22:52.095610       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1204 21:22:52.095655       1 server_linux.go:169] "Using iptables Proxier"
	I1204 21:22:52.097998       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1204 21:22:52.098341       1 server.go:483] "Version info" version="v1.31.2"
	I1204 21:22:52.098389       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 21:22:52.099975       1 config.go:199] "Starting service config controller"
	I1204 21:22:52.100035       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1204 21:22:52.100097       1 config.go:105] "Starting endpoint slice config controller"
	I1204 21:22:52.100114       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1204 21:22:52.100655       1 config.go:328] "Starting node config controller"
	I1204 21:22:52.104535       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1204 21:22:52.200929       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1204 21:22:52.201059       1 shared_informer.go:320] Caches are synced for service config
	I1204 21:22:52.205288       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [58643fa31271933e70c61aa2c5f670ae8f2a6dc3a78ad9895cdca533a42e6fb2] <==
	W1204 21:22:42.713479       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1204 21:22:42.713693       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1204 21:22:42.713890       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1204 21:22:42.713993       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 21:22:43.552124       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1204 21:22:43.552173       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1204 21:22:43.635367       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1204 21:22:43.635417       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 21:22:43.652424       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1204 21:22:43.652562       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 21:22:43.672299       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1204 21:22:43.672446       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 21:22:43.713971       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1204 21:22:43.714080       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1204 21:22:43.741151       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1204 21:22:43.741409       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 21:22:43.825359       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1204 21:22:43.825454       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1204 21:22:43.849536       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1204 21:22:43.849660       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 21:22:43.900262       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1204 21:22:43.900399       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 21:22:43.922958       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1204 21:22:43.923088       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1204 21:22:46.672653       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 04 21:30:52 no-preload-534766 kubelet[3453]: E1204 21:30:52.384673    3453 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-24lj8" podUID="1e4467c4-301a-4820-ab89-e1f0ba78f62d"
	Dec 04 21:30:55 no-preload-534766 kubelet[3453]: E1204 21:30:55.538103    3453 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347855537282464,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:30:55 no-preload-534766 kubelet[3453]: E1204 21:30:55.538145    3453 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347855537282464,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:31:04 no-preload-534766 kubelet[3453]: E1204 21:31:04.384995    3453 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-24lj8" podUID="1e4467c4-301a-4820-ab89-e1f0ba78f62d"
	Dec 04 21:31:05 no-preload-534766 kubelet[3453]: E1204 21:31:05.539452    3453 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347865539105219,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:31:05 no-preload-534766 kubelet[3453]: E1204 21:31:05.539486    3453 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347865539105219,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:31:15 no-preload-534766 kubelet[3453]: E1204 21:31:15.542280    3453 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347875541617507,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:31:15 no-preload-534766 kubelet[3453]: E1204 21:31:15.543072    3453 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347875541617507,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:31:17 no-preload-534766 kubelet[3453]: E1204 21:31:17.384508    3453 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-24lj8" podUID="1e4467c4-301a-4820-ab89-e1f0ba78f62d"
	Dec 04 21:31:25 no-preload-534766 kubelet[3453]: E1204 21:31:25.545137    3453 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347885544329306,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:31:25 no-preload-534766 kubelet[3453]: E1204 21:31:25.545210    3453 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347885544329306,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:31:31 no-preload-534766 kubelet[3453]: E1204 21:31:31.384438    3453 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-24lj8" podUID="1e4467c4-301a-4820-ab89-e1f0ba78f62d"
	Dec 04 21:31:35 no-preload-534766 kubelet[3453]: E1204 21:31:35.547649    3453 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347895547009260,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:31:35 no-preload-534766 kubelet[3453]: E1204 21:31:35.548164    3453 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347895547009260,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:31:43 no-preload-534766 kubelet[3453]: E1204 21:31:43.384120    3453 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-24lj8" podUID="1e4467c4-301a-4820-ab89-e1f0ba78f62d"
	Dec 04 21:31:45 no-preload-534766 kubelet[3453]: E1204 21:31:45.406692    3453 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 04 21:31:45 no-preload-534766 kubelet[3453]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 04 21:31:45 no-preload-534766 kubelet[3453]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 04 21:31:45 no-preload-534766 kubelet[3453]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 04 21:31:45 no-preload-534766 kubelet[3453]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 04 21:31:45 no-preload-534766 kubelet[3453]: E1204 21:31:45.552342    3453 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347905551918536,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:31:45 no-preload-534766 kubelet[3453]: E1204 21:31:45.552378    3453 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347905551918536,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:31:55 no-preload-534766 kubelet[3453]: E1204 21:31:55.553447    3453 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347915553174276,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:31:55 no-preload-534766 kubelet[3453]: E1204 21:31:55.553484    3453 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733347915553174276,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:31:56 no-preload-534766 kubelet[3453]: E1204 21:31:56.384153    3453 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-24lj8" podUID="1e4467c4-301a-4820-ab89-e1f0ba78f62d"
	
	
	==> storage-provisioner [76b3bd9ced1a719020189538a52bb5d0e0dd96bc909668dce4de1f9559f8b177] <==
	I1204 21:22:52.040646       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1204 21:22:52.052022       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1204 21:22:52.052085       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1204 21:22:52.064452       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1204 21:22:52.064892       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-534766_c7c58aff-5f40-4ff8-b1bf-dd8c5a8db5ab!
	I1204 21:22:52.067548       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"17dbdf22-1124-494f-b401-be5667445614", APIVersion:"v1", ResourceVersion:"396", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-534766_c7c58aff-5f40-4ff8-b1bf-dd8c5a8db5ab became leader
	I1204 21:22:52.165664       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-534766_c7c58aff-5f40-4ff8-b1bf-dd8c5a8db5ab!
	

                                                
                                                
-- /stdout --
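
The kubelet log above shows the single non-running pod, metrics-server-6867b74b74-24lj8, stuck in ImagePullBackOff on fake.domain/registry.k8s.io/echoserver:1.4, an unreachable registry host, so the pull can never succeed. A minimal way to confirm which image the addon is configured to pull, assuming the Deployment is named metrics-server (the ReplicaSet metrics-server-6867b74b74 in the controller-manager log implies that name) and using the same kubeconfig context as above, is:

# Print the image the metrics-server Deployment asks kubelet to pull
kubectl --context no-preload-534766 -n kube-system get deployment metrics-server \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
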
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-534766 -n no-preload-534766
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-534766 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-24lj8
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-534766 describe pod metrics-server-6867b74b74-24lj8
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-534766 describe pod metrics-server-6867b74b74-24lj8: exit status 1 (65.137441ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-24lj8" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-534766 describe pod metrics-server-6867b74b74-24lj8: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.93s)
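
Note that the post-mortem describe at helpers_test.go:277 was run without a namespace flag, so kubectl looked for the pod in the default namespace and returned NotFound, even though the field-selector listing had just reported it under kube-system. A sketch of the same queries against the same context, with the namespace spelled out, shows the pending pod directly:

# Cluster-wide list of pods not in the Running phase (mirrors the harness query above)
kubectl --context no-preload-534766 get po -A --field-selector=status.phase!=Running
# Describe the pod in its actual namespace; without -n kube-system this returns NotFound
kubectl --context no-preload-534766 -n kube-system describe pod metrics-server-6867b74b74-24lj8
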

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
E1204 21:25:15.224624   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
E1204 21:25:47.025354   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/enable-default-cni-272234/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
E1204 21:26:01.281785   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kindnet-272234/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
E1204 21:26:31.244477   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/calico-272234/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
E1204 21:26:38.288488   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
E1204 21:26:47.224726   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/bridge-272234/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
E1204 21:27:10.087542   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/enable-default-cni-272234/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
E1204 21:27:26.275981   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/functional-763517/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
E1204 21:27:40.753978   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/auto-272234/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
E1204 21:27:54.308732   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/calico-272234/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
E1204 21:27:55.975303   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
E1204 21:28:10.290872   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/bridge-272234/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
E1204 21:28:29.010820   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/custom-flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
E1204 21:29:38.216363   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kindnet-272234/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
E1204 21:29:52.902777   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
E1204 21:30:47.024652   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/enable-default-cni-272234/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
    (the identical warning was logged 43 more times in a row; 192.168.72.180:8443 kept refusing connections)
E1204 21:31:31.244616   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/calico-272234/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
    (the identical warning was logged 15 more times in a row; 192.168.72.180:8443 kept refusing connections)
E1204 21:31:47.224728   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/bridge-272234/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
    (the identical warning was logged 38 more times in a row; 192.168.72.180:8443 kept refusing connections)
E1204 21:32:26.276181   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/functional-763517/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
    (the identical warning was logged 14 more times in a row; 192.168.72.180:8443 kept refusing connections)
E1204 21:32:40.754516   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/auto-272234/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
    (the identical warning was logged 47 more times in a row; 192.168.72.180:8443 kept refusing connections)
E1204 21:33:29.010583   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/custom-flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
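The repeated helpers_test.go:329 warnings above come from a poll loop: the test lists pods in the kubernetes-dashboard namespace with the label selector k8s-app=kubernetes-dashboard every few seconds, treats list errors such as "connection refused" as transient, and only gives up when the 9m0s deadline ("context deadline exceeded") is reached. The Go sketch below illustrates that pattern with client-go; it is an illustration under stated assumptions, not the suite's actual helper (the waitForPodsRunning name and the 3-second interval are invented for the example).

// Illustrative sketch only (assumed helper, not minikube's helpers_test.go):
// poll a namespace for pods matching a label selector until they are Running
// or the deadline expires, logging transient list errors instead of failing.
package waitutil

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForPodsRunning(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 3*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				// Mirrors the WARNING lines above: a dead apiserver is treated as
				// transient and the loop keeps polling until the outer deadline.
				fmt.Printf("WARNING: pod list for %q %q returned: %v\n", ns, selector, err)
				return false, nil
			}
			if len(pods.Items) == 0 {
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil
				}
			}
			return true, nil
		})
}

In this run every list call failed with connection refused because the apiserver on 192.168.72.180:8443 never came back after the stop/start, so the condition never succeeded and the deadline expired.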
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-082859 -n old-k8s-version-082859
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-082859 -n old-k8s-version-082859: exit status 2 (213.834812ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-082859" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
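Both status probes in this failure path (the {{.APIServer}} check above and the {{.Host}} check in the post-mortem below) shell out to the minikube binary and deliberately tolerate a non-zero exit: exit status 2 with "Stopped" or "Running" on stdout is recorded as "may be ok" instead of aborting log collection. A rough Go sketch of that pattern follows, assuming a hypothetical apiServerState helper; it is not the test suite's real code.

// Assumed example (not the suite's actual helper): run
// "minikube status --format={{.APIServer}}" for a profile and report the
// printed state together with the exit code instead of treating a non-zero
// exit as fatal.
package statusprobe

import (
	"bytes"
	"errors"
	"os/exec"
	"strings"
)

func apiServerState(minikubeBin, profile string) (state string, exitCode int, err error) {
	cmd := exec.Command(minikubeBin, "status", "--format={{.APIServer}}", "-p", profile, "-n", profile)
	var out bytes.Buffer
	cmd.Stdout = &out
	runErr := cmd.Run()
	state = strings.TrimSpace(out.String())

	var exitErr *exec.ExitError
	if errors.As(runErr, &exitErr) {
		// Exit status 2 in the report above still came with a usable "Stopped"
		// on stdout, so surface the state and the code rather than an error.
		return state, exitErr.ExitCode(), nil
	}
	return state, 0, runErr
}

With the apiserver reported as Stopped, the harness skips the kubectl-based checks and goes straight to collecting "minikube logs -n 25", which is the dump that follows.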
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-082859 -n old-k8s-version-082859
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-082859 -n old-k8s-version-082859: exit status 2 (214.731207ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-082859 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-082859 logs -n 25: (1.465489421s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-272234 sudo                                  | bridge-272234                | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo                                  | bridge-272234                | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo find                             | bridge-272234                | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo crio                             | bridge-272234                | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-272234                                       | bridge-272234                | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	| start   | -p embed-certs-566991                                  | embed-certs-566991           | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p pause-998149                                        | pause-998149                 | jenkins | v1.34.0 | 04 Dec 24 21:08 UTC | 04 Dec 24 21:08 UTC |
	| delete  | -p                                                     | disable-driver-mounts-455559 | jenkins | v1.34.0 | 04 Dec 24 21:08 UTC | 04 Dec 24 21:08 UTC |
	|         | disable-driver-mounts-455559                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-439360 | jenkins | v1.34.0 | 04 Dec 24 21:08 UTC | 04 Dec 24 21:10 UTC |
	|         | default-k8s-diff-port-439360                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-534766             | no-preload-534766            | jenkins | v1.34.0 | 04 Dec 24 21:08 UTC | 04 Dec 24 21:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-534766                                   | no-preload-534766            | jenkins | v1.34.0 | 04 Dec 24 21:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-566991            | embed-certs-566991           | jenkins | v1.34.0 | 04 Dec 24 21:09 UTC | 04 Dec 24 21:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-566991                                  | embed-certs-566991           | jenkins | v1.34.0 | 04 Dec 24 21:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-439360  | default-k8s-diff-port-439360 | jenkins | v1.34.0 | 04 Dec 24 21:10 UTC | 04 Dec 24 21:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-439360 | jenkins | v1.34.0 | 04 Dec 24 21:10 UTC |                     |
	|         | default-k8s-diff-port-439360                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-082859        | old-k8s-version-082859       | jenkins | v1.34.0 | 04 Dec 24 21:10 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-534766                  | no-preload-534766            | jenkins | v1.34.0 | 04 Dec 24 21:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-534766                                   | no-preload-534766            | jenkins | v1.34.0 | 04 Dec 24 21:11 UTC | 04 Dec 24 21:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-566991                 | embed-certs-566991           | jenkins | v1.34.0 | 04 Dec 24 21:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-566991                                  | embed-certs-566991           | jenkins | v1.34.0 | 04 Dec 24 21:11 UTC | 04 Dec 24 21:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-082859                              | old-k8s-version-082859       | jenkins | v1.34.0 | 04 Dec 24 21:12 UTC | 04 Dec 24 21:12 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-082859             | old-k8s-version-082859       | jenkins | v1.34.0 | 04 Dec 24 21:12 UTC | 04 Dec 24 21:12 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-082859                              | old-k8s-version-082859       | jenkins | v1.34.0 | 04 Dec 24 21:12 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-439360       | default-k8s-diff-port-439360 | jenkins | v1.34.0 | 04 Dec 24 21:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-439360 | jenkins | v1.34.0 | 04 Dec 24 21:13 UTC | 04 Dec 24 21:22 UTC |
	|         | default-k8s-diff-port-439360                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 21:13:02
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 21:13:02.655619   75746 out.go:345] Setting OutFile to fd 1 ...
	I1204 21:13:02.655710   75746 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 21:13:02.655718   75746 out.go:358] Setting ErrFile to fd 2...
	I1204 21:13:02.655723   75746 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 21:13:02.655904   75746 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19985-10581/.minikube/bin
	I1204 21:13:02.656414   75746 out.go:352] Setting JSON to false
	I1204 21:13:02.657264   75746 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6933,"bootTime":1733339850,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 21:13:02.657344   75746 start.go:139] virtualization: kvm guest
	I1204 21:13:02.659898   75746 out.go:177] * [default-k8s-diff-port-439360] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1204 21:13:02.661012   75746 notify.go:220] Checking for updates...
	I1204 21:13:02.661028   75746 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 21:13:02.662162   75746 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 21:13:02.663271   75746 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:13:02.664514   75746 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 21:13:02.665529   75746 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1204 21:13:02.666701   75746 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 21:13:02.668263   75746 config.go:182] Loaded profile config "default-k8s-diff-port-439360": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:13:02.668646   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:13:02.668709   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:13:02.683257   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37479
	I1204 21:13:02.683722   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:13:02.684324   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:13:02.684360   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:13:02.684680   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:13:02.684851   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:13:02.685048   75746 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 21:13:02.685299   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:13:02.685328   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:13:02.699267   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40025
	I1204 21:13:02.699662   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:13:02.700044   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:13:02.700063   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:13:02.700339   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:13:02.700502   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:13:02.730706   75746 out.go:177] * Using the kvm2 driver based on existing profile
	I1204 21:13:02.731942   75746 start.go:297] selected driver: kvm2
	I1204 21:13:02.731957   75746 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-439360 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-439360 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.171 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:13:02.732071   75746 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 21:13:02.732753   75746 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 21:13:02.732853   75746 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19985-10581/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1204 21:13:02.748280   75746 install.go:137] /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1204 21:13:02.748697   75746 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 21:13:02.748732   75746 cni.go:84] Creating CNI manager for ""
	I1204 21:13:02.748788   75746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:13:02.748838   75746 start.go:340] cluster config:
	{Name:default-k8s-diff-port-439360 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-439360 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.171 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:13:02.748971   75746 iso.go:125] acquiring lock: {Name:mk5fb0f3f6da76e6cd812291a551e1592ef2c232 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 21:13:02.751358   75746 out.go:177] * Starting "default-k8s-diff-port-439360" primary control-plane node in "default-k8s-diff-port-439360" cluster
	I1204 21:13:03.539616   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:02.752513   75746 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 21:13:02.752549   75746 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1204 21:13:02.752560   75746 cache.go:56] Caching tarball of preloaded images
	I1204 21:13:02.752626   75746 preload.go:172] Found /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 21:13:02.752637   75746 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 21:13:02.752726   75746 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/config.json ...
	I1204 21:13:02.752901   75746 start.go:360] acquireMachinesLock for default-k8s-diff-port-439360: {Name:mkf124e8b45170ae95981b24944344de6899c5b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 21:13:09.623601   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:12.691589   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:18.771784   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:21.843699   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:27.923631   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:30.995665   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:37.075628   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:40.147824   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:46.227603   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:49.299635   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:55.379675   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:58.451727   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:04.531657   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:07.603570   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:13.683599   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:16.755604   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:22.835628   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:25.907600   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:31.987633   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:35.059714   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:41.139700   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:44.211695   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:50.291687   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:53.363678   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:59.443630   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:02.515651   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:08.595690   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:11.667672   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:17.747590   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:20.819699   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:26.899677   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:29.971649   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:36.051731   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:39.123728   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:45.203625   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:48.275712   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:54.355623   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:57.427671   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:16:03.507649   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:16:06.579624   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:16:09.584575   75137 start.go:364] duration metric: took 4m27.4731498s to acquireMachinesLock for "embed-certs-566991"
	I1204 21:16:09.584639   75137 start.go:96] Skipping create...Using existing machine configuration
	I1204 21:16:09.584651   75137 fix.go:54] fixHost starting: 
	I1204 21:16:09.584970   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:09.585018   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:09.600429   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33355
	I1204 21:16:09.600893   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:09.601299   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:09.601322   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:09.601748   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:09.601944   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:09.602098   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetState
	I1204 21:16:09.603776   75137 fix.go:112] recreateIfNeeded on embed-certs-566991: state=Stopped err=<nil>
	I1204 21:16:09.603821   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	W1204 21:16:09.603991   75137 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 21:16:09.605822   75137 out.go:177] * Restarting existing kvm2 VM for "embed-certs-566991" ...
	I1204 21:16:09.606942   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Start
	I1204 21:16:09.607117   75137 main.go:141] libmachine: (embed-certs-566991) Ensuring networks are active...
	I1204 21:16:09.607926   75137 main.go:141] libmachine: (embed-certs-566991) Ensuring network default is active
	I1204 21:16:09.608276   75137 main.go:141] libmachine: (embed-certs-566991) Ensuring network mk-embed-certs-566991 is active
	I1204 21:16:09.608593   75137 main.go:141] libmachine: (embed-certs-566991) Getting domain xml...
	I1204 21:16:09.609171   75137 main.go:141] libmachine: (embed-certs-566991) Creating domain...
	I1204 21:16:10.794377   75137 main.go:141] libmachine: (embed-certs-566991) Waiting to get IP...
	I1204 21:16:10.795237   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:10.795646   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:10.795708   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:10.795615   76397 retry.go:31] will retry after 263.432891ms: waiting for machine to come up
	I1204 21:16:11.061505   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:11.062003   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:11.062025   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:11.061954   76397 retry.go:31] will retry after 341.684416ms: waiting for machine to come up
	I1204 21:16:11.405560   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:11.405994   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:11.406017   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:11.405951   76397 retry.go:31] will retry after 341.63707ms: waiting for machine to come up
	I1204 21:16:11.749439   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:11.749826   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:11.749850   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:11.749778   76397 retry.go:31] will retry after 490.222458ms: waiting for machine to come up
	I1204 21:16:09.581932   75012 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 21:16:09.581966   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetMachineName
	I1204 21:16:09.582325   75012 buildroot.go:166] provisioning hostname "no-preload-534766"
	I1204 21:16:09.582349   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetMachineName
	I1204 21:16:09.582554   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:16:09.584435   75012 machine.go:96] duration metric: took 4m37.423343939s to provisionDockerMachine
	I1204 21:16:09.584470   75012 fix.go:56] duration metric: took 4m37.445106567s for fixHost
	I1204 21:16:09.584480   75012 start.go:83] releasing machines lock for "no-preload-534766", held for 4m37.445131562s
	W1204 21:16:09.584500   75012 start.go:714] error starting host: provision: host is not running
	W1204 21:16:09.584581   75012 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1204 21:16:09.584594   75012 start.go:729] Will try again in 5 seconds ...
	I1204 21:16:12.241487   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:12.241955   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:12.241989   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:12.241914   76397 retry.go:31] will retry after 627.236105ms: waiting for machine to come up
	I1204 21:16:12.870753   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:12.871242   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:12.871274   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:12.871189   76397 retry.go:31] will retry after 948.655869ms: waiting for machine to come up
	I1204 21:16:13.821128   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:13.821501   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:13.821531   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:13.821464   76397 retry.go:31] will retry after 864.328477ms: waiting for machine to come up
	I1204 21:16:14.686831   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:14.687290   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:14.687327   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:14.687226   76397 retry.go:31] will retry after 1.040036387s: waiting for machine to come up
	I1204 21:16:15.729503   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:15.729908   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:15.729938   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:15.729856   76397 retry.go:31] will retry after 1.509456429s: waiting for machine to come up
	I1204 21:16:14.587018   75012 start.go:360] acquireMachinesLock for no-preload-534766: {Name:mkf124e8b45170ae95981b24944344de6899c5b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 21:16:17.240459   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:17.240912   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:17.240936   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:17.240859   76397 retry.go:31] will retry after 2.13583357s: waiting for machine to come up
	I1204 21:16:19.379267   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:19.379766   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:19.379792   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:19.379718   76397 retry.go:31] will retry after 2.09795045s: waiting for machine to come up
	I1204 21:16:21.478897   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:21.479356   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:21.479410   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:21.479302   76397 retry.go:31] will retry after 2.903986335s: waiting for machine to come up
	I1204 21:16:24.386386   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:24.386732   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:24.386760   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:24.386707   76397 retry.go:31] will retry after 2.772485684s: waiting for machine to come up
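
The retry.go lines above show libmachine polling libvirt for the domain's DHCP lease with growing delays until an IP address appears. A minimal standalone sketch of that wait-with-backoff pattern follows; it is not minikube's actual retry package, and the lookup function and delay growth are illustrative only.

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP polls lookup until it returns an address or the timeout elapses.
// lookup stands in for "ask libvirt for the domain's current DHCP lease".
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		if delay < 3*time.Second {
			delay = delay * 3 / 2 // grow the delay, roughly like the intervals in the log
		}
	}
	return "", errors.New("timed out waiting for an IP address")
}

func main() {
	start := time.Now()
	ip, err := waitForIP(func() (string, error) {
		if time.Since(start) > 2*time.Second { // pretend the lease shows up after ~2s
			return "192.168.39.82", nil
		}
		return "", errors.New("no lease yet")
	}, 30*time.Second)
	fmt.Println(ip, err)
}
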
	I1204 21:16:28.395920   75464 start.go:364] duration metric: took 4m6.982305139s to acquireMachinesLock for "old-k8s-version-082859"
	I1204 21:16:28.395992   75464 start.go:96] Skipping create...Using existing machine configuration
	I1204 21:16:28.396003   75464 fix.go:54] fixHost starting: 
	I1204 21:16:28.396456   75464 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:28.396521   75464 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:28.413833   75464 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32779
	I1204 21:16:28.414263   75464 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:28.414753   75464 main.go:141] libmachine: Using API Version  1
	I1204 21:16:28.414777   75464 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:28.415165   75464 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:28.415427   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:28.415603   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetState
	I1204 21:16:28.417090   75464 fix.go:112] recreateIfNeeded on old-k8s-version-082859: state=Stopped err=<nil>
	I1204 21:16:28.417125   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	W1204 21:16:28.417326   75464 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 21:16:28.419402   75464 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-082859" ...
	I1204 21:16:27.162685   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.163095   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has current primary IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.163114   75137 main.go:141] libmachine: (embed-certs-566991) Found IP for machine: 192.168.39.82
	I1204 21:16:27.163126   75137 main.go:141] libmachine: (embed-certs-566991) Reserving static IP address...
	I1204 21:16:27.163613   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "embed-certs-566991", mac: "52:54:00:98:21:6f", ip: "192.168.39.82"} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.163640   75137 main.go:141] libmachine: (embed-certs-566991) Reserved static IP address: 192.168.39.82
	I1204 21:16:27.163652   75137 main.go:141] libmachine: (embed-certs-566991) DBG | skip adding static IP to network mk-embed-certs-566991 - found existing host DHCP lease matching {name: "embed-certs-566991", mac: "52:54:00:98:21:6f", ip: "192.168.39.82"}
	I1204 21:16:27.163663   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Getting to WaitForSSH function...
	I1204 21:16:27.163670   75137 main.go:141] libmachine: (embed-certs-566991) Waiting for SSH to be available...
	I1204 21:16:27.165700   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.166004   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.166040   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.166149   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Using SSH client type: external
	I1204 21:16:27.166173   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa (-rw-------)
	I1204 21:16:27.166209   75137 main.go:141] libmachine: (embed-certs-566991) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.82 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 21:16:27.166223   75137 main.go:141] libmachine: (embed-certs-566991) DBG | About to run SSH command:
	I1204 21:16:27.166232   75137 main.go:141] libmachine: (embed-certs-566991) DBG | exit 0
	I1204 21:16:27.287234   75137 main.go:141] libmachine: (embed-certs-566991) DBG | SSH cmd err, output: <nil>: 
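
WaitForSSH above simply runs "exit 0" over ssh with non-interactive options until the command succeeds. A rough equivalent using os/exec is sketched below; the key path, user and address are the ones from this log, but the helper itself is illustrative and not the docker-machine code.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady returns true once "exit 0" succeeds over ssh, i.e. the guest's
// sshd is accepting connections with the machine's private key.
func sshReady(keyPath, user, addr string) bool {
	cmd := exec.Command("ssh",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		fmt.Sprintf("%s@%s", user, addr),
		"exit", "0")
	return cmd.Run() == nil
}

func main() {
	key := "/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa"
	for i := 0; i < 30; i++ {
		if sshReady(key, "docker", "192.168.39.82") {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
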
	I1204 21:16:27.287599   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetConfigRaw
	I1204 21:16:27.288265   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetIP
	I1204 21:16:27.290959   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.291282   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.291308   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.291606   75137 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/config.json ...
	I1204 21:16:27.291794   75137 machine.go:93] provisionDockerMachine start ...
	I1204 21:16:27.291812   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:27.292046   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:27.294179   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.294494   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.294520   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.294637   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:27.294811   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.294971   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.295101   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:27.295267   75137 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:27.295461   75137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1204 21:16:27.295472   75137 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 21:16:27.395404   75137 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 21:16:27.395434   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetMachineName
	I1204 21:16:27.395738   75137 buildroot.go:166] provisioning hostname "embed-certs-566991"
	I1204 21:16:27.395764   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetMachineName
	I1204 21:16:27.395940   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:27.398637   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.398982   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.399008   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.399159   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:27.399332   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.399565   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.399702   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:27.399913   75137 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:27.400087   75137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1204 21:16:27.400099   75137 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-566991 && echo "embed-certs-566991" | sudo tee /etc/hostname
	I1204 21:16:27.513921   75137 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-566991
	
	I1204 21:16:27.513960   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:27.516595   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.516932   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.516955   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.517112   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:27.517313   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.517440   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.517554   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:27.517671   75137 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:27.517883   75137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1204 21:16:27.517900   75137 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-566991' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-566991/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-566991' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 21:16:27.627795   75137 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 21:16:27.627832   75137 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 21:16:27.627852   75137 buildroot.go:174] setting up certificates
	I1204 21:16:27.627861   75137 provision.go:84] configureAuth start
	I1204 21:16:27.627870   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetMachineName
	I1204 21:16:27.628196   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetIP
	I1204 21:16:27.630873   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.631211   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.631236   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.631447   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:27.633608   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.633935   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.633954   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.634104   75137 provision.go:143] copyHostCerts
	I1204 21:16:27.634160   75137 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 21:16:27.634171   75137 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 21:16:27.634238   75137 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 21:16:27.634328   75137 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 21:16:27.634337   75137 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 21:16:27.634359   75137 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 21:16:27.634416   75137 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 21:16:27.634427   75137 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 21:16:27.634457   75137 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 21:16:27.634525   75137 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.embed-certs-566991 san=[127.0.0.1 192.168.39.82 embed-certs-566991 localhost minikube]
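
provision.go:117 above regenerates the machine's server certificate with the SAN list [127.0.0.1 192.168.39.82 embed-certs-566991 localhost minikube]. A self-contained crypto/x509 sketch that produces a certificate carrying those SANs is below; it is self-signed for brevity, whereas the real flow signs with the ca.pem/ca-key.pem listed above.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs and org taken from the provision.go:117 line above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-566991"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"embed-certs-566991", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.82")},
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Self-signed here for brevity; minikube signs with its CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
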
	I1204 21:16:27.824445   75137 provision.go:177] copyRemoteCerts
	I1204 21:16:27.824535   75137 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 21:16:27.824576   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:27.827387   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.827703   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.827738   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.827937   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:27.828104   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.828282   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:27.828386   75137 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:16:27.908710   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 21:16:27.930611   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1204 21:16:27.951287   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 21:16:27.971650   75137 provision.go:87] duration metric: took 343.766934ms to configureAuth
	I1204 21:16:27.971684   75137 buildroot.go:189] setting minikube options for container-runtime
	I1204 21:16:27.971861   75137 config.go:182] Loaded profile config "embed-certs-566991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:16:27.971984   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:27.974579   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.974924   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.974964   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.975127   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:27.975316   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.975486   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.975617   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:27.975771   75137 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:27.975962   75137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1204 21:16:27.975985   75137 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 21:16:28.177596   75137 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 21:16:28.177627   75137 machine.go:96] duration metric: took 885.820166ms to provisionDockerMachine
	I1204 21:16:28.177643   75137 start.go:293] postStartSetup for "embed-certs-566991" (driver="kvm2")
	I1204 21:16:28.177657   75137 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 21:16:28.177681   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:28.177998   75137 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 21:16:28.178026   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:28.180461   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.180777   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:28.180809   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.180936   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:28.181122   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:28.181292   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:28.181430   75137 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:16:28.260618   75137 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 21:16:28.264349   75137 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 21:16:28.264371   75137 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 21:16:28.264448   75137 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 21:16:28.264543   75137 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 21:16:28.264657   75137 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 21:16:28.272916   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:16:28.294517   75137 start.go:296] duration metric: took 116.858398ms for postStartSetup
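
The filesync step of postStartSetup scans .minikube/addons and .minikube/files and mirrors anything found onto the guest, keeping the path under the scanned root (here .minikube/files/etc/ssl/certs/177432.pem becomes /etc/ssl/certs/177432.pem). A small sketch of that local-to-guest path mapping follows; the root directory is the one from this log, the helper name is made up, and the actual copy would go over SSH as in the scp lines above.

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
	"strings"
)

// localAssets walks root and returns local-path -> guest-path pairs, where the
// guest path is whatever sits under root
// ("<root>/etc/ssl/certs/x.pem" -> "/etc/ssl/certs/x.pem").
func localAssets(root string) (map[string]string, error) {
	assets := map[string]string{}
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel := strings.TrimPrefix(path, root)
		assets[path] = filepath.ToSlash(rel) // e.g. "/etc/ssl/certs/177432.pem"
		return nil
	})
	return assets, err
}

func main() {
	m, err := localAssets("/home/jenkins/minikube-integration/19985-10581/.minikube/files")
	if err != nil {
		fmt.Println("walk failed:", err)
		return
	}
	for local, remote := range m {
		fmt.Printf("%s --> %s\n", local, remote)
	}
}
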
	I1204 21:16:28.294564   75137 fix.go:56] duration metric: took 18.709913535s for fixHost
	I1204 21:16:28.294589   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:28.297320   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.297628   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:28.297661   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.297869   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:28.298067   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:28.298219   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:28.298346   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:28.298544   75137 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:28.298705   75137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1204 21:16:28.298714   75137 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 21:16:28.395722   75137 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733346988.368807705
	
	I1204 21:16:28.395745   75137 fix.go:216] guest clock: 1733346988.368807705
	I1204 21:16:28.395755   75137 fix.go:229] Guest: 2024-12-04 21:16:28.368807705 +0000 UTC Remote: 2024-12-04 21:16:28.294570064 +0000 UTC m=+286.315482748 (delta=74.237641ms)
	I1204 21:16:28.395781   75137 fix.go:200] guest clock delta is within tolerance: 74.237641ms
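
fix.go above reads the guest clock over SSH (date +%s.%N), compares it to the host's reading, and accepts the host when the delta is within tolerance. The check is just a subtraction and an absolute-value comparison; the sketch below reuses the two timestamps from this log and assumes a 2s tolerance, which is an illustrative value rather than the threshold fix.go actually uses.

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK compares guest and host timestamps and reports whether the
// difference is within tolerance, mirroring the Guest/Remote/delta line above.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Timestamps taken from the log: guest 1733346988.368807705 vs host .294570064.
	guest := time.Unix(1733346988, 368807705)
	host := time.Unix(1733346988, 294570064)
	delta, ok := clockDeltaOK(guest, host, 2*time.Second) // 2s tolerance is an assumption
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok) // prints delta=74.237641ms
}
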
	I1204 21:16:28.395788   75137 start.go:83] releasing machines lock for "embed-certs-566991", held for 18.811169167s
	I1204 21:16:28.395828   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:28.396146   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetIP
	I1204 21:16:28.398895   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.399273   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:28.399315   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.399472   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:28.399971   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:28.400138   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:28.400232   75137 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 21:16:28.400282   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:28.400303   75137 ssh_runner.go:195] Run: cat /version.json
	I1204 21:16:28.400325   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:28.402965   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.402990   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.403405   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:28.403434   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.403460   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:28.403475   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.403571   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:28.403643   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:28.403782   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:28.403872   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:28.403938   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:28.404022   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:28.404173   75137 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:16:28.404187   75137 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:16:28.498689   75137 ssh_runner.go:195] Run: systemctl --version
	I1204 21:16:28.503855   75137 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 21:16:28.639322   75137 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 21:16:28.645881   75137 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 21:16:28.645979   75137 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 21:16:28.662196   75137 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 21:16:28.662224   75137 start.go:495] detecting cgroup driver to use...
	I1204 21:16:28.662299   75137 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 21:16:28.679458   75137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 21:16:28.693004   75137 docker.go:217] disabling cri-docker service (if available) ...
	I1204 21:16:28.693078   75137 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 21:16:28.706303   75137 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 21:16:28.719763   75137 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 21:16:28.831131   75137 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 21:16:28.980878   75137 docker.go:233] disabling docker service ...
	I1204 21:16:28.980952   75137 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 21:16:28.995057   75137 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 21:16:29.007885   75137 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 21:16:29.140636   75137 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 21:16:29.281876   75137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
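
Because this profile runs CRI-O, the provisioner stops, disables and masks cri-docker and docker before touching crio. A compact sketch of that systemctl sequence driven from Go is below; the unit names come from the log, the helper is illustrative, and it assumes it runs on the guest with sudo available.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same order as the log: stop the sockets/services, then disable and mask them.
	steps := [][]string{
		{"systemctl", "stop", "-f", "cri-docker.socket"},
		{"systemctl", "stop", "-f", "cri-docker.service"},
		{"systemctl", "disable", "cri-docker.socket"},
		{"systemctl", "mask", "cri-docker.service"},
		{"systemctl", "stop", "-f", "docker.socket"},
		{"systemctl", "stop", "-f", "docker.service"},
		{"systemctl", "disable", "docker.socket"},
		{"systemctl", "mask", "docker.service"},
	}
	for _, args := range steps {
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			// Missing units are expected on a CRI-O image, so just report and continue.
			fmt.Printf("%v: %v (%s)\n", args, err, out)
		}
	}
}
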
	I1204 21:16:29.297602   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 21:16:29.314375   75137 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 21:16:29.314444   75137 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:29.324326   75137 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 21:16:29.324381   75137 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:29.333895   75137 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:29.343269   75137 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:29.352608   75137 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 21:16:29.363227   75137 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:29.372736   75137 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:29.389585   75137 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
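
The sed invocations above edit /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image to registry.k8s.io/pause:3.10, switch cgroup_manager to cgroupfs, set conmon_cgroup to "pod", and seed default_sysctls with net.ipv4.ip_unprivileged_port_start=0. The first two substitutions, expressed in Go over an illustrative stock fragment (the sample input is assumed, not read from the VM), look like this:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Illustrative stock fragment of /etc/crio/crio.conf.d/02-crio.conf; the
	// real file on the guest may differ.
	conf := `[crio.runtime]
cgroup_manager = "systemd"

[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
`
	// Mirrors the first two sed expressions from the log above.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// The remaining commands in the log append conmon_cgroup = "pod" and a
	// default_sysctls list containing "net.ipv4.ip_unprivileged_port_start=0".
	fmt.Print(conf)
}
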
	I1204 21:16:29.399137   75137 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 21:16:29.407800   75137 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 21:16:29.407859   75137 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 21:16:29.421492   75137 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
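
crio.go:166 above treats a failing "sysctl net.bridge.bridge-nf-call-iptables" as a hint that the br_netfilter module is not loaded yet, loads it, and then enables IPv4 forwarding. A sketch of that probe-then-modprobe fallback, again via os/exec and assuming root on the guest:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Probe the bridge netfilter sysctl; if it is missing, load br_netfilter.
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("bridge-nf-call-iptables not available, loading br_netfilter:", err)
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			fmt.Println("modprobe br_netfilter failed:", err)
		}
	}
	// Enable IPv4 forwarding, the equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		fmt.Println("could not enable ip_forward:", err)
	}
}
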
	I1204 21:16:29.431191   75137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:16:29.531043   75137 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 21:16:29.634995   75137 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 21:16:29.635092   75137 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 21:16:29.640185   75137 start.go:563] Will wait 60s for crictl version
	I1204 21:16:29.640249   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:16:29.644117   75137 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 21:16:29.683424   75137 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 21:16:29.683505   75137 ssh_runner.go:195] Run: crio --version
	I1204 21:16:29.709015   75137 ssh_runner.go:195] Run: crio --version
	I1204 21:16:29.737931   75137 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 21:16:28.420626   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .Start
	I1204 21:16:28.420792   75464 main.go:141] libmachine: (old-k8s-version-082859) Ensuring networks are active...
	I1204 21:16:28.421532   75464 main.go:141] libmachine: (old-k8s-version-082859) Ensuring network default is active
	I1204 21:16:28.421902   75464 main.go:141] libmachine: (old-k8s-version-082859) Ensuring network mk-old-k8s-version-082859 is active
	I1204 21:16:28.422289   75464 main.go:141] libmachine: (old-k8s-version-082859) Getting domain xml...
	I1204 21:16:28.422943   75464 main.go:141] libmachine: (old-k8s-version-082859) Creating domain...
	I1204 21:16:29.678419   75464 main.go:141] libmachine: (old-k8s-version-082859) Waiting to get IP...
	I1204 21:16:29.679445   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:29.679839   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:29.679884   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:29.679807   76539 retry.go:31] will retry after 289.179197ms: waiting for machine to come up
	I1204 21:16:29.971185   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:29.971736   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:29.971767   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:29.971681   76539 retry.go:31] will retry after 303.202104ms: waiting for machine to come up
	I1204 21:16:30.277151   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:30.277652   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:30.277681   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:30.277613   76539 retry.go:31] will retry after 410.628355ms: waiting for machine to come up
	I1204 21:16:30.690254   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:30.690792   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:30.690822   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:30.690750   76539 retry.go:31] will retry after 505.05844ms: waiting for machine to come up
	I1204 21:16:31.197454   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:31.197914   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:31.197943   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:31.197868   76539 retry.go:31] will retry after 592.512014ms: waiting for machine to come up
	I1204 21:16:29.739276   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetIP
	I1204 21:16:29.742209   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:29.742581   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:29.742611   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:29.742817   75137 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1204 21:16:29.746557   75137 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
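
The /bin/bash one-liner above is how the tool keeps exactly one host.minikube.internal entry in /etc/hosts: filter out any existing tab-separated line for that name, append a fresh one, and copy the result back into place. A helper that builds the same command string for an arbitrary name/IP pair is sketched below; the function name is made up, and the quoting mirrors the log line.

package main

import "fmt"

// hostsEntryCmd returns a bash command that replaces any existing
// "<tab><name>" line in /etc/hosts with "<ip><tab><name>", as in the log above.
func hostsEntryCmd(ip, name string) string {
	// "\\t" keeps a literal \t for bash's $'...' grep pattern; "\t" embeds a
	// real tab between the IP and the hostname in the echoed line.
	return fmt.Sprintf(
		"{ grep -v $'\\t%s$' \"/etc/hosts\"; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"",
		name, ip, name)
}

func main() {
	fmt.Println(hostsEntryCmd("192.168.39.1", "host.minikube.internal"))
}
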
	I1204 21:16:29.757975   75137 kubeadm.go:883] updating cluster {Name:embed-certs-566991 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-566991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.82 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 21:16:29.758110   75137 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 21:16:29.758153   75137 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:16:29.790957   75137 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1204 21:16:29.791029   75137 ssh_runner.go:195] Run: which lz4
	I1204 21:16:29.794873   75137 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1204 21:16:29.798613   75137 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1204 21:16:29.798642   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1204 21:16:31.060492   75137 crio.go:462] duration metric: took 1.265651412s to copy over tarball
	I1204 21:16:31.060599   75137 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1204 21:16:31.791677   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:31.792193   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:31.792218   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:31.792126   76539 retry.go:31] will retry after 898.531247ms: waiting for machine to come up
	I1204 21:16:32.692886   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:32.693288   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:32.693309   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:32.693246   76539 retry.go:31] will retry after 832.069841ms: waiting for machine to come up
	I1204 21:16:33.526732   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:33.527291   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:33.527324   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:33.527254   76539 retry.go:31] will retry after 962.847408ms: waiting for machine to come up
	I1204 21:16:34.491553   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:34.492032   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:34.492062   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:34.491983   76539 retry.go:31] will retry after 1.207785601s: waiting for machine to come up
	I1204 21:16:35.701559   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:35.702070   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:35.702096   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:35.702031   76539 retry.go:31] will retry after 1.685825115s: waiting for machine to come up
	I1204 21:16:33.200389   75137 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.139761453s)
	I1204 21:16:33.200414   75137 crio.go:469] duration metric: took 2.139886465s to extract the tarball
	I1204 21:16:33.200421   75137 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1204 21:16:33.235706   75137 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:16:33.275780   75137 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 21:16:33.275803   75137 cache_images.go:84] Images are preloaded, skipping loading
	I1204 21:16:33.275811   75137 kubeadm.go:934] updating node { 192.168.39.82 8443 v1.31.2 crio true true} ...
	I1204 21:16:33.275916   75137 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-566991 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.82
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-566991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 21:16:33.276001   75137 ssh_runner.go:195] Run: crio config
	I1204 21:16:33.330445   75137 cni.go:84] Creating CNI manager for ""
	I1204 21:16:33.330470   75137 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:16:33.330479   75137 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 21:16:33.330502   75137 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.82 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-566991 NodeName:embed-certs-566991 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.82"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.82 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 21:16:33.330663   75137 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.82
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-566991"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.82"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.82"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1204 21:16:33.330730   75137 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 21:16:33.340505   75137 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 21:16:33.340586   75137 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 21:16:33.349589   75137 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1204 21:16:33.365156   75137 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 21:16:33.380757   75137 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1204 21:16:33.396851   75137 ssh_runner.go:195] Run: grep 192.168.39.82	control-plane.minikube.internal$ /etc/hosts
	I1204 21:16:33.400473   75137 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.82	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:16:33.411670   75137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:16:33.543788   75137 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:16:33.564105   75137 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991 for IP: 192.168.39.82
	I1204 21:16:33.564138   75137 certs.go:194] generating shared ca certs ...
	I1204 21:16:33.564158   75137 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:16:33.564343   75137 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 21:16:33.564425   75137 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 21:16:33.564443   75137 certs.go:256] generating profile certs ...
	I1204 21:16:33.564570   75137 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/client.key
	I1204 21:16:33.564668   75137 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/apiserver.key.ba71006c
	I1204 21:16:33.564724   75137 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/proxy-client.key
	I1204 21:16:33.564892   75137 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 21:16:33.564945   75137 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 21:16:33.564972   75137 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 21:16:33.565019   75137 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 21:16:33.565052   75137 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 21:16:33.565087   75137 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 21:16:33.565145   75137 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:16:33.566045   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 21:16:33.608433   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 21:16:33.635211   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 21:16:33.672472   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 21:16:33.701021   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1204 21:16:33.731665   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 21:16:33.756414   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 21:16:33.778799   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 21:16:33.801308   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 21:16:33.822986   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 21:16:33.844820   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 21:16:33.866558   75137 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 21:16:33.881830   75137 ssh_runner.go:195] Run: openssl version
	I1204 21:16:33.887334   75137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 21:16:33.897261   75137 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:16:33.901411   75137 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:16:33.901479   75137 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:16:33.906997   75137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 21:16:33.916799   75137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 21:16:33.926687   75137 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 21:16:33.930807   75137 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 21:16:33.930859   75137 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 21:16:33.943622   75137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 21:16:33.958682   75137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 21:16:33.972391   75137 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 21:16:33.977777   75137 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 21:16:33.977822   75137 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 21:16:33.984628   75137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
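	(Editorial aside, not part of the test output: the `openssl x509 -hash` / `ln -fs` pairs above build OpenSSL's hash-named CA lookup links. A hedged sketch of the generic form, using the minikubeCA path and the b5213941 hash seen in this run:
	  # OpenSSL resolves CAs via /etc/ssl/certs/<subject_hash>.0
	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
	)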
	I1204 21:16:33.994531   75137 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 21:16:33.998695   75137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 21:16:34.004299   75137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 21:16:34.009688   75137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 21:16:34.015197   75137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 21:16:34.020625   75137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 21:16:34.025987   75137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
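	(Editorial aside, not part of the test output: the `-checkend 86400` runs above ask openssl whether each control-plane certificate is still valid 24 hours from now; a non-zero exit would force certificate regeneration. A minimal sketch of the same check, assuming one of the cert paths from the log:
	  if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	    echo "cert valid for at least another 24h"   # openssl prints "Certificate will not expire"
	  else
	    echo "cert expires within 24h"               # openssl prints "Certificate will expire"
	  fi
	)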
	I1204 21:16:34.031435   75137 kubeadm.go:392] StartCluster: {Name:embed-certs-566991 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-566991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.82 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:16:34.031517   75137 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 21:16:34.031567   75137 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:16:34.067450   75137 cri.go:89] found id: ""
	I1204 21:16:34.067550   75137 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 21:16:34.077454   75137 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1204 21:16:34.077486   75137 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1204 21:16:34.077536   75137 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1204 21:16:34.086795   75137 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1204 21:16:34.087776   75137 kubeconfig.go:125] found "embed-certs-566991" server: "https://192.168.39.82:8443"
	I1204 21:16:34.089769   75137 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1204 21:16:34.098751   75137 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.82
	I1204 21:16:34.098784   75137 kubeadm.go:1160] stopping kube-system containers ...
	I1204 21:16:34.098798   75137 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1204 21:16:34.098853   75137 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:16:34.138445   75137 cri.go:89] found id: ""
	I1204 21:16:34.138523   75137 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1204 21:16:34.155890   75137 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:16:34.165568   75137 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:16:34.165596   75137 kubeadm.go:157] found existing configuration files:
	
	I1204 21:16:34.165647   75137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:16:34.174688   75137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:16:34.174758   75137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:16:34.183835   75137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:16:34.192637   75137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:16:34.192690   75137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:16:34.201663   75137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:16:34.210254   75137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:16:34.210297   75137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:16:34.219235   75137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:16:34.227890   75137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:16:34.227972   75137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:16:34.236954   75137 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:16:34.246061   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:16:34.352189   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:16:35.133652   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:16:35.320296   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:16:35.384361   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
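	(Editorial aside, not part of the test output: after the five init phases above (certs, kubeconfig, kubelet-start, control-plane, etcd), kubeadm should have regenerated the control-plane static pod manifests under the staticPodPath configured earlier. An illustrative check, with the file names expected of a standard kubeadm control plane:
	  ls /etc/kubernetes/manifests
	  # expected: etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
	)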
	I1204 21:16:35.458221   75137 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:16:35.458352   75137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:16:35.959480   75137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:16:36.459120   75137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:16:36.959170   75137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:16:37.458423   75137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:16:37.488815   75137 api_server.go:72] duration metric: took 2.030596307s to wait for apiserver process to appear ...
	I1204 21:16:37.488850   75137 api_server.go:88] waiting for apiserver healthz status ...
	I1204 21:16:37.488875   75137 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I1204 21:16:37.489349   75137 api_server.go:269] stopped: https://192.168.39.82:8443/healthz: Get "https://192.168.39.82:8443/healthz": dial tcp 192.168.39.82:8443: connect: connection refused
	I1204 21:16:37.990012   75137 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I1204 21:16:39.696011   75137 api_server.go:279] https://192.168.39.82:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1204 21:16:39.696060   75137 api_server.go:103] status: https://192.168.39.82:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1204 21:16:39.696077   75137 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I1204 21:16:39.705288   75137 api_server.go:279] https://192.168.39.82:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1204 21:16:39.705322   75137 api_server.go:103] status: https://192.168.39.82:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1204 21:16:39.989707   75137 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I1204 21:16:39.993934   75137 api_server.go:279] https://192.168.39.82:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:16:39.993959   75137 api_server.go:103] status: https://192.168.39.82:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:16:40.489545   75137 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I1204 21:16:40.494002   75137 api_server.go:279] https://192.168.39.82:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:16:40.494033   75137 api_server.go:103] status: https://192.168.39.82:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:16:40.989641   75137 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I1204 21:16:40.998171   75137 api_server.go:279] https://192.168.39.82:8443/healthz returned 200:
	ok
	I1204 21:16:41.006208   75137 api_server.go:141] control plane version: v1.31.2
	I1204 21:16:41.006238   75137 api_server.go:131] duration metric: took 3.517379108s to wait for apiserver health ...
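	(Editorial aside, not part of the test output: the probe loop above keeps hitting /healthz until the apiserver answers 200 "ok"; the earlier anonymous 403s and the 500s with failed rbac/bootstrap-roles and scheduling poststarthooks are typically expected while bootstrap objects are still being created. A hand-rolled equivalent of the wait, hedged and using the endpoint from this run:
	  until [ "$(curl -k -s -o /dev/null -w '%{http_code}' https://192.168.39.82:8443/healthz)" = "200" ]; do
	    sleep 0.5
	  done
	)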
	I1204 21:16:41.006250   75137 cni.go:84] Creating CNI manager for ""
	I1204 21:16:41.006259   75137 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:16:41.008031   75137 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 21:16:37.390104   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:37.390474   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:37.390499   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:37.390433   76539 retry.go:31] will retry after 1.755395869s: waiting for machine to come up
	I1204 21:16:39.148189   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:39.148723   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:39.148754   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:39.148694   76539 retry.go:31] will retry after 2.645343215s: waiting for machine to come up
	I1204 21:16:41.009338   75137 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 21:16:41.026475   75137 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1204 21:16:41.051888   75137 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 21:16:41.064813   75137 system_pods.go:59] 8 kube-system pods found
	I1204 21:16:41.064859   75137 system_pods.go:61] "coredns-7c65d6cfc9-ct5xn" [be113b96-b21f-4fd5-8cd9-11b149a0a838] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1204 21:16:41.064870   75137 system_pods.go:61] "etcd-embed-certs-566991" [23603883-2c42-48ff-95f5-d58f04bab630] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1204 21:16:41.064880   75137 system_pods.go:61] "kube-apiserver-embed-certs-566991" [880279d0-9c57-44b1-b223-cea07fc8552e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1204 21:16:41.064887   75137 system_pods.go:61] "kube-controller-manager-embed-certs-566991" [1512be05-cbf1-48ca-a0a5-db1e320040e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1204 21:16:41.064893   75137 system_pods.go:61] "kube-proxy-4fv72" [22b84591-6767-4414-9869-9d89206a03f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1204 21:16:41.064898   75137 system_pods.go:61] "kube-scheduler-embed-certs-566991" [1eca2a77-0f2a-4d94-992e-22acf8f54649] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1204 21:16:41.064910   75137 system_pods.go:61] "metrics-server-6867b74b74-9vlcd" [1acb08f3-e403-458d-b3e2-e32c07da6afb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:16:41.064922   75137 system_pods.go:61] "storage-provisioner" [f8acdb07-16e7-457f-81b8-85416b849890] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1204 21:16:41.064930   75137 system_pods.go:74] duration metric: took 13.019489ms to wait for pod list to return data ...
	I1204 21:16:41.064944   75137 node_conditions.go:102] verifying NodePressure condition ...
	I1204 21:16:41.068574   75137 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 21:16:41.068607   75137 node_conditions.go:123] node cpu capacity is 2
	I1204 21:16:41.068623   75137 node_conditions.go:105] duration metric: took 3.673752ms to run NodePressure ...
	I1204 21:16:41.068644   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:16:41.356054   75137 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1204 21:16:41.359997   75137 kubeadm.go:739] kubelet initialised
	I1204 21:16:41.360018   75137 kubeadm.go:740] duration metric: took 3.942716ms waiting for restarted kubelet to initialise ...
	I1204 21:16:41.360026   75137 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:16:41.365945   75137 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:41.370858   75137 pod_ready.go:98] node "embed-certs-566991" hosting pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.370886   75137 pod_ready.go:82] duration metric: took 4.912525ms for pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace to be "Ready" ...
	E1204 21:16:41.370904   75137 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-566991" hosting pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.370913   75137 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:41.376666   75137 pod_ready.go:98] node "embed-certs-566991" hosting pod "etcd-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.376689   75137 pod_ready.go:82] duration metric: took 5.763328ms for pod "etcd-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	E1204 21:16:41.376698   75137 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-566991" hosting pod "etcd-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.376705   75137 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:41.381261   75137 pod_ready.go:98] node "embed-certs-566991" hosting pod "kube-apiserver-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.381285   75137 pod_ready.go:82] duration metric: took 4.57138ms for pod "kube-apiserver-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	E1204 21:16:41.381296   75137 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-566991" hosting pod "kube-apiserver-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.381305   75137 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:41.455155   75137 pod_ready.go:98] node "embed-certs-566991" hosting pod "kube-controller-manager-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.455195   75137 pod_ready.go:82] duration metric: took 73.873767ms for pod "kube-controller-manager-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	E1204 21:16:41.455208   75137 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-566991" hosting pod "kube-controller-manager-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.455217   75137 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4fv72" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:41.854723   75137 pod_ready.go:98] node "embed-certs-566991" hosting pod "kube-proxy-4fv72" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.854759   75137 pod_ready.go:82] duration metric: took 399.531662ms for pod "kube-proxy-4fv72" in "kube-system" namespace to be "Ready" ...
	E1204 21:16:41.854773   75137 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-566991" hosting pod "kube-proxy-4fv72" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.854782   75137 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:42.255217   75137 pod_ready.go:98] node "embed-certs-566991" hosting pod "kube-scheduler-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:42.255242   75137 pod_ready.go:82] duration metric: took 400.451937ms for pod "kube-scheduler-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	E1204 21:16:42.255254   75137 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-566991" hosting pod "kube-scheduler-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:42.255263   75137 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:42.655193   75137 pod_ready.go:98] node "embed-certs-566991" hosting pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:42.655222   75137 pod_ready.go:82] duration metric: took 399.948182ms for pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace to be "Ready" ...
	E1204 21:16:42.655234   75137 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-566991" hosting pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:42.655244   75137 pod_ready.go:39] duration metric: took 1.295209634s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:16:42.655263   75137 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 21:16:42.666489   75137 ops.go:34] apiserver oom_adj: -16
	I1204 21:16:42.666504   75137 kubeadm.go:597] duration metric: took 8.589012522s to restartPrimaryControlPlane
	I1204 21:16:42.666512   75137 kubeadm.go:394] duration metric: took 8.635083145s to StartCluster
	I1204 21:16:42.666526   75137 settings.go:142] acquiring lock: {Name:mk51df5708ef0b8fe125ead566b8d3e857234e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:16:42.666587   75137 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:16:42.668175   75137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/kubeconfig: {Name:mk338cb7deb77a607d0c199d94a556bdfd19bef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:16:42.668388   75137 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.82 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 21:16:42.668451   75137 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 21:16:42.668548   75137 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-566991"
	I1204 21:16:42.668569   75137 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-566991"
	W1204 21:16:42.668576   75137 addons.go:243] addon storage-provisioner should already be in state true
	I1204 21:16:42.668605   75137 host.go:66] Checking if "embed-certs-566991" exists ...
	I1204 21:16:42.668611   75137 addons.go:69] Setting default-storageclass=true in profile "embed-certs-566991"
	I1204 21:16:42.668628   75137 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-566991"
	I1204 21:16:42.668661   75137 config.go:182] Loaded profile config "embed-certs-566991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:16:42.668675   75137 addons.go:69] Setting metrics-server=true in profile "embed-certs-566991"
	I1204 21:16:42.668719   75137 addons.go:234] Setting addon metrics-server=true in "embed-certs-566991"
	W1204 21:16:42.668738   75137 addons.go:243] addon metrics-server should already be in state true
	I1204 21:16:42.668796   75137 host.go:66] Checking if "embed-certs-566991" exists ...
	I1204 21:16:42.669037   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:42.669094   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:42.669037   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:42.669158   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:42.669169   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:42.669210   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:42.671592   75137 out.go:177] * Verifying Kubernetes components...
	I1204 21:16:42.673134   75137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:16:42.684920   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43467
	I1204 21:16:42.684939   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35079
	I1204 21:16:42.685084   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46109
	I1204 21:16:42.685298   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:42.685386   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:42.685791   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:42.685810   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:42.685905   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:42.685926   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:42.686119   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:42.686297   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:42.686401   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetState
	I1204 21:16:42.686833   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:42.686880   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:42.687004   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:42.687527   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:42.687545   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:42.687890   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:42.688475   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:42.688522   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:42.689348   75137 addons.go:234] Setting addon default-storageclass=true in "embed-certs-566991"
	W1204 21:16:42.689365   75137 addons.go:243] addon default-storageclass should already be in state true
	I1204 21:16:42.689385   75137 host.go:66] Checking if "embed-certs-566991" exists ...
	I1204 21:16:42.689647   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:42.689682   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:42.702175   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33089
	I1204 21:16:42.702672   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:42.703170   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:42.703188   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:42.703226   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38195
	I1204 21:16:42.703537   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:42.703674   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:42.703716   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetState
	I1204 21:16:42.704271   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:42.704295   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:42.704612   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:42.705178   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:42.705218   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:42.705552   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:42.707473   75137 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1204 21:16:42.707479   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33249
	I1204 21:16:42.707808   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:42.708177   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:42.708192   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:42.708551   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:42.708692   75137 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1204 21:16:42.708703   75137 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1204 21:16:42.708713   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetState
	I1204 21:16:42.708714   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:42.710474   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:42.711964   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:42.712040   75137 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:16:42.712386   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:42.712409   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:42.712558   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:42.712726   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:42.712867   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:42.713010   75137 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:16:42.713257   75137 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:16:42.713268   75137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 21:16:42.713279   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:42.715855   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:42.716296   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:42.716325   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:42.716472   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:42.716632   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:42.716744   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:42.716860   75137 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:16:42.727365   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40443
	I1204 21:16:42.727830   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:42.728302   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:42.728330   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:42.728651   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:42.728838   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetState
	I1204 21:16:42.730408   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:42.730603   75137 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 21:16:42.730617   75137 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 21:16:42.730630   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:42.733179   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:42.733523   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:42.733550   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:42.733695   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:42.733846   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:42.733991   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:42.734105   75137 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:16:42.871601   75137 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:16:42.889651   75137 node_ready.go:35] waiting up to 6m0s for node "embed-certs-566991" to be "Ready" ...
	I1204 21:16:43.016150   75137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:16:43.017983   75137 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1204 21:16:43.018006   75137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1204 21:16:43.048666   75137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 21:16:43.061060   75137 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1204 21:16:43.061089   75137 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1204 21:16:43.105294   75137 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 21:16:43.105320   75137 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1204 21:16:43.175330   75137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 21:16:44.324823   75137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.276121269s)
	I1204 21:16:44.324881   75137 main.go:141] libmachine: Making call to close driver server
	I1204 21:16:44.324889   75137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.308706273s)
	I1204 21:16:44.324893   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:16:44.324908   75137 main.go:141] libmachine: Making call to close driver server
	I1204 21:16:44.324922   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:16:44.325213   75137 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:16:44.325264   75137 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:16:44.325289   75137 main.go:141] libmachine: Making call to close driver server
	I1204 21:16:44.325272   75137 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:16:44.325297   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:16:44.325304   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:16:44.325302   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:16:44.325381   75137 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:16:44.325409   75137 main.go:141] libmachine: Making call to close driver server
	I1204 21:16:44.325417   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:16:44.325539   75137 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:16:44.325552   75137 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:16:44.325574   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:16:44.325751   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:16:44.325792   75137 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:16:44.325813   75137 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:16:44.331866   75137 main.go:141] libmachine: Making call to close driver server
	I1204 21:16:44.331881   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:16:44.332102   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:16:44.332139   75137 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:16:44.332149   75137 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:16:44.398251   75137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.222883924s)
	I1204 21:16:44.398300   75137 main.go:141] libmachine: Making call to close driver server
	I1204 21:16:44.398312   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:16:44.398563   75137 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:16:44.398583   75137 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:16:44.398590   75137 main.go:141] libmachine: Making call to close driver server
	I1204 21:16:44.398597   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:16:44.398606   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:16:44.398855   75137 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:16:44.398878   75137 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:16:44.398888   75137 addons.go:475] Verifying addon metrics-server=true in "embed-certs-566991"
	I1204 21:16:44.398889   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:16:44.400887   75137 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
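The four metrics-server manifests above are applied in one kubectl invocation, run over SSH with the in-VM kubeconfig and the version-pinned kubectl binary. Below is a small Go sketch of how such a command line can be assembled; addonApplyCommand is a hypothetical helper, and the paths and version string are simply copied from the log line itself.

package main

import (
	"fmt"
	"strings"
)

// addonApplyCommand builds the single "kubectl apply" invocation seen in the
// log above: one -f flag per addon manifest, run with the in-VM kubeconfig.
// (Hypothetical helper for illustration; not minikube's actual code.)
func addonApplyCommand(kubectlVersion string, manifests []string) string {
	var b strings.Builder
	b.WriteString("sudo KUBECONFIG=/var/lib/minikube/kubeconfig ")
	b.WriteString("/var/lib/minikube/binaries/" + kubectlVersion + "/kubectl apply")
	for _, m := range manifests {
		b.WriteString(" -f " + m)
	}
	return b.String()
}

func main() {
	fmt.Println(addonApplyCommand("v1.31.2", []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}))
}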
	I1204 21:16:41.796452   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:41.796909   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:41.796943   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:41.796881   76539 retry.go:31] will retry after 2.938505727s: waiting for machine to come up
	I1204 21:16:44.737247   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:44.737772   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:44.737796   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:44.737726   76539 retry.go:31] will retry after 5.554286056s: waiting for machine to come up
	I1204 21:16:44.402265   75137 addons.go:510] duration metric: took 1.733822331s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1204 21:16:44.894002   75137 node_ready.go:53] node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:50.293115   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.293594   75464 main.go:141] libmachine: (old-k8s-version-082859) Found IP for machine: 192.168.72.180
	I1204 21:16:50.293638   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has current primary IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.293651   75464 main.go:141] libmachine: (old-k8s-version-082859) Reserving static IP address...
	I1204 21:16:50.294066   75464 main.go:141] libmachine: (old-k8s-version-082859) Reserved static IP address: 192.168.72.180
	I1204 21:16:50.294102   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "old-k8s-version-082859", mac: "52:54:00:30:6e:ae", ip: "192.168.72.180"} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.294118   75464 main.go:141] libmachine: (old-k8s-version-082859) Waiting for SSH to be available...
	I1204 21:16:50.294148   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | skip adding static IP to network mk-old-k8s-version-082859 - found existing host DHCP lease matching {name: "old-k8s-version-082859", mac: "52:54:00:30:6e:ae", ip: "192.168.72.180"}
	I1204 21:16:50.294164   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | Getting to WaitForSSH function...
	I1204 21:16:50.296406   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.296738   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.296767   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.296893   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | Using SSH client type: external
	I1204 21:16:50.296917   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa (-rw-------)
	I1204 21:16:50.296949   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.180 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 21:16:50.296966   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | About to run SSH command:
	I1204 21:16:50.296978   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | exit 0
	I1204 21:16:50.419468   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | SSH cmd err, output: <nil>: 
	I1204 21:16:50.419834   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetConfigRaw
	I1204 21:16:50.420486   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetIP
	I1204 21:16:50.422797   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.423098   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.423123   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.423319   75464 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/config.json ...
	I1204 21:16:50.423555   75464 machine.go:93] provisionDockerMachine start ...
	I1204 21:16:50.423579   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:50.423793   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:50.426050   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.426372   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.426402   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.426520   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:50.426706   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.426886   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.427011   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:50.427208   75464 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:50.427439   75464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1204 21:16:50.427453   75464 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 21:16:50.527818   75464 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 21:16:50.527853   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetMachineName
	I1204 21:16:50.528150   75464 buildroot.go:166] provisioning hostname "old-k8s-version-082859"
	I1204 21:16:50.528188   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetMachineName
	I1204 21:16:50.528423   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:50.531470   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.531920   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.531949   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.532195   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:50.532400   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.532575   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.532733   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:50.532911   75464 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:50.533125   75464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1204 21:16:50.533138   75464 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-082859 && echo "old-k8s-version-082859" | sudo tee /etc/hostname
	I1204 21:16:50.653111   75464 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-082859
	
	I1204 21:16:50.653146   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:50.656340   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.656681   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.656715   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.656946   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:50.657161   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.657338   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.657493   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:50.657649   75464 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:50.657859   75464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1204 21:16:50.657879   75464 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-082859' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-082859/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-082859' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 21:16:50.772193   75464 main.go:141] libmachine: SSH cmd err, output: <nil>: 
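The SSH command above wraps a small shell script: if no /etc/hosts entry already ends in the new hostname, it either rewrites an existing 127.0.1.1 line or appends one. A minimal Go sketch of the same decision, applied to an in-memory copy of the hosts file (ensureHostsEntry is a hypothetical helper, not minikube's actual implementation):

package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry returns hosts content that maps 127.0.1.1 to hostname,
// mirroring the grep/sed/tee logic in the provisioning script above.
// (Illustrative sketch only.)
func ensureHostsEntry(hosts, hostname string) string {
	lines := strings.Split(hosts, "\n")
	// Pass 1: nothing to do if some entry already resolves the hostname.
	for _, l := range lines {
		f := strings.Fields(l)
		if len(f) >= 2 && f[len(f)-1] == hostname {
			return hosts
		}
	}
	// Pass 2: rewrite an existing 127.0.1.1 alias, otherwise append one.
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			return strings.Join(lines, "\n")
		}
	}
	return hosts + "\n127.0.1.1 " + hostname
}

func main() {
	fmt.Println(ensureHostsEntry("127.0.0.1 localhost\n127.0.1.1 minikube", "old-k8s-version-082859"))
}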
	I1204 21:16:50.772236   75464 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 21:16:50.772265   75464 buildroot.go:174] setting up certificates
	I1204 21:16:50.772282   75464 provision.go:84] configureAuth start
	I1204 21:16:50.772299   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetMachineName
	I1204 21:16:50.772611   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetIP
	I1204 21:16:50.775486   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.775889   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.775917   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.776053   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:50.778293   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.778611   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.778640   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.778859   75464 provision.go:143] copyHostCerts
	I1204 21:16:50.778920   75464 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 21:16:50.778934   75464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 21:16:50.778991   75464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 21:16:50.779093   75464 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 21:16:50.779106   75464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 21:16:50.779134   75464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 21:16:50.779279   75464 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 21:16:50.779291   75464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 21:16:50.779317   75464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 21:16:50.779411   75464 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-082859 san=[127.0.0.1 192.168.72.180 localhost minikube old-k8s-version-082859]
	I1204 21:16:50.991857   75464 provision.go:177] copyRemoteCerts
	I1204 21:16:50.991917   75464 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 21:16:50.991939   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:50.994612   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.994999   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.995028   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.995178   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:50.995427   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.995587   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:50.995731   75464 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa Username:docker}
	I1204 21:16:51.074162   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 21:16:51.097649   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1204 21:16:51.120589   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1204 21:16:51.143303   75464 provision.go:87] duration metric: took 371.008346ms to configureAuth
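configureAuth regenerates the server certificate with the SAN list shown a few lines up (loopback, the machine IP, localhost, minikube, and the profile name) and then copies ca.pem, server.pem and server-key.pem into /etc/docker. The sketch below builds a certificate with the same SANs using the Go standard library; unlike minikube it self-signs instead of signing with the profile CA, and the 26280h lifetime is just the CertExpiration value from the cluster config, so treat it as an illustration only.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

// newServerCert creates a self-signed server certificate whose SANs match the
// san=[...] list in the configureAuth log above. Minikube signs with its CA
// key instead of self-signing; this variant only illustrates the SAN handling.
func newServerCert(cn string, ips []net.IP, dnsNames []string) ([]byte, error) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: cn},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
		DNSNames:     dnsNames,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
}

func main() {
	pemBytes, err := newServerCert("old-k8s-version-082859",
		[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.180")},
		[]string{"localhost", "minikube", "old-k8s-version-082859"})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d PEM bytes\n", len(pemBytes))
}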
	I1204 21:16:51.143324   75464 buildroot.go:189] setting minikube options for container-runtime
	I1204 21:16:51.143500   75464 config.go:182] Loaded profile config "old-k8s-version-082859": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1204 21:16:51.143561   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:51.146357   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.146676   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:51.146715   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.146867   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:51.147061   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.147275   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.147480   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:51.147672   75464 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:51.147851   75464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1204 21:16:51.147872   75464 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 21:16:51.587574   75746 start.go:364] duration metric: took 3m48.834641003s to acquireMachinesLock for "default-k8s-diff-port-439360"
	I1204 21:16:51.587653   75746 start.go:96] Skipping create...Using existing machine configuration
	I1204 21:16:51.587665   75746 fix.go:54] fixHost starting: 
	I1204 21:16:51.588066   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:51.588117   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:51.604628   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41655
	I1204 21:16:51.605057   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:51.605553   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:16:51.605580   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:51.605940   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:51.606149   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:16:51.606327   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetState
	I1204 21:16:51.608008   75746 fix.go:112] recreateIfNeeded on default-k8s-diff-port-439360: state=Stopped err=<nil>
	I1204 21:16:51.608043   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	W1204 21:16:51.608211   75746 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 21:16:51.609867   75746 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-439360" ...
	I1204 21:16:47.393499   75137 node_ready.go:53] node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:49.893470   75137 node_ready.go:53] node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:50.393615   75137 node_ready.go:49] node "embed-certs-566991" has status "Ready":"True"
	I1204 21:16:50.393638   75137 node_ready.go:38] duration metric: took 7.503954553s for node "embed-certs-566991" to be "Ready" ...
	I1204 21:16:50.393648   75137 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:16:50.398881   75137 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:51.611005   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Start
	I1204 21:16:51.611185   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Ensuring networks are active...
	I1204 21:16:51.612110   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Ensuring network default is active
	I1204 21:16:51.612529   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Ensuring network mk-default-k8s-diff-port-439360 is active
	I1204 21:16:51.612978   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Getting domain xml...
	I1204 21:16:51.613795   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Creating domain...
	I1204 21:16:51.367959   75464 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 21:16:51.367992   75464 machine.go:96] duration metric: took 944.422035ms to provisionDockerMachine
	I1204 21:16:51.368004   75464 start.go:293] postStartSetup for "old-k8s-version-082859" (driver="kvm2")
	I1204 21:16:51.368014   75464 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 21:16:51.368030   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:51.368382   75464 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 21:16:51.368431   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:51.371253   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.371631   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:51.371667   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.371831   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:51.372033   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.372201   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:51.372338   75464 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa Username:docker}
	I1204 21:16:51.449712   75464 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 21:16:51.453668   75464 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 21:16:51.453694   75464 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 21:16:51.453771   75464 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 21:16:51.453867   75464 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 21:16:51.453995   75464 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 21:16:51.463766   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:16:51.486114   75464 start.go:296] duration metric: took 118.097017ms for postStartSetup
	I1204 21:16:51.486162   75464 fix.go:56] duration metric: took 23.090160362s for fixHost
	I1204 21:16:51.486190   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:51.488901   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.489286   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:51.489317   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.489450   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:51.489662   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.489835   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.489975   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:51.490137   75464 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:51.490373   75464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1204 21:16:51.490386   75464 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 21:16:51.587355   75464 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733347011.543416414
	
	I1204 21:16:51.587402   75464 fix.go:216] guest clock: 1733347011.543416414
	I1204 21:16:51.587413   75464 fix.go:229] Guest: 2024-12-04 21:16:51.543416414 +0000 UTC Remote: 2024-12-04 21:16:51.486170924 +0000 UTC m=+270.217910239 (delta=57.24549ms)
	I1204 21:16:51.587442   75464 fix.go:200] guest clock delta is within tolerance: 57.24549ms
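fix.go reads the guest clock with date +%s.%N, parses the seconds.nanoseconds string, and compares it against the host-side timestamp; if the delta stays inside a tolerance the guest clock is left alone. A rough Go sketch of that comparison follows; parseGuestClock and the 2-second tolerance are assumptions for illustration, and the real threshold may differ.

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "1733347011.543416414" (date +%s.%N output, always
// nine fractional digits) into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1733347011.543416414")
	if err != nil {
		panic(err)
	}
	remote := time.Date(2024, 12, 4, 21, 16, 51, 486170924, time.UTC)
	delta := guest.Sub(remote)
	const tolerance = 2 * time.Second // assumed threshold, for illustration only
	fmt.Printf("delta=%v within tolerance: %v\n", delta, math.Abs(delta.Seconds()) < tolerance.Seconds())
}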
	I1204 21:16:51.587450   75464 start.go:83] releasing machines lock for "old-k8s-version-082859", held for 23.191479372s
	I1204 21:16:51.587484   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:51.587753   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetIP
	I1204 21:16:51.590521   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.590901   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:51.590933   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.591076   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:51.591556   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:51.591757   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:51.591857   75464 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 21:16:51.591897   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:51.592007   75464 ssh_runner.go:195] Run: cat /version.json
	I1204 21:16:51.592024   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:51.594840   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.595093   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.595267   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:51.595303   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.595349   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:51.595425   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.595529   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:51.595614   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:51.595714   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.595851   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:51.595872   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.596038   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:51.596091   75464 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa Username:docker}
	I1204 21:16:51.596192   75464 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa Username:docker}
	I1204 21:16:51.695215   75464 ssh_runner.go:195] Run: systemctl --version
	I1204 21:16:51.700624   75464 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 21:16:51.849457   75464 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 21:16:51.856420   75464 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 21:16:51.856506   75464 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 21:16:51.876202   75464 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 21:16:51.876230   75464 start.go:495] detecting cgroup driver to use...
	I1204 21:16:51.876311   75464 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 21:16:51.894549   75464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 21:16:51.911154   75464 docker.go:217] disabling cri-docker service (if available) ...
	I1204 21:16:51.911218   75464 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 21:16:51.924220   75464 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 21:16:51.936675   75464 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 21:16:52.058517   75464 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 21:16:52.224124   75464 docker.go:233] disabling docker service ...
	I1204 21:16:52.224202   75464 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 21:16:52.239294   75464 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 21:16:52.253779   75464 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 21:16:52.384577   75464 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 21:16:52.515024   75464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 21:16:52.529456   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 21:16:52.551978   75464 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1204 21:16:52.552043   75464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:52.563083   75464 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 21:16:52.563165   75464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:52.573409   75464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:52.583614   75464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:52.594313   75464 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 21:16:52.604389   75464 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 21:16:52.613326   75464 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 21:16:52.613402   75464 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 21:16:52.627764   75464 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 21:16:52.637330   75464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:16:52.755111   75464 ssh_runner.go:195] Run: sudo systemctl restart crio
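The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed: it pins the pause image, switches cgroup_manager to cgroupfs, re-adds conmon_cgroup = "pod", then reloads systemd and restarts cri-o. A small Go sketch that assembles equivalent commands (crioConfCommands is a hypothetical helper, shown only to make the sequence explicit; the real logic lives in minikube's crio code):

package main

import "fmt"

// crioConfCommands returns shell commands that point cri-o at a pause image
// and cgroup driver, mirroring the sed calls in the log above.
// (Hypothetical helper for illustration.)
func crioConfCommands(pauseImage, cgroupDriver string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupDriver, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
}

func main() {
	for _, c := range crioConfCommands("registry.k8s.io/pause:3.2", "cgroupfs") {
		fmt.Println(c)
	}
}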
	I1204 21:16:52.844027   75464 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 21:16:52.844093   75464 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 21:16:52.848602   75464 start.go:563] Will wait 60s for crictl version
	I1204 21:16:52.848676   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:52.852127   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 21:16:52.892934   75464 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 21:16:52.893076   75464 ssh_runner.go:195] Run: crio --version
	I1204 21:16:52.925376   75464 ssh_runner.go:195] Run: crio --version
	I1204 21:16:52.954480   75464 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1204 21:16:52.955897   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetIP
	I1204 21:16:52.958964   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:52.959353   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:52.959404   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:52.959641   75464 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1204 21:16:52.963601   75464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:16:52.975417   75464 kubeadm.go:883] updating cluster {Name:old-k8s-version-082859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-082859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 21:16:52.975578   75464 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1204 21:16:52.975644   75464 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:16:53.022050   75464 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1204 21:16:53.022128   75464 ssh_runner.go:195] Run: which lz4
	I1204 21:16:53.025986   75464 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1204 21:16:53.029928   75464 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1204 21:16:53.029962   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1204 21:16:54.579699   75464 crio.go:462] duration metric: took 1.553735037s to copy over tarball
	I1204 21:16:54.579783   75464 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
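The preload path is taken because "sudo crictl images --output json" did not list registry.k8s.io/kube-apiserver:v1.20.0, so the ~473 MB preloaded tarball is copied in and unpacked with lz4 and tar. Below is a sketch of the detection side, assuming the crictl JSON shape with an images array carrying repoTags (field names may vary across crictl versions; hasPreloadedImage is a hypothetical helper):

package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// crictlImages models the slice of `crictl images --output json` output that
// matters for the preload check.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasPreloadedImage reports whether the runtime already knows the given tag,
// which is how the log above decides whether to fall back to the tarball.
// (Sketch under assumptions; minikube's real check also compares image hashes.)
func hasPreloadedImage(crictlJSON, wantTag string) bool {
	var out crictlImages
	if err := json.Unmarshal([]byte(crictlJSON), &out); err != nil {
		return false
	}
	for _, img := range out.Images {
		for _, tag := range img.RepoTags {
			if strings.EqualFold(tag, wantTag) {
				return true
			}
		}
	}
	return false
}

func main() {
	sample := `{"images":[{"repoTags":["registry.k8s.io/pause:3.2"]}]}`
	// false here means: copy and extract the preloaded tarball instead.
	fmt.Println(hasPreloadedImage(sample, "registry.k8s.io/kube-apiserver:v1.20.0"))
}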
	I1204 21:16:52.406305   75137 pod_ready.go:103] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"False"
	I1204 21:16:54.905969   75137 pod_ready.go:103] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"False"
	I1204 21:16:56.907170   75137 pod_ready.go:103] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"False"
	I1204 21:16:52.907033   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting to get IP...
	I1204 21:16:52.908195   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:52.908629   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:52.908717   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:52.908619   76731 retry.go:31] will retry after 296.289488ms: waiting for machine to come up
	I1204 21:16:53.207388   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:53.207971   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:53.208003   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:53.207935   76731 retry.go:31] will retry after 336.470328ms: waiting for machine to come up
	I1204 21:16:53.546821   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:53.547399   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:53.547439   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:53.547320   76731 retry.go:31] will retry after 368.42782ms: waiting for machine to come up
	I1204 21:16:53.917796   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:53.918528   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:53.918556   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:53.918431   76731 retry.go:31] will retry after 436.479409ms: waiting for machine to come up
	I1204 21:16:54.357126   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:54.357698   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:54.357732   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:54.357643   76731 retry.go:31] will retry after 752.80332ms: waiting for machine to come up
	I1204 21:16:55.112409   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:55.112880   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:55.112907   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:55.112827   76731 retry.go:31] will retry after 649.088241ms: waiting for machine to come up
	I1204 21:16:55.763391   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:55.763912   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:55.763956   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:55.763859   76731 retry.go:31] will retry after 1.037502744s: waiting for machine to come up
	I1204 21:16:56.803681   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:56.804080   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:56.804114   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:56.804035   76731 retry.go:31] will retry after 1.021780396s: waiting for machine to come up
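The repeated "will retry after ..." lines come from a backoff loop that polls libvirt for the domain's DHCP lease until an IP appears, sleeping a growing, jittered interval between attempts. A minimal Go sketch of that pattern (waitForIP, the starting interval, and the 10-second cap are illustrative assumptions, not minikube's actual retry code):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address, sleeping a growing,
// jittered interval between attempts, like the retry.go lines above.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	wait := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		jittered := wait + time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		if wait *= 2; wait > 10*time.Second {
			wait = 10 * time.Second // cap the backoff (assumed limit)
		}
	}
	return "", errors.New("timed out waiting for IP")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 3 {
			return "", errors.New("no lease yet")
		}
		return "192.168.72.180", nil
	}, time.Minute)
	fmt.Println(ip, err)
}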
	I1204 21:16:57.410381   75464 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.830568445s)
	I1204 21:16:57.410444   75464 crio.go:469] duration metric: took 2.830692434s to extract the tarball
	I1204 21:16:57.410455   75464 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1204 21:16:57.452008   75464 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:16:57.484771   75464 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1204 21:16:57.484800   75464 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1204 21:16:57.484880   75464 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:16:57.484917   75464 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:57.484929   75464 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:57.484945   75464 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:57.484995   75464 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1204 21:16:57.484922   75464 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:57.485007   75464 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:57.485039   75464 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1204 21:16:57.486618   75464 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1204 21:16:57.486824   75464 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:57.486847   75464 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:57.486892   75464 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:16:57.486905   75464 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:57.486828   75464 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:57.486944   75464 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:57.486829   75464 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1204 21:16:57.655649   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:57.656853   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:57.667236   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:57.689357   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:57.698439   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1204 21:16:57.726269   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1204 21:16:57.727235   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:57.747271   75464 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1204 21:16:57.747329   75464 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:57.747332   75464 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1204 21:16:57.747364   75464 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:57.747500   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.747402   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.757217   75464 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1204 21:16:57.757260   75464 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:57.757319   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.800711   75464 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1204 21:16:57.800752   75464 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:57.800803   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.814692   75464 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1204 21:16:57.814738   75464 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1204 21:16:57.814789   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.829660   75464 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1204 21:16:57.829698   75464 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:57.829706   75464 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1204 21:16:57.829738   75464 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1204 21:16:57.829752   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.829764   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:57.829773   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.829821   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:57.829877   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:57.829909   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:57.829955   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1204 21:16:57.929510   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:57.929559   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:57.929579   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:57.929618   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1204 21:16:57.940211   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:57.940309   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:57.940359   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1204 21:16:58.051710   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1204 21:16:58.067494   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:58.067504   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:58.067573   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:58.083777   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1204 21:16:58.083833   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:58.083891   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:58.165786   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1204 21:16:58.229739   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1204 21:16:58.229803   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1204 21:16:58.229904   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:58.229951   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1204 21:16:58.230001   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1204 21:16:58.230045   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1204 21:16:58.261333   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1204 21:16:58.271293   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1204 21:16:58.405498   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:16:58.549255   75464 cache_images.go:92] duration metric: took 1.064434163s to LoadCachedImages
	W1204 21:16:58.549354   75464 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I1204 21:16:58.549372   75464 kubeadm.go:934] updating node { 192.168.72.180 8443 v1.20.0 crio true true} ...
	I1204 21:16:58.549512   75464 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-082859 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-082859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 21:16:58.549591   75464 ssh_runner.go:195] Run: crio config
	I1204 21:16:58.610182   75464 cni.go:84] Creating CNI manager for ""
	I1204 21:16:58.610209   75464 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:16:58.610221   75464 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 21:16:58.610246   75464 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.180 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-082859 NodeName:old-k8s-version-082859 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1204 21:16:58.610432   75464 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.180
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-082859"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.180
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.180"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1204 21:16:58.610512   75464 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1204 21:16:58.620337   75464 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 21:16:58.620421   75464 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 21:16:58.629244   75464 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1204 21:16:58.654214   75464 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 21:16:58.671268   75464 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1204 21:16:58.688068   75464 ssh_runner.go:195] Run: grep 192.168.72.180	control-plane.minikube.internal$ /etc/hosts
	I1204 21:16:58.691513   75464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.180	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:16:58.703609   75464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:16:58.831984   75464 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:16:58.850324   75464 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859 for IP: 192.168.72.180
	I1204 21:16:58.850354   75464 certs.go:194] generating shared ca certs ...
	I1204 21:16:58.850382   75464 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:16:58.850592   75464 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 21:16:58.850658   75464 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 21:16:58.850677   75464 certs.go:256] generating profile certs ...
	I1204 21:16:58.850811   75464 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/client.key
	I1204 21:16:58.850892   75464 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/apiserver.key.8d7b2cb2
	I1204 21:16:58.850958   75464 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/proxy-client.key
	I1204 21:16:58.851169   75464 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 21:16:58.851232   75464 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 21:16:58.851249   75464 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 21:16:58.851294   75464 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 21:16:58.851343   75464 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 21:16:58.851420   75464 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 21:16:58.851508   75464 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:16:58.852607   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 21:16:58.880792   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 21:16:58.913556   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 21:16:58.943549   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 21:16:58.981463   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1204 21:16:59.012983   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 21:16:59.042980   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 21:16:59.077664   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 21:16:59.105764   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 21:16:59.129236   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 21:16:59.153845   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 21:16:59.177201   75464 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 21:16:59.193861   75464 ssh_runner.go:195] Run: openssl version
	I1204 21:16:59.199898   75464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 21:16:59.211323   75464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:16:59.215867   75464 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:16:59.215922   75464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:16:59.221792   75464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 21:16:59.232621   75464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 21:16:59.243171   75464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 21:16:59.247786   75464 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 21:16:59.247847   75464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 21:16:59.253293   75464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 21:16:59.264011   75464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 21:16:59.274696   75464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 21:16:59.279083   75464 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 21:16:59.279142   75464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 21:16:59.284885   75464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 21:16:59.295857   75464 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 21:16:59.300285   75464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 21:16:59.306222   75464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 21:16:59.312113   75464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 21:16:59.318289   75464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 21:16:59.323933   75464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 21:16:59.329593   75464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1204 21:16:59.336271   75464 kubeadm.go:392] StartCluster: {Name:old-k8s-version-082859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-082859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:16:59.336388   75464 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 21:16:59.336445   75464 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:16:59.377102   75464 cri.go:89] found id: ""
	I1204 21:16:59.377186   75464 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 21:16:59.387322   75464 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1204 21:16:59.387348   75464 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1204 21:16:59.387426   75464 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1204 21:16:59.397012   75464 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1204 21:16:59.398490   75464 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-082859" does not appear in /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:16:59.399594   75464 kubeconfig.go:62] /home/jenkins/minikube-integration/19985-10581/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-082859" cluster setting kubeconfig missing "old-k8s-version-082859" context setting]
	I1204 21:16:59.401105   75464 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/kubeconfig: {Name:mk338cb7deb77a607d0c199d94a556bdfd19bef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:16:59.519931   75464 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1204 21:16:59.529805   75464 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.180
	I1204 21:16:59.529848   75464 kubeadm.go:1160] stopping kube-system containers ...
	I1204 21:16:59.529862   75464 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1204 21:16:59.529917   75464 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:16:59.564385   75464 cri.go:89] found id: ""
	I1204 21:16:59.564455   75464 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1204 21:16:59.580273   75464 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:16:59.590510   75464 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:16:59.590536   75464 kubeadm.go:157] found existing configuration files:
	
	I1204 21:16:59.590591   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:16:59.599597   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:16:59.599665   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:16:59.609075   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:16:59.618209   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:16:59.618281   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:16:59.627558   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:16:59.636062   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:16:59.636117   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:16:59.645337   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:16:59.653985   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:16:59.654027   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:16:59.662796   75464 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:16:59.671564   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:16:59.805252   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:00.525460   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:00.762769   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:00.873276   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:00.988761   75464 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:17:00.988887   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:16:58.405630   75137 pod_ready.go:93] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"True"
	I1204 21:16:58.405654   75137 pod_ready.go:82] duration metric: took 8.006745651s for pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:58.405669   75137 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:58.411605   75137 pod_ready.go:93] pod "etcd-embed-certs-566991" in "kube-system" namespace has status "Ready":"True"
	I1204 21:16:58.411634   75137 pod_ready.go:82] duration metric: took 5.952577ms for pod "etcd-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:58.411646   75137 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:58.421660   75137 pod_ready.go:93] pod "kube-apiserver-embed-certs-566991" in "kube-system" namespace has status "Ready":"True"
	I1204 21:16:58.421691   75137 pod_ready.go:82] duration metric: took 10.035417ms for pod "kube-apiserver-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:58.421708   75137 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:59.044823   75137 pod_ready.go:93] pod "kube-controller-manager-embed-certs-566991" in "kube-system" namespace has status "Ready":"True"
	I1204 21:16:59.044853   75137 pod_ready.go:82] duration metric: took 623.135154ms for pod "kube-controller-manager-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:59.044867   75137 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4fv72" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:59.051742   75137 pod_ready.go:93] pod "kube-proxy-4fv72" in "kube-system" namespace has status "Ready":"True"
	I1204 21:16:59.051768   75137 pod_ready.go:82] duration metric: took 6.892711ms for pod "kube-proxy-4fv72" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:59.051782   75137 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:59.058398   75137 pod_ready.go:93] pod "kube-scheduler-embed-certs-566991" in "kube-system" namespace has status "Ready":"True"
	I1204 21:16:59.058429   75137 pod_ready.go:82] duration metric: took 6.638291ms for pod "kube-scheduler-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:59.058444   75137 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:01.066575   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:16:57.826965   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:57.827542   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:57.827566   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:57.827491   76731 retry.go:31] will retry after 1.453756282s: waiting for machine to come up
	I1204 21:16:59.282497   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:59.283001   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:59.283025   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:59.282950   76731 retry.go:31] will retry after 1.921010852s: waiting for machine to come up
	I1204 21:17:01.205877   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:01.206359   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:17:01.206398   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:17:01.206301   76731 retry.go:31] will retry after 2.279555962s: waiting for machine to come up
	I1204 21:17:01.489204   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:01.989039   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:02.489053   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:02.988923   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:03.489839   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:03.989130   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:04.489603   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:04.989625   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:05.489951   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:05.989787   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:03.066938   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:05.565106   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:03.488557   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:03.488993   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:17:03.489064   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:17:03.488956   76731 retry.go:31] will retry after 2.80928606s: waiting for machine to come up
	I1204 21:17:06.300625   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:06.301069   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:17:06.301096   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:17:06.301025   76731 retry.go:31] will retry after 4.272897585s: waiting for machine to come up
	I1204 21:17:06.489826   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:06.989767   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:07.489954   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:07.989772   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:08.488905   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:08.989834   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:09.489780   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:09.989021   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:10.489348   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:10.989123   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:08.065690   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:10.566216   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:12.055921   75012 start.go:364] duration metric: took 57.468802465s to acquireMachinesLock for "no-preload-534766"
	I1204 21:17:12.055984   75012 start.go:96] Skipping create...Using existing machine configuration
	I1204 21:17:12.055996   75012 fix.go:54] fixHost starting: 
	I1204 21:17:12.056471   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:17:12.056520   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:17:12.074414   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46455
	I1204 21:17:12.074839   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:17:12.075295   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:17:12.075318   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:17:12.075670   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:17:12.075864   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:17:12.076055   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetState
	I1204 21:17:12.077496   75012 fix.go:112] recreateIfNeeded on no-preload-534766: state=Stopped err=<nil>
	I1204 21:17:12.077518   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	W1204 21:17:12.077683   75012 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 21:17:12.079503   75012 out.go:177] * Restarting existing kvm2 VM for "no-preload-534766" ...
	I1204 21:17:10.578907   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.579430   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Found IP for machine: 192.168.50.171
	I1204 21:17:10.579465   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Reserving static IP address...
	I1204 21:17:10.579482   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has current primary IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.579876   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-439360", mac: "52:54:00:ec:46:31", ip: "192.168.50.171"} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:10.579899   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | skip adding static IP to network mk-default-k8s-diff-port-439360 - found existing host DHCP lease matching {name: "default-k8s-diff-port-439360", mac: "52:54:00:ec:46:31", ip: "192.168.50.171"}
	I1204 21:17:10.579913   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Reserved static IP address: 192.168.50.171
	I1204 21:17:10.579923   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for SSH to be available...
	I1204 21:17:10.579933   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Getting to WaitForSSH function...
	I1204 21:17:10.582141   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.582536   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:10.582564   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.582763   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Using SSH client type: external
	I1204 21:17:10.582808   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa (-rw-------)
	I1204 21:17:10.582840   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.171 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 21:17:10.582851   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | About to run SSH command:
	I1204 21:17:10.582859   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | exit 0
	I1204 21:17:10.707352   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | SSH cmd err, output: <nil>: 
	I1204 21:17:10.707801   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetConfigRaw
	I1204 21:17:10.708495   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetIP
	I1204 21:17:10.710799   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.711127   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:10.711159   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.711348   75746 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/config.json ...
	I1204 21:17:10.711562   75746 machine.go:93] provisionDockerMachine start ...
	I1204 21:17:10.711579   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:17:10.711817   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:10.713971   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.714317   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:10.714344   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.714495   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:10.714683   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:10.714811   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:10.714964   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:10.715109   75746 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:10.715298   75746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.171 22 <nil> <nil>}
	I1204 21:17:10.715311   75746 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 21:17:10.823410   75746 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 21:17:10.823443   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetMachineName
	I1204 21:17:10.823718   75746 buildroot.go:166] provisioning hostname "default-k8s-diff-port-439360"
	I1204 21:17:10.823741   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetMachineName
	I1204 21:17:10.823955   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:10.826607   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.826953   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:10.826977   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.827140   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:10.827331   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:10.827533   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:10.827676   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:10.827852   75746 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:10.828068   75746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.171 22 <nil> <nil>}
	I1204 21:17:10.828084   75746 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-439360 && echo "default-k8s-diff-port-439360" | sudo tee /etc/hostname
	I1204 21:17:10.948599   75746 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-439360
	
	I1204 21:17:10.948633   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:10.951336   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.951719   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:10.951765   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.951905   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:10.952108   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:10.952276   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:10.952423   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:10.952570   75746 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:10.952753   75746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.171 22 <nil> <nil>}
	I1204 21:17:10.952777   75746 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-439360' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-439360/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-439360' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 21:17:11.072543   75746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 21:17:11.072580   75746 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 21:17:11.072611   75746 buildroot.go:174] setting up certificates
	I1204 21:17:11.072620   75746 provision.go:84] configureAuth start
	I1204 21:17:11.072629   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetMachineName
	I1204 21:17:11.072933   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetIP
	I1204 21:17:11.075443   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.075822   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:11.075868   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.075965   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:11.077957   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.078286   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:11.078319   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.078449   75746 provision.go:143] copyHostCerts
	I1204 21:17:11.078506   75746 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 21:17:11.078517   75746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 21:17:11.078571   75746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 21:17:11.078671   75746 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 21:17:11.078681   75746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 21:17:11.078702   75746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 21:17:11.078752   75746 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 21:17:11.078759   75746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 21:17:11.078776   75746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 21:17:11.078819   75746 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-439360 san=[127.0.0.1 192.168.50.171 default-k8s-diff-port-439360 localhost minikube]
	I1204 21:17:11.404256   75746 provision.go:177] copyRemoteCerts
	I1204 21:17:11.404320   75746 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 21:17:11.404348   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:11.406963   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.407316   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:11.407343   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.407542   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:11.407706   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:11.407881   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:11.407991   75746 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa Username:docker}
	I1204 21:17:11.493691   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 21:17:11.519867   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1204 21:17:11.542295   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1204 21:17:11.564775   75746 provision.go:87] duration metric: took 492.141737ms to configureAuth
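
Note: the provisioning step above generates a server certificate with SANs [127.0.0.1 192.168.50.171 default-k8s-diff-port-439360 localhost minikube] and copies it to /etc/docker/server.pem on the guest. A minimal way to confirm those SANs landed in the provisioned certificate, assuming SSH access to the node (a sketch, not part of this run):

	openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 "Subject Alternative Name"
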
	I1204 21:17:11.564801   75746 buildroot.go:189] setting minikube options for container-runtime
	I1204 21:17:11.564975   75746 config.go:182] Loaded profile config "default-k8s-diff-port-439360": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:17:11.565063   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:11.567990   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.568364   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:11.568394   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.568556   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:11.568780   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:11.568951   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:11.569102   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:11.569277   75746 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:11.569476   75746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.171 22 <nil> <nil>}
	I1204 21:17:11.569494   75746 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 21:17:11.809413   75746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 21:17:11.809462   75746 machine.go:96] duration metric: took 1.097886094s to provisionDockerMachine
	I1204 21:17:11.809482   75746 start.go:293] postStartSetup for "default-k8s-diff-port-439360" (driver="kvm2")
	I1204 21:17:11.809493   75746 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 21:17:11.809510   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:17:11.809913   75746 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 21:17:11.809954   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:11.812724   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.813137   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:11.813183   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.813276   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:11.813481   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:11.813659   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:11.813807   75746 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa Username:docker}
	I1204 21:17:11.901984   75746 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 21:17:11.906206   75746 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 21:17:11.906243   75746 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 21:17:11.906323   75746 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 21:17:11.906421   75746 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 21:17:11.906550   75746 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 21:17:11.915692   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:17:11.938378   75746 start.go:296] duration metric: took 128.880842ms for postStartSetup
	I1204 21:17:11.938425   75746 fix.go:56] duration metric: took 20.350760099s for fixHost
	I1204 21:17:11.938449   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:11.941283   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.941662   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:11.941683   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.941814   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:11.942015   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:11.942207   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:11.942314   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:11.942446   75746 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:11.942630   75746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.171 22 <nil> <nil>}
	I1204 21:17:11.942643   75746 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 21:17:12.055721   75746 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733347032.018698016
	
	I1204 21:17:12.055741   75746 fix.go:216] guest clock: 1733347032.018698016
	I1204 21:17:12.055761   75746 fix.go:229] Guest: 2024-12-04 21:17:12.018698016 +0000 UTC Remote: 2024-12-04 21:17:11.938429419 +0000 UTC m=+249.319395751 (delta=80.268597ms)
	I1204 21:17:12.055787   75746 fix.go:200] guest clock delta is within tolerance: 80.268597ms
	I1204 21:17:12.055794   75746 start.go:83] releasing machines lock for "default-k8s-diff-port-439360", held for 20.468177017s
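
Note: the guest clock check above is plain subtraction of the host-side timestamp from the guest timestamp; the reported 80.268597ms delta can be reproduced from the two values in the log (a sketch):

	# guest 1733347032.018698016 minus remote 1733347031.938429419, in seconds
	echo "1733347032.018698016 - 1733347031.938429419" | bc
	# -> .080268597
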
	I1204 21:17:12.055827   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:17:12.056125   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetIP
	I1204 21:17:12.058787   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:12.059284   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:12.059312   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:12.059488   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:17:12.060013   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:17:12.060202   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:17:12.060290   75746 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 21:17:12.060342   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:12.060462   75746 ssh_runner.go:195] Run: cat /version.json
	I1204 21:17:12.060489   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:12.063286   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:12.063423   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:12.063682   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:12.063746   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:12.063837   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:12.063938   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:12.064005   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:12.064065   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:12.064231   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:12.064305   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:12.064403   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:12.064563   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:12.064588   75746 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa Username:docker}
	I1204 21:17:12.064695   75746 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa Username:docker}
	I1204 21:17:12.144087   75746 ssh_runner.go:195] Run: systemctl --version
	I1204 21:17:12.168976   75746 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 21:17:12.317913   75746 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 21:17:12.324234   75746 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 21:17:12.324327   75746 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 21:17:12.344571   75746 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 21:17:12.344601   75746 start.go:495] detecting cgroup driver to use...
	I1204 21:17:12.344674   75746 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 21:17:12.361232   75746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 21:17:12.375069   75746 docker.go:217] disabling cri-docker service (if available) ...
	I1204 21:17:12.375139   75746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 21:17:12.388561   75746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 21:17:12.404338   75746 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 21:17:12.527885   75746 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 21:17:12.716924   75746 docker.go:233] disabling docker service ...
	I1204 21:17:12.717011   75746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 21:17:12.735556   75746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 21:17:12.751951   75746 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 21:17:12.872456   75746 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 21:17:12.997321   75746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 21:17:13.012576   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 21:17:13.032524   75746 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 21:17:13.032590   75746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:13.042551   75746 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 21:17:13.042612   75746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:13.052819   75746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:13.063234   75746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:13.074023   75746 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 21:17:13.084457   75746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:13.094614   75746 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:13.112649   75746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:13.122898   75746 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 21:17:13.132312   75746 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 21:17:13.132357   75746 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 21:17:13.145174   75746 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
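
Note: the two commands above load the br_netfilter module (after the sysctl probe exited with status 255) and enable IPv4 forwarding. A quick follow-up check on the guest could look like this (a sketch, not taken from this run):

	lsmod | grep br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
	# ip_forward should report 1; the bridge sysctl becomes readable once the module is loaded
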
	I1204 21:17:13.154748   75746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:17:13.280272   75746 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 21:17:13.375481   75746 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 21:17:13.375579   75746 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 21:17:13.380388   75746 start.go:563] Will wait 60s for crictl version
	I1204 21:17:13.380450   75746 ssh_runner.go:195] Run: which crictl
	I1204 21:17:13.384263   75746 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 21:17:13.426552   75746 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 21:17:13.426644   75746 ssh_runner.go:195] Run: crio --version
	I1204 21:17:13.464906   75746 ssh_runner.go:195] Run: crio --version
	I1204 21:17:13.493254   75746 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
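
Note: taken together, the sed edits at 21:17:13 leave /etc/crio/crio.conf.d/02-crio.conf pointing at the registry.k8s.io/pause:3.10 pause image, the cgroupfs cgroup manager, a "pod" conmon cgroup, and the unprivileged-port sysctl before crio is restarted. One way to confirm the rewritten keys on the node (a sketch reconstructed from the commands above, not a dump of the actual file):

	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# expected, per the sed edits:
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#     "net.ipv4.ip_unprivileged_port_start=0",
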
	I1204 21:17:11.488961   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:11.989692   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:12.489695   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:12.989533   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:13.489139   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:13.989580   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:14.488981   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:14.989089   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:15.489662   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:15.989301   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:13.069008   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:15.565897   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:12.080766   75012 main.go:141] libmachine: (no-preload-534766) Calling .Start
	I1204 21:17:12.080951   75012 main.go:141] libmachine: (no-preload-534766) Ensuring networks are active...
	I1204 21:17:12.081751   75012 main.go:141] libmachine: (no-preload-534766) Ensuring network default is active
	I1204 21:17:12.082112   75012 main.go:141] libmachine: (no-preload-534766) Ensuring network mk-no-preload-534766 is active
	I1204 21:17:12.082532   75012 main.go:141] libmachine: (no-preload-534766) Getting domain xml...
	I1204 21:17:12.083134   75012 main.go:141] libmachine: (no-preload-534766) Creating domain...
	I1204 21:17:13.416717   75012 main.go:141] libmachine: (no-preload-534766) Waiting to get IP...
	I1204 21:17:13.417831   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:13.418295   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:13.418381   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:13.418275   76934 retry.go:31] will retry after 213.310094ms: waiting for machine to come up
	I1204 21:17:13.632755   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:13.633250   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:13.633283   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:13.633181   76934 retry.go:31] will retry after 325.003683ms: waiting for machine to come up
	I1204 21:17:13.959863   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:13.960467   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:13.960503   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:13.960377   76934 retry.go:31] will retry after 392.851447ms: waiting for machine to come up
	I1204 21:17:14.355246   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:14.355720   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:14.355748   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:14.355681   76934 retry.go:31] will retry after 378.518603ms: waiting for machine to come up
	I1204 21:17:14.736283   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:14.737039   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:14.737105   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:14.737017   76934 retry.go:31] will retry after 536.132786ms: waiting for machine to come up
	I1204 21:17:15.274405   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:15.274929   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:15.274962   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:15.274891   76934 retry.go:31] will retry after 606.890197ms: waiting for machine to come up
	I1204 21:17:15.884088   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:15.884700   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:15.884745   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:15.884632   76934 retry.go:31] will retry after 1.088992333s: waiting for machine to come up
	I1204 21:17:16.975049   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:16.975514   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:16.975545   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:16.975458   76934 retry.go:31] will retry after 925.830658ms: waiting for machine to come up
	I1204 21:17:13.494527   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetIP
	I1204 21:17:13.498111   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:13.498524   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:13.498560   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:13.498792   75746 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1204 21:17:13.503083   75746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:17:13.518900   75746 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-439360 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-439360 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.171 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 21:17:13.519043   75746 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 21:17:13.519134   75746 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:17:13.562529   75746 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1204 21:17:13.562643   75746 ssh_runner.go:195] Run: which lz4
	I1204 21:17:13.566970   75746 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1204 21:17:13.571398   75746 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1204 21:17:13.571447   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1204 21:17:14.863136   75746 crio.go:462] duration metric: took 1.296192361s to copy over tarball
	I1204 21:17:14.863225   75746 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1204 21:17:17.017949   75746 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.154693143s)
	I1204 21:17:17.017978   75746 crio.go:469] duration metric: took 2.154810491s to extract the tarball
	I1204 21:17:17.017988   75746 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1204 21:17:17.053935   75746 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:17:17.099773   75746 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 21:17:17.099800   75746 cache_images.go:84] Images are preloaded, skipping loading
	I1204 21:17:17.099809   75746 kubeadm.go:934] updating node { 192.168.50.171 8444 v1.31.2 crio true true} ...
	I1204 21:17:17.099909   75746 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-439360 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.171
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-439360 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 21:17:17.099973   75746 ssh_runner.go:195] Run: crio config
	I1204 21:17:17.145449   75746 cni.go:84] Creating CNI manager for ""
	I1204 21:17:17.145481   75746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:17:17.145493   75746 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 21:17:17.145525   75746 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.171 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-439360 NodeName:default-k8s-diff-port-439360 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.171"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.171 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 21:17:17.145689   75746 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.171
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-439360"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.171"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.171"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1204 21:17:17.145761   75746 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 21:17:17.156960   75746 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 21:17:17.157034   75746 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 21:17:17.169101   75746 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1204 21:17:17.186548   75746 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 21:17:17.203582   75746 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
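
Note: the kubeadm/kubelet/kube-proxy configuration printed above is written to /var/tmp/minikube/kubeadm.yaml.new (2308 bytes) and only copied over /var/tmp/minikube/kubeadm.yaml later in the restart path. To inspect what was rendered on the node one could run (a sketch; both paths come from this log):

	sudo cat /var/tmp/minikube/kubeadm.yaml.new
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
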
	I1204 21:17:17.220406   75746 ssh_runner.go:195] Run: grep 192.168.50.171	control-plane.minikube.internal$ /etc/hosts
	I1204 21:17:17.224281   75746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.171	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:17:17.237759   75746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:17:17.368925   75746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:17:17.389017   75746 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360 for IP: 192.168.50.171
	I1204 21:17:17.389042   75746 certs.go:194] generating shared ca certs ...
	I1204 21:17:17.389062   75746 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:17:17.389231   75746 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 21:17:17.389302   75746 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 21:17:17.389314   75746 certs.go:256] generating profile certs ...
	I1204 21:17:17.389411   75746 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/client.key
	I1204 21:17:17.389507   75746 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/apiserver.key.b9e485ac
	I1204 21:17:17.389583   75746 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/proxy-client.key
	I1204 21:17:17.389747   75746 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 21:17:17.389784   75746 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 21:17:17.389793   75746 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 21:17:17.389820   75746 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 21:17:17.389842   75746 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 21:17:17.389862   75746 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 21:17:17.389899   75746 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:17:17.390549   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 21:17:17.427087   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 21:17:17.456331   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 21:17:17.481876   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 21:17:17.511173   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1204 21:17:17.535825   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1204 21:17:17.559475   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 21:17:17.585825   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 21:17:17.611495   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 21:17:17.634425   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 21:17:16.489912   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:16.989712   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:17.489508   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:17.989874   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:18.489589   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:18.989133   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:19.489001   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:19.989088   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:20.489170   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:20.989135   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:17.566756   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:20.064248   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:17.903583   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:17.904083   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:17.904130   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:17.904041   76934 retry.go:31] will retry after 1.281115457s: waiting for machine to come up
	I1204 21:17:19.187069   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:19.187625   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:19.187648   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:19.187594   76934 retry.go:31] will retry after 2.116897616s: waiting for machine to come up
	I1204 21:17:21.307136   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:21.307702   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:21.307738   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:21.307639   76934 retry.go:31] will retry after 1.769079667s: waiting for machine to come up
	I1204 21:17:17.658253   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 21:17:17.680554   75746 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 21:17:17.696563   75746 ssh_runner.go:195] Run: openssl version
	I1204 21:17:17.701997   75746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 21:17:17.711909   75746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 21:17:17.716111   75746 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 21:17:17.716163   75746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 21:17:17.721829   75746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 21:17:17.732808   75746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 21:17:17.742766   75746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:17:17.746881   75746 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:17:17.746939   75746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:17:17.752221   75746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 21:17:17.761915   75746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 21:17:17.771473   75746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 21:17:17.775476   75746 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 21:17:17.775527   75746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 21:17:17.780671   75746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
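
Note: the link names used above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-hash names: each CA file under /usr/share/ca-certificates is symlinked into /etc/ssl/certs as <subject_hash>.0 so OpenSSL can locate it, and the hash is exactly what the `openssl x509 -hash -noout` calls in the log print. For example (a sketch):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# -> b5213941, matching the /etc/ssl/certs/b5213941.0 link created above
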
	I1204 21:17:17.790179   75746 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 21:17:17.794246   75746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 21:17:17.799753   75746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 21:17:17.805228   75746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 21:17:17.810634   75746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 21:17:17.815912   75746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 21:17:17.821125   75746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
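
Note: each `openssl x509 ... -checkend 86400` run above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; the command exits 0 if so and non-zero otherwise. A way to make the result visible (a sketch):

	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for at least another 24h" \
	  || echo "expires within 24h"
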
	I1204 21:17:17.826717   75746 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-439360 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-439360 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.171 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:17:17.826802   75746 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 21:17:17.826852   75746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:17:17.863070   75746 cri.go:89] found id: ""
	I1204 21:17:17.863157   75746 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 21:17:17.872649   75746 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1204 21:17:17.872668   75746 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1204 21:17:17.872706   75746 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1204 21:17:17.881981   75746 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1204 21:17:17.883029   75746 kubeconfig.go:125] found "default-k8s-diff-port-439360" server: "https://192.168.50.171:8444"
	I1204 21:17:17.885369   75746 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1204 21:17:17.894730   75746 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.171
	I1204 21:17:17.894765   75746 kubeadm.go:1160] stopping kube-system containers ...
	I1204 21:17:17.894780   75746 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1204 21:17:17.894845   75746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:17:17.942493   75746 cri.go:89] found id: ""
	I1204 21:17:17.942588   75746 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1204 21:17:17.959606   75746 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:17:17.968768   75746 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:17:17.968793   75746 kubeadm.go:157] found existing configuration files:
	
	I1204 21:17:17.968850   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1204 21:17:17.977375   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:17:17.977437   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:17:17.986188   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1204 21:17:17.995409   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:17:17.995464   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:17:18.004396   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1204 21:17:18.012964   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:17:18.013033   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:17:18.021927   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1204 21:17:18.030158   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:17:18.030212   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:17:18.038704   75746 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:17:18.047518   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:18.157472   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:18.779212   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:18.992111   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:19.080195   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
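
Note: the `kubeadm init phase` sequence above regenerates, in order, the cluster certificates, the kubeconfig files that were reported missing at 21:17:17.968, the kubelet bootstrap configuration, the control-plane static pod manifests, and the local etcd manifest. The results can be spot-checked on the node with, for example (a sketch; these are the standard kubeadm output locations):

	sudo ls /etc/kubernetes/            # admin.conf, kubelet.conf, controller-manager.conf, scheduler.conf
	sudo ls /etc/kubernetes/manifests   # kube-apiserver, kube-controller-manager, kube-scheduler, etcd static pods
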
	I1204 21:17:19.185206   75746 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:17:19.185296   75746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:19.686192   75746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:20.186010   75746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:20.685422   75746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:21.185548   75746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:21.221082   75746 api_server.go:72] duration metric: took 2.035875276s to wait for apiserver process to appear ...
	I1204 21:17:21.221111   75746 api_server.go:88] waiting for apiserver healthz status ...
	I1204 21:17:21.221130   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:17:21.221582   75746 api_server.go:269] stopped: https://192.168.50.171:8444/healthz: Get "https://192.168.50.171:8444/healthz": dial tcp 192.168.50.171:8444: connect: connection refused
	I1204 21:17:21.722031   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:17:24.428658   75746 api_server.go:279] https://192.168.50.171:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1204 21:17:24.428710   75746 api_server.go:103] status: https://192.168.50.171:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1204 21:17:24.428730   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:17:24.469367   75746 api_server.go:279] https://192.168.50.171:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1204 21:17:24.469398   75746 api_server.go:103] status: https://192.168.50.171:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1204 21:17:24.721854   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:17:24.728276   75746 api_server.go:279] https://192.168.50.171:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:17:24.728306   75746 api_server.go:103] status: https://192.168.50.171:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:17:25.221658   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:17:25.226223   75746 api_server.go:279] https://192.168.50.171:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:17:25.226274   75746 api_server.go:103] status: https://192.168.50.171:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:17:25.722014   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:17:25.727726   75746 api_server.go:279] https://192.168.50.171:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:17:25.727764   75746 api_server.go:103] status: https://192.168.50.171:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:17:26.221331   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:17:26.226659   75746 api_server.go:279] https://192.168.50.171:8444/healthz returned 200:
	ok
	I1204 21:17:26.234549   75746 api_server.go:141] control plane version: v1.31.2
	I1204 21:17:26.234585   75746 api_server.go:131] duration metric: took 5.013466041s to wait for apiserver health ...
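Note: the health wait above is a plain poll: the /healthz endpoint is retried while it returns connection refused, 403 (anonymous access blocked until RBAC bootstrap finishes) or 500 (poststart hooks still failing), and the wait ends at the first 200 "ok". A rough curl equivalent, assuming an arbitrary retry budget (the 600×0.5s cap is illustrative):

    # Poll the apiserver healthz endpoint until it returns 200 "ok".
    # -k: the apiserver cert is not trusted by this host; -f: treat 403/500 as failure.
    url="https://192.168.50.171:8444/healthz"
    for _ in $(seq 1 600); do
        if curl -ksf --max-time 2 "$url" >/dev/null; then
            echo "apiserver healthy"; break
        fi
        sleep 0.5
    done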
	I1204 21:17:26.234596   75746 cni.go:84] Creating CNI manager for ""
	I1204 21:17:26.234605   75746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:17:26.236522   75746 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 21:17:21.489414   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:21.989078   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:22.488990   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:22.989053   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:23.489867   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:23.989164   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:24.489512   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:24.989912   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:25.489849   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:25.988925   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:22.066101   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:24.067073   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:26.565954   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:23.077909   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:23.078294   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:23.078332   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:23.078234   76934 retry.go:31] will retry after 2.199950593s: waiting for machine to come up
	I1204 21:17:25.280397   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:25.280766   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:25.280794   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:25.280713   76934 retry.go:31] will retry after 3.443879968s: waiting for machine to come up
	I1204 21:17:26.237773   75746 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 21:17:26.260416   75746 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
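Note: the 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. The snippet below is only an illustrative bridge CNI chain of the kind this step installs; the field values and subnet are assumptions, not the exact file minikube writes.

    # Illustrative bridge CNI config (values are assumptions, not the logged file).
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF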
	I1204 21:17:26.287032   75746 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 21:17:26.301607   75746 system_pods.go:59] 8 kube-system pods found
	I1204 21:17:26.301658   75746 system_pods.go:61] "coredns-7c65d6cfc9-8bn89" [ff71708b-97a0-44fd-8cc4-26a36e93919a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1204 21:17:26.301671   75746 system_pods.go:61] "etcd-default-k8s-diff-port-439360" [38ae5f77-f57b-4024-a2ba-1e83e08c303b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1204 21:17:26.301682   75746 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-439360" [47616d96-a85b-47d8-a944-1da01cf7bef6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1204 21:17:26.301693   75746 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-439360" [766c13c3-3bcb-4775-80cf-608e9b207a10] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1204 21:17:26.301703   75746 system_pods.go:61] "kube-proxy-tn2xl" [8485df8b-b984-45c1-8efc-3e910028071a] Running
	I1204 21:17:26.301713   75746 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-439360" [654e74eb-878c-4680-8b68-13bb788a781e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1204 21:17:26.301725   75746 system_pods.go:61] "metrics-server-6867b74b74-lbx5p" [ca850081-0045-4637-b4ac-262ad00ba6d2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:17:26.301731   75746 system_pods.go:61] "storage-provisioner" [b2c9285c-35f2-43b4-8468-17ecef9fe8fc] Running
	I1204 21:17:26.301742   75746 system_pods.go:74] duration metric: took 14.680372ms to wait for pod list to return data ...
	I1204 21:17:26.301756   75746 node_conditions.go:102] verifying NodePressure condition ...
	I1204 21:17:26.305647   75746 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 21:17:26.305680   75746 node_conditions.go:123] node cpu capacity is 2
	I1204 21:17:26.305695   75746 node_conditions.go:105] duration metric: took 3.930691ms to run NodePressure ...
	I1204 21:17:26.305716   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:26.563972   75746 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1204 21:17:26.573253   75746 kubeadm.go:739] kubelet initialised
	I1204 21:17:26.573273   75746 kubeadm.go:740] duration metric: took 9.267719ms waiting for restarted kubelet to initialise ...
	I1204 21:17:26.573281   75746 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:17:26.577507   75746 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-8bn89" in "kube-system" namespace to be "Ready" ...
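Note: the 4m0s wait that starts here watches the system-critical pods for the Ready condition. Roughly the same check can be made by hand with kubectl; the context name below is the profile name taken from the log, and the label selectors match two of the labels listed above:

    # Wait for system-critical pods in kube-system to report Ready (illustrative).
    kubectl --context default-k8s-diff-port-439360 -n kube-system \
        wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
    kubectl --context default-k8s-diff-port-439360 -n kube-system \
        wait pod -l component=kube-apiserver --for=condition=Ready --timeout=4m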
	I1204 21:17:26.489765   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:26.989037   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:27.489507   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:27.989848   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:28.489237   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:28.989067   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:29.488963   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:29.989855   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:30.489905   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:30.989109   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:29.065212   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:31.065889   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:28.726031   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:28.726400   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:28.726452   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:28.726364   76934 retry.go:31] will retry after 3.566067517s: waiting for machine to come up
	I1204 21:17:28.585182   75746 pod_ready.go:103] pod "coredns-7c65d6cfc9-8bn89" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:31.084886   75746 pod_ready.go:103] pod "coredns-7c65d6cfc9-8bn89" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:32.294584   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.295040   75012 main.go:141] libmachine: (no-preload-534766) Found IP for machine: 192.168.61.174
	I1204 21:17:32.295074   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has current primary IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.295086   75012 main.go:141] libmachine: (no-preload-534766) Reserving static IP address...
	I1204 21:17:32.295538   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "no-preload-534766", mac: "52:54:00:85:f1:d6", ip: "192.168.61.174"} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.295572   75012 main.go:141] libmachine: (no-preload-534766) Reserved static IP address: 192.168.61.174
	I1204 21:17:32.295590   75012 main.go:141] libmachine: (no-preload-534766) DBG | skip adding static IP to network mk-no-preload-534766 - found existing host DHCP lease matching {name: "no-preload-534766", mac: "52:54:00:85:f1:d6", ip: "192.168.61.174"}
	I1204 21:17:32.295607   75012 main.go:141] libmachine: (no-preload-534766) DBG | Getting to WaitForSSH function...
	I1204 21:17:32.295621   75012 main.go:141] libmachine: (no-preload-534766) Waiting for SSH to be available...
	I1204 21:17:32.297607   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.298000   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.298039   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.298174   75012 main.go:141] libmachine: (no-preload-534766) DBG | Using SSH client type: external
	I1204 21:17:32.298220   75012 main.go:141] libmachine: (no-preload-534766) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa (-rw-------)
	I1204 21:17:32.298259   75012 main.go:141] libmachine: (no-preload-534766) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 21:17:32.298278   75012 main.go:141] libmachine: (no-preload-534766) DBG | About to run SSH command:
	I1204 21:17:32.298286   75012 main.go:141] libmachine: (no-preload-534766) DBG | exit 0
	I1204 21:17:32.423157   75012 main.go:141] libmachine: (no-preload-534766) DBG | SSH cmd err, output: <nil>: 
	I1204 21:17:32.423564   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetConfigRaw
	I1204 21:17:32.424162   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetIP
	I1204 21:17:32.426685   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.427056   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.427078   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.427325   75012 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/config.json ...
	I1204 21:17:32.427589   75012 machine.go:93] provisionDockerMachine start ...
	I1204 21:17:32.427610   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:17:32.427837   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:32.430261   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.430551   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.430580   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.430724   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:32.430893   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:32.431039   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:32.431148   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:32.431327   75012 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:32.431548   75012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I1204 21:17:32.431564   75012 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 21:17:32.539672   75012 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 21:17:32.539721   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetMachineName
	I1204 21:17:32.539983   75012 buildroot.go:166] provisioning hostname "no-preload-534766"
	I1204 21:17:32.540014   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetMachineName
	I1204 21:17:32.540234   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:32.543046   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.543438   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.543488   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.543664   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:32.543853   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:32.544035   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:32.544158   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:32.544331   75012 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:32.544547   75012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I1204 21:17:32.544567   75012 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-534766 && echo "no-preload-534766" | sudo tee /etc/hostname
	I1204 21:17:32.665569   75012 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-534766
	
	I1204 21:17:32.665609   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:32.668482   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.668881   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.668908   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.669081   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:32.669297   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:32.669479   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:32.669634   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:32.669788   75012 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:32.669945   75012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I1204 21:17:32.669961   75012 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-534766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-534766/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-534766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 21:17:32.789462   75012 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 21:17:32.789510   75012 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 21:17:32.789535   75012 buildroot.go:174] setting up certificates
	I1204 21:17:32.789551   75012 provision.go:84] configureAuth start
	I1204 21:17:32.789568   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetMachineName
	I1204 21:17:32.789878   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetIP
	I1204 21:17:32.792564   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.792886   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.792919   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.793108   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:32.795197   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.795534   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.795569   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.795751   75012 provision.go:143] copyHostCerts
	I1204 21:17:32.795821   75012 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 21:17:32.795835   75012 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 21:17:32.795931   75012 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 21:17:32.796102   75012 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 21:17:32.796118   75012 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 21:17:32.796182   75012 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 21:17:32.796269   75012 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 21:17:32.796278   75012 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 21:17:32.796300   75012 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 21:17:32.796361   75012 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.no-preload-534766 san=[127.0.0.1 192.168.61.174 localhost minikube no-preload-534766]
	I1204 21:17:32.933050   75012 provision.go:177] copyRemoteCerts
	I1204 21:17:32.933117   75012 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 21:17:32.933146   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:32.936027   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.936384   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.936415   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.936604   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:32.936796   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:32.936952   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:32.937127   75012 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:17:33.022226   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 21:17:33.045693   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1204 21:17:33.069396   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
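Note: the "generating server cert" step above produces a server certificate signed by the minikube CA with the SANs listed in the log (127.0.0.1, 192.168.61.174, localhost, minikube, no-preload-534766). minikube does this in Go; the openssl commands below are only an equivalent sketch, with file names matching those in the log:

    # Illustrative openssl equivalent of the logged server cert generation.
    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem -out server.csr \
        -subj "/O=jenkins.no-preload-534766/CN=no-preload-534766"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
        -out server.pem -days 365 \
        -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.61.174,DNS:localhost,DNS:minikube,DNS:no-preload-534766")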
	I1204 21:17:33.094926   75012 provision.go:87] duration metric: took 305.358907ms to configureAuth
	I1204 21:17:33.094960   75012 buildroot.go:189] setting minikube options for container-runtime
	I1204 21:17:33.095150   75012 config.go:182] Loaded profile config "no-preload-534766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:17:33.095239   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:33.098446   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.098990   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:33.099019   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.099254   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:33.099504   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:33.099655   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:33.099789   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:33.099921   75012 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:33.100074   75012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I1204 21:17:33.100091   75012 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 21:17:33.323107   75012 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 21:17:33.323144   75012 machine.go:96] duration metric: took 895.535234ms to provisionDockerMachine
	I1204 21:17:33.323159   75012 start.go:293] postStartSetup for "no-preload-534766" (driver="kvm2")
	I1204 21:17:33.323169   75012 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 21:17:33.323185   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:17:33.323531   75012 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 21:17:33.323564   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:33.326678   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.327086   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:33.327119   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.327429   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:33.327661   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:33.327827   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:33.327994   75012 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:17:33.411005   75012 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 21:17:33.415701   75012 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 21:17:33.415730   75012 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 21:17:33.415806   75012 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 21:17:33.415879   75012 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 21:17:33.415968   75012 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 21:17:33.425560   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:17:33.450288   75012 start.go:296] duration metric: took 127.116826ms for postStartSetup
	I1204 21:17:33.450330   75012 fix.go:56] duration metric: took 21.394334199s for fixHost
	I1204 21:17:33.450351   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:33.453067   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.453416   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:33.453457   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.453641   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:33.453860   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:33.454049   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:33.454228   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:33.454423   75012 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:33.454621   75012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I1204 21:17:33.454634   75012 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 21:17:33.568277   75012 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733347053.524303417
	
	I1204 21:17:33.568303   75012 fix.go:216] guest clock: 1733347053.524303417
	I1204 21:17:33.568314   75012 fix.go:229] Guest: 2024-12-04 21:17:33.524303417 +0000 UTC Remote: 2024-12-04 21:17:33.450335419 +0000 UTC m=+361.455227272 (delta=73.967998ms)
	I1204 21:17:33.568360   75012 fix.go:200] guest clock delta is within tolerance: 73.967998ms
	I1204 21:17:33.568372   75012 start.go:83] releasing machines lock for "no-preload-534766", held for 21.512415434s
	I1204 21:17:33.568406   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:17:33.568691   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetIP
	I1204 21:17:33.571152   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.571565   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:33.571594   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.571744   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:17:33.572271   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:17:33.572456   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:17:33.572549   75012 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 21:17:33.572593   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:33.572689   75012 ssh_runner.go:195] Run: cat /version.json
	I1204 21:17:33.572717   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:33.575346   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.575691   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.575743   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:33.575773   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.575888   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:33.576065   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:33.576144   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:33.576173   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.576219   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:33.576323   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:33.576391   75012 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:17:33.576501   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:33.576650   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:33.576791   75012 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:17:33.683451   75012 ssh_runner.go:195] Run: systemctl --version
	I1204 21:17:33.689041   75012 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 21:17:33.833862   75012 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 21:17:33.839637   75012 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 21:17:33.839717   75012 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 21:17:33.858207   75012 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 21:17:33.858232   75012 start.go:495] detecting cgroup driver to use...
	I1204 21:17:33.858306   75012 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 21:17:33.876794   75012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 21:17:33.891207   75012 docker.go:217] disabling cri-docker service (if available) ...
	I1204 21:17:33.891280   75012 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 21:17:33.906769   75012 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 21:17:33.926433   75012 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 21:17:34.050681   75012 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 21:17:34.229329   75012 docker.go:233] disabling docker service ...
	I1204 21:17:34.229403   75012 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 21:17:34.243833   75012 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 21:17:34.256619   75012 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 21:17:34.387148   75012 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 21:17:34.522221   75012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 21:17:34.535505   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 21:17:34.553348   75012 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 21:17:34.553423   75012 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:34.564532   75012 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 21:17:34.564595   75012 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:34.574752   75012 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:34.584434   75012 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:34.594161   75012 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 21:17:34.604306   75012 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:34.615504   75012 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:34.633185   75012 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:34.643936   75012 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 21:17:34.653047   75012 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 21:17:34.653122   75012 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 21:17:34.666172   75012 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 21:17:34.675093   75012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:17:34.805178   75012 ssh_runner.go:195] Run: sudo systemctl restart crio
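Note: after the sed edits and the crio restart above, the drop-in /etc/crio/crio.conf.d/02-crio.conf should carry roughly the following settings. This is reconstructed from the logged commands, not dumped from the node, and the TOML section headers are assumptions:

    # Approximate CRI-O drop-in state the edits above converge on (illustrative).
    cat <<'EOF'
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    EOF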
	I1204 21:17:34.889962   75012 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 21:17:34.890037   75012 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 21:17:34.894648   75012 start.go:563] Will wait 60s for crictl version
	I1204 21:17:34.894699   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:34.898103   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 21:17:34.937886   75012 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 21:17:34.937962   75012 ssh_runner.go:195] Run: crio --version
	I1204 21:17:34.964363   75012 ssh_runner.go:195] Run: crio --version
	I1204 21:17:34.993490   75012 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 21:17:31.489534   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:31.989033   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:32.489372   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:32.989005   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:33.489869   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:33.989236   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:34.489170   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:34.989059   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:35.489909   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:35.989870   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
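The half-second cadence of the pgrep probes above is minikube waiting for a restarted kube-apiserver process to appear. A rough Go equivalent (illustrative; the real probe runs over SSH and honors the profile's wait timeout):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // apiserverRunning reports whether pgrep finds a matching process;
    // pgrep exits 0 only when at least one process matches the pattern.
    func apiserverRunning() bool {
    	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
    	deadline := time.Now().Add(60 * time.Second)
    	for time.Now().Before(deadline) {
    		if apiserverRunning() {
    			fmt.Println("kube-apiserver process found")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("gave up waiting for kube-apiserver")
    }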
	I1204 21:17:33.066070   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:35.066291   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:34.994846   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetIP
	I1204 21:17:34.998235   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:34.998720   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:34.998753   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:34.999035   75012 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1204 21:17:35.003082   75012 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
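The /etc/hosts one-liner above strips any existing host.minikube.internal entry, appends the gateway IP, and writes through a temp file before copying the result back with sudo. A rough local Go equivalent (illustrative only, without the sudo and temp-file handling):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	// Drop any stale mapping, keeping everything else untouched.
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		if !strings.HasSuffix(line, "\thost.minikube.internal") {
    			kept = append(kept, line)
    		}
    	}
    	// Append the current mapping; in practice the result is written to a
    	// temp file and copied over /etc/hosts with sudo, as in the log above.
    	kept = append(kept, "192.168.61.1\thost.minikube.internal")
    	fmt.Println(strings.Join(kept, "\n"))
    }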
	I1204 21:17:35.015163   75012 kubeadm.go:883] updating cluster {Name:no-preload-534766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-534766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.174 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 21:17:35.015286   75012 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 21:17:35.015331   75012 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:17:35.049054   75012 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1204 21:17:35.049081   75012 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1204 21:17:35.049156   75012 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:17:35.049214   75012 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1204 21:17:35.049239   75012 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1204 21:17:35.049291   75012 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:17:35.049172   75012 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:17:35.049217   75012 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:17:35.049159   75012 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:17:35.049220   75012 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:17:35.050579   75012 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:17:35.050648   75012 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1204 21:17:35.050659   75012 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:17:35.050667   75012 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:17:35.050676   75012 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1204 21:17:35.050741   75012 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:17:35.050757   75012 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:17:35.050874   75012 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:17:35.203766   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:17:35.211645   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1204 21:17:35.220184   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:17:35.223055   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:17:35.227332   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:17:35.232234   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1204 21:17:35.242447   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:17:35.298624   75012 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1204 21:17:35.298688   75012 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:17:35.298744   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:35.319397   75012 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1204 21:17:35.319447   75012 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1204 21:17:35.319501   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:35.390893   75012 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1204 21:17:35.390915   75012 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1204 21:17:35.390947   75012 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:17:35.390948   75012 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:17:35.390956   75012 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1204 21:17:35.390979   75012 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:17:35.390999   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:35.391022   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:35.390999   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:35.484125   75012 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1204 21:17:35.484169   75012 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:17:35.484201   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:17:35.484217   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:35.484271   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1204 21:17:35.484305   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:17:35.484330   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:17:35.484396   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:17:35.591277   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:17:35.591397   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:17:35.591450   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:17:35.595733   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1204 21:17:35.595762   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:17:35.595916   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:17:35.723710   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:17:35.723734   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:17:35.723780   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:17:35.723829   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1204 21:17:35.723876   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:17:35.726724   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:17:35.825238   75012 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1204 21:17:35.825353   75012 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1204 21:17:35.852024   75012 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1204 21:17:35.852035   75012 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1204 21:17:35.852146   75012 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1204 21:17:35.852173   75012 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1204 21:17:35.853696   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:17:35.853769   75012 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1204 21:17:35.853821   75012 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1204 21:17:35.853832   75012 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1204 21:17:35.853856   75012 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1204 21:17:35.853865   75012 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1204 21:17:35.853776   75012 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1204 21:17:35.853945   75012 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1204 21:17:35.857231   75012 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1204 21:17:35.858662   75012 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1204 21:17:36.032100   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:17:33.087169   75746 pod_ready.go:93] pod "coredns-7c65d6cfc9-8bn89" in "kube-system" namespace has status "Ready":"True"
	I1204 21:17:33.087197   75746 pod_ready.go:82] duration metric: took 6.509664084s for pod "coredns-7c65d6cfc9-8bn89" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:33.087211   75746 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:33.093283   75746 pod_ready.go:93] pod "etcd-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:17:33.093303   75746 pod_ready.go:82] duration metric: took 6.085079ms for pod "etcd-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:33.093312   75746 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:33.600666   75746 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:17:33.600693   75746 pod_ready.go:82] duration metric: took 507.373672ms for pod "kube-apiserver-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:33.600709   75746 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:35.607575   75746 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:37.608228   75746 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:36.489267   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:36.988973   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:37.489585   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:37.989309   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:38.489371   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:38.989360   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:39.489789   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:39.988900   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:40.489286   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:40.989034   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:37.564796   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:39.566599   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:38.344308   75012 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.490341001s)
	I1204 21:17:38.344349   75012 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1204 21:17:38.344365   75012 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.490487312s)
	I1204 21:17:38.344390   75012 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1204 21:17:38.344412   75012 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1204 21:17:38.344420   75012 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.490542246s)
	I1204 21:17:38.344448   75012 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1204 21:17:38.344455   75012 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1204 21:17:38.344374   75012 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2: (2.490653029s)
	I1204 21:17:38.344496   75012 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1204 21:17:38.344525   75012 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.312392686s)
	I1204 21:17:38.344565   75012 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1204 21:17:38.344602   75012 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:17:38.344638   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:38.344575   75012 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1204 21:17:38.350960   75012 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1204 21:17:40.219155   75012 ssh_runner.go:235] Completed: which crictl: (1.874490212s)
	I1204 21:17:40.219189   75012 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.874713743s)
	I1204 21:17:40.219214   75012 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1204 21:17:40.219246   75012 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1204 21:17:40.219318   75012 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1204 21:17:40.219273   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:17:40.254321   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:17:41.684466   75012 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.465119385s)
	I1204 21:17:41.684505   75012 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1204 21:17:41.684528   75012 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1204 21:17:41.684528   75012 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.430174579s)
	I1204 21:17:41.684583   75012 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1204 21:17:41.684591   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:17:41.722891   75012 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1204 21:17:41.723015   75012 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1204 21:17:39.608290   75746 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:40.107708   75746 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:17:40.107734   75746 pod_ready.go:82] duration metric: took 6.507016831s for pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:40.107748   75746 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-tn2xl" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:40.112808   75746 pod_ready.go:93] pod "kube-proxy-tn2xl" in "kube-system" namespace has status "Ready":"True"
	I1204 21:17:40.112828   75746 pod_ready.go:82] duration metric: took 5.070603ms for pod "kube-proxy-tn2xl" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:40.112839   75746 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:40.117288   75746 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:17:40.117310   75746 pod_ready.go:82] duration metric: took 4.462772ms for pod "kube-scheduler-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:40.117322   75746 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:42.124203   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:41.489491   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:41.989889   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:42.489098   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:42.988954   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:43.489592   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:43.989849   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:44.489924   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:44.989734   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:45.489097   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:45.988947   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:42.065722   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:44.564691   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:46.565747   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:45.306832   75012 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.583796373s)
	I1204 21:17:45.306872   75012 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1204 21:17:45.306945   75012 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.622338759s)
	I1204 21:17:45.306971   75012 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1204 21:17:45.307000   75012 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1204 21:17:45.307064   75012 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1204 21:17:44.624419   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:47.123760   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:46.489924   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:46.989100   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:47.489931   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:47.988925   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:48.489244   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:48.989937   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:49.489048   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:49.989699   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:50.489518   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:50.989032   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:49.065268   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:51.565541   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:47.163771   75012 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.856684542s)
	I1204 21:17:47.163798   75012 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1204 21:17:47.163823   75012 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1204 21:17:47.163885   75012 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1204 21:17:49.222699   75012 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.058784634s)
	I1204 21:17:49.222741   75012 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1204 21:17:49.222773   75012 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1204 21:17:49.222826   75012 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1204 21:17:49.870242   75012 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1204 21:17:49.870292   75012 cache_images.go:123] Successfully loaded all cached images
	I1204 21:17:49.870302   75012 cache_images.go:92] duration metric: took 14.821207564s to LoadCachedImages
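The 14.8s LoadCachedImages phase above follows one pattern per image: check the CRI-O store with podman image inspect, remove any stale tag with crictl rmi, then podman load the tarball previously copied to /var/lib/minikube/images. A condensed Go sketch of that flow (paths and the helper name ensureImage are illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // ensureImage loads a cached image tarball only when the image is not
    // already present in the CRI-O store.
    func ensureImage(image, tarball string) error {
    	// "sudo podman image inspect" exits non-zero when the image is absent.
    	if exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run() == nil {
    		return nil // already present, nothing to transfer
    	}
    	// Best-effort removal of a stale or partial tag, mirroring "crictl rmi".
    	_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run()
    	// Load the cached tarball copied into the VM beforehand.
    	if out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput(); err != nil {
    		return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
    	}
    	return nil
    }

    func main() {
    	err := ensureImage("registry.k8s.io/kube-apiserver:v1.31.2",
    		"/var/lib/minikube/images/kube-apiserver_v1.31.2")
    	fmt.Println(err)
    }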
	I1204 21:17:49.870320   75012 kubeadm.go:934] updating node { 192.168.61.174 8443 v1.31.2 crio true true} ...
	I1204 21:17:49.870483   75012 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-534766 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-534766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 21:17:49.870571   75012 ssh_runner.go:195] Run: crio config
	I1204 21:17:49.925276   75012 cni.go:84] Creating CNI manager for ""
	I1204 21:17:49.925298   75012 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:17:49.925308   75012 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 21:17:49.925326   75012 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.174 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-534766 NodeName:no-preload-534766 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 21:17:49.925440   75012 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-534766"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.174"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.174"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1204 21:17:49.925505   75012 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 21:17:49.934691   75012 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 21:17:49.934766   75012 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 21:17:49.942998   75012 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1204 21:17:49.958605   75012 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 21:17:49.973770   75012 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
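The 2297-byte kubeadm.yaml.new written above is the rendered form of the config dump logged earlier; minikube fills in per-cluster values such as the node IP and API server port when generating it. A minimal, hypothetical sketch of rendering a fragment like that with Go's text/template (not minikube's actual template):

    package main

    import (
    	"os"
    	"text/template"
    )

    // frag is a small stand-in for the generated InitConfiguration section;
    // the field names NodeIP and Port are assumptions for this sketch.
    const frag = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.Port}}
    `

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(frag))
    	_ = t.Execute(os.Stdout, struct {
    		NodeIP string
    		Port   int
    	}{NodeIP: "192.168.61.174", Port: 8443})
    }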
	I1204 21:17:49.989037   75012 ssh_runner.go:195] Run: grep 192.168.61.174	control-plane.minikube.internal$ /etc/hosts
	I1204 21:17:49.992788   75012 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
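This is the same /etc/hosts rewrite used earlier for host.minikube.internal, here pointing control-plane.minikube.internal at the node's own IP so kubeconfigs that reference the control-plane name resolve locally (see the sketch after the earlier occurrence).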
	I1204 21:17:50.004011   75012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:17:50.118056   75012 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:17:50.136689   75012 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766 for IP: 192.168.61.174
	I1204 21:17:50.136717   75012 certs.go:194] generating shared ca certs ...
	I1204 21:17:50.136739   75012 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:17:50.136937   75012 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 21:17:50.136992   75012 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 21:17:50.137007   75012 certs.go:256] generating profile certs ...
	I1204 21:17:50.137129   75012 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/client.key
	I1204 21:17:50.137230   75012 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/apiserver.key.dbe51058
	I1204 21:17:50.137275   75012 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/proxy-client.key
	I1204 21:17:50.137393   75012 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 21:17:50.137422   75012 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 21:17:50.137433   75012 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 21:17:50.137463   75012 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 21:17:50.137484   75012 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 21:17:50.137505   75012 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 21:17:50.137548   75012 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:17:50.138146   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 21:17:50.168457   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 21:17:50.203050   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 21:17:50.227957   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 21:17:50.255463   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1204 21:17:50.283905   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1204 21:17:50.306300   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 21:17:50.328965   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 21:17:50.352366   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 21:17:50.373857   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 21:17:50.396406   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 21:17:50.417969   75012 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 21:17:50.433588   75012 ssh_runner.go:195] Run: openssl version
	I1204 21:17:50.438874   75012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 21:17:50.448896   75012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 21:17:50.453227   75012 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 21:17:50.453301   75012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 21:17:50.458793   75012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 21:17:50.468569   75012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 21:17:50.478055   75012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:17:50.482258   75012 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:17:50.482310   75012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:17:50.487402   75012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 21:17:50.500597   75012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 21:17:50.511367   75012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 21:17:50.516355   75012 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 21:17:50.516415   75012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 21:17:50.522233   75012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 21:17:50.532163   75012 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 21:17:50.536644   75012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 21:17:50.542343   75012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 21:17:50.547915   75012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 21:17:50.553464   75012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 21:17:50.559223   75012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 21:17:50.566119   75012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1204 21:17:50.571988   75012 kubeadm.go:392] StartCluster: {Name:no-preload-534766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-534766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.174 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:17:50.572068   75012 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 21:17:50.572135   75012 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:17:50.608793   75012 cri.go:89] found id: ""
	I1204 21:17:50.608879   75012 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 21:17:50.620108   75012 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1204 21:17:50.620133   75012 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1204 21:17:50.620210   75012 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1204 21:17:50.629506   75012 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1204 21:17:50.630887   75012 kubeconfig.go:125] found "no-preload-534766" server: "https://192.168.61.174:8443"
	I1204 21:17:50.633122   75012 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1204 21:17:50.642414   75012 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.174
	I1204 21:17:50.642453   75012 kubeadm.go:1160] stopping kube-system containers ...
	I1204 21:17:50.642468   75012 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1204 21:17:50.642533   75012 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:17:50.681325   75012 cri.go:89] found id: ""
	I1204 21:17:50.681393   75012 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1204 21:17:50.699577   75012 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:17:50.709090   75012 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:17:50.709108   75012 kubeadm.go:157] found existing configuration files:
	
	I1204 21:17:50.709152   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:17:50.717901   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:17:50.717983   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:17:50.727175   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:17:50.735929   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:17:50.736002   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:17:50.744954   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:17:50.753257   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:17:50.753306   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:17:50.762163   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:17:50.770113   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:17:50.770163   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
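The four grep/rm cycles above implement a simple rule: keep an existing kubeconfig under /etc/kubernetes only if it already points at control-plane.minikube.internal:8443, otherwise delete it so the kubeadm init phases that follow can regenerate it. A compact Go sketch of the same loop (illustrative; the real code runs these commands over SSH):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
    	for _, f := range files {
    		path := "/etc/kubernetes/" + f
    		// grep exits non-zero if the endpoint is missing or the file does not exist.
    		if exec.Command("sudo", "grep", "https://control-plane.minikube.internal:8443", path).Run() != nil {
    			fmt.Println("removing stale", path)
    			_ = exec.Command("sudo", "rm", "-f", path).Run()
    		}
    	}
    }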
	I1204 21:17:50.778937   75012 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:17:50.787853   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:50.902775   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:51.481273   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:51.689126   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:51.770117   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:51.859903   75012 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:17:51.859993   75012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:49.623769   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:51.624431   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:51.489287   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:51.989952   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:52.489428   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:52.988991   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:53.489424   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:53.989785   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:54.488957   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:54.989777   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:55.489738   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:55.989144   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:52.360655   75012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:52.860583   75012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:52.877280   75012 api_server.go:72] duration metric: took 1.017376864s to wait for apiserver process to appear ...
	I1204 21:17:52.877337   75012 api_server.go:88] waiting for apiserver healthz status ...
	I1204 21:17:52.877365   75012 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I1204 21:17:55.649083   75012 api_server.go:279] https://192.168.61.174:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:17:55.649115   75012 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:17:55.649144   75012 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I1204 21:17:55.655316   75012 api_server.go:279] https://192.168.61.174:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:17:55.655347   75012 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:17:55.877569   75012 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I1204 21:17:55.882206   75012 api_server.go:279] https://192.168.61.174:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:17:55.882235   75012 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:17:56.377778   75012 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I1204 21:17:56.385077   75012 api_server.go:279] https://192.168.61.174:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:17:56.385106   75012 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:17:56.877526   75012 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I1204 21:17:56.882072   75012 api_server.go:279] https://192.168.61.174:8443/healthz returned 200:
	ok
	I1204 21:17:56.890468   75012 api_server.go:141] control plane version: v1.31.2
	I1204 21:17:56.890494   75012 api_server.go:131] duration metric: took 4.013149625s to wait for apiserver health ...
	I1204 21:17:56.890503   75012 cni.go:84] Creating CNI manager for ""
	I1204 21:17:56.890509   75012 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:17:56.892501   75012 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 21:17:53.565824   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:56.064759   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:56.893859   75012 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 21:17:56.903947   75012 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1204 21:17:56.946638   75012 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 21:17:56.965137   75012 system_pods.go:59] 8 kube-system pods found
	I1204 21:17:56.965182   75012 system_pods.go:61] "coredns-7c65d6cfc9-kz2h6" [cf1cadfd-b230-48e0-8b3a-e082fed911a8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1204 21:17:56.965192   75012 system_pods.go:61] "etcd-no-preload-534766" [4150ee73-7ae8-40c0-a259-87375d6e809c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1204 21:17:56.965206   75012 system_pods.go:61] "kube-apiserver-no-preload-534766" [28c85f04-e634-48d2-a996-a1cb3ffb18cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1204 21:17:56.965215   75012 system_pods.go:61] "kube-controller-manager-no-preload-534766" [237872b9-1c2a-4c3e-b26a-d2581d08c936] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1204 21:17:56.965223   75012 system_pods.go:61] "kube-proxy-zb946" [871adaff-d1f6-4f8a-a7db-ec3f861bd9e3] Running
	I1204 21:17:56.965232   75012 system_pods.go:61] "kube-scheduler-no-preload-534766" [b00444c4-8f8e-4c76-a74f-9a57c91cb10d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1204 21:17:56.965240   75012 system_pods.go:61] "metrics-server-6867b74b74-wl8gw" [d7942614-93b1-4707-b471-a0dd38c96c54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:17:56.965246   75012 system_pods.go:61] "storage-provisioner" [062f6e56-6b2d-4ac4-acfd-881ff5171396] Running
	I1204 21:17:56.965254   75012 system_pods.go:74] duration metric: took 18.584748ms to wait for pod list to return data ...
	I1204 21:17:56.965269   75012 node_conditions.go:102] verifying NodePressure condition ...
	I1204 21:17:56.969187   75012 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 21:17:56.969221   75012 node_conditions.go:123] node cpu capacity is 2
	I1204 21:17:56.969232   75012 node_conditions.go:105] duration metric: took 3.958803ms to run NodePressure ...
	I1204 21:17:56.969248   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:53.625414   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:56.123857   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:56.489461   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:56.988952   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:57.489626   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:57.989474   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:58.489775   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:58.989218   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:59.489030   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:59.989163   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:00.489738   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:00.989048   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:00.989130   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:01.025049   75464 cri.go:89] found id: ""
	I1204 21:18:01.025100   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.025112   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:01.025124   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:01.025188   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:01.056420   75464 cri.go:89] found id: ""
	I1204 21:18:01.056444   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.056451   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:01.056456   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:01.056512   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:01.090847   75464 cri.go:89] found id: ""
	I1204 21:18:01.090872   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.090882   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:01.090889   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:01.090948   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:01.125984   75464 cri.go:89] found id: ""
	I1204 21:18:01.126013   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.126022   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:01.126030   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:01.126088   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:01.160828   75464 cri.go:89] found id: ""
	I1204 21:18:01.160856   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.160866   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:01.160873   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:01.160930   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:01.192601   75464 cri.go:89] found id: ""
	I1204 21:18:01.192629   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.192641   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:01.192649   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:01.192712   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:01.223093   75464 cri.go:89] found id: ""
	I1204 21:18:01.223119   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.223129   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:01.223136   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:01.223199   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:01.252668   75464 cri.go:89] found id: ""
	I1204 21:18:01.252692   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.252702   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:01.252713   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:01.252733   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 21:17:58.064895   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:00.065648   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:57.242821   75012 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1204 21:17:57.246805   75012 kubeadm.go:739] kubelet initialised
	I1204 21:17:57.246823   75012 kubeadm.go:740] duration metric: took 3.979496ms waiting for restarted kubelet to initialise ...
	I1204 21:17:57.246831   75012 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:17:57.250966   75012 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:57.254870   75012 pod_ready.go:98] node "no-preload-534766" hosting pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.254889   75012 pod_ready.go:82] duration metric: took 3.903445ms for pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace to be "Ready" ...
	E1204 21:17:57.254897   75012 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-534766" hosting pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.254903   75012 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:57.258465   75012 pod_ready.go:98] node "no-preload-534766" hosting pod "etcd-no-preload-534766" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.258484   75012 pod_ready.go:82] duration metric: took 3.574981ms for pod "etcd-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	E1204 21:17:57.258497   75012 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-534766" hosting pod "etcd-no-preload-534766" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.258503   75012 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:57.261881   75012 pod_ready.go:98] node "no-preload-534766" hosting pod "kube-apiserver-no-preload-534766" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.261896   75012 pod_ready.go:82] duration metric: took 3.388572ms for pod "kube-apiserver-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	E1204 21:17:57.261903   75012 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-534766" hosting pod "kube-apiserver-no-preload-534766" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.261908   75012 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:57.349579   75012 pod_ready.go:98] node "no-preload-534766" hosting pod "kube-controller-manager-no-preload-534766" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.349603   75012 pod_ready.go:82] duration metric: took 87.687706ms for pod "kube-controller-manager-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	E1204 21:17:57.349611   75012 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-534766" hosting pod "kube-controller-manager-no-preload-534766" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.349617   75012 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-zb946" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:57.751064   75012 pod_ready.go:93] pod "kube-proxy-zb946" in "kube-system" namespace has status "Ready":"True"
	I1204 21:17:57.751088   75012 pod_ready.go:82] duration metric: took 401.46314ms for pod "kube-proxy-zb946" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:57.751099   75012 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:59.756578   75012 pod_ready.go:103] pod "kube-scheduler-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:01.759056   75012 pod_ready.go:103] pod "kube-scheduler-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:58.125703   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:00.622314   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:02.624045   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	W1204 21:18:01.365301   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:01.365334   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:01.365348   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:01.440474   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:01.440503   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:01.475783   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:01.475815   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:01.525762   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:01.525791   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:04.038867   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:04.050789   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:04.050856   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:04.083319   75464 cri.go:89] found id: ""
	I1204 21:18:04.083345   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.083354   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:04.083360   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:04.083442   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:04.119555   75464 cri.go:89] found id: ""
	I1204 21:18:04.119584   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.119595   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:04.119602   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:04.119661   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:04.152499   75464 cri.go:89] found id: ""
	I1204 21:18:04.152529   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.152538   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:04.152544   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:04.152592   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:04.184678   75464 cri.go:89] found id: ""
	I1204 21:18:04.184705   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.184716   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:04.184724   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:04.184784   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:04.220006   75464 cri.go:89] found id: ""
	I1204 21:18:04.220038   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.220050   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:04.220058   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:04.220121   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:04.254841   75464 cri.go:89] found id: ""
	I1204 21:18:04.254871   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.254880   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:04.254887   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:04.254954   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:04.289126   75464 cri.go:89] found id: ""
	I1204 21:18:04.289163   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.289175   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:04.289189   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:04.289255   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:04.323036   75464 cri.go:89] found id: ""
	I1204 21:18:04.323067   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.323077   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:04.323089   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:04.323103   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:04.371548   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:04.371585   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:04.384651   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:04.384681   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:04.452247   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:04.452273   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:04.452288   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:04.527924   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:04.527965   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:02.564676   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:04.566721   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:04.260269   75012 pod_ready.go:103] pod "kube-scheduler-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:06.757334   75012 pod_ready.go:103] pod "kube-scheduler-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:05.123833   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:07.124130   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:07.100780   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:07.113549   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:07.113617   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:07.150930   75464 cri.go:89] found id: ""
	I1204 21:18:07.150964   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.150976   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:07.150984   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:07.151046   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:07.185223   75464 cri.go:89] found id: ""
	I1204 21:18:07.185254   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.185264   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:07.185271   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:07.185332   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:07.222423   75464 cri.go:89] found id: ""
	I1204 21:18:07.222449   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.222458   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:07.222463   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:07.222526   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:07.258926   75464 cri.go:89] found id: ""
	I1204 21:18:07.258952   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.258960   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:07.258966   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:07.259022   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:07.292424   75464 cri.go:89] found id: ""
	I1204 21:18:07.292467   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.292478   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:07.292505   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:07.292566   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:07.323354   75464 cri.go:89] found id: ""
	I1204 21:18:07.323397   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.323409   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:07.323416   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:07.323462   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:07.352085   75464 cri.go:89] found id: ""
	I1204 21:18:07.352106   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.352114   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:07.352121   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:07.352177   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:07.383335   75464 cri.go:89] found id: ""
	I1204 21:18:07.383364   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.383386   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:07.383397   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:07.383410   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:07.469409   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:07.469440   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:07.508442   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:07.508468   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:07.555103   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:07.555133   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:07.568938   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:07.568965   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:07.632515   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:10.133153   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:10.146482   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:10.146542   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:10.178660   75464 cri.go:89] found id: ""
	I1204 21:18:10.178694   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.178706   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:10.178714   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:10.178768   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:10.207815   75464 cri.go:89] found id: ""
	I1204 21:18:10.207836   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.207843   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:10.207849   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:10.207893   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:10.246253   75464 cri.go:89] found id: ""
	I1204 21:18:10.246283   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.246300   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:10.246307   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:10.246371   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:10.296820   75464 cri.go:89] found id: ""
	I1204 21:18:10.296862   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.296873   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:10.296881   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:10.296941   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:10.341855   75464 cri.go:89] found id: ""
	I1204 21:18:10.341885   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.341896   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:10.341904   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:10.341977   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:10.370283   75464 cri.go:89] found id: ""
	I1204 21:18:10.370311   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.370319   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:10.370324   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:10.370382   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:10.401149   75464 cri.go:89] found id: ""
	I1204 21:18:10.401177   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.401187   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:10.401195   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:10.401249   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:10.436026   75464 cri.go:89] found id: ""
	I1204 21:18:10.436058   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.436068   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:10.436082   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:10.436096   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:10.488499   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:10.488534   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:10.502316   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:10.502345   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:10.577694   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:10.577727   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:10.577754   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:10.657801   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:10.657835   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:07.064613   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:09.564473   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:09.257032   75012 pod_ready.go:103] pod "kube-scheduler-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:11.758214   75012 pod_ready.go:93] pod "kube-scheduler-no-preload-534766" in "kube-system" namespace has status "Ready":"True"
	I1204 21:18:11.758241   75012 pod_ready.go:82] duration metric: took 14.007134999s for pod "kube-scheduler-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:18:11.758255   75012 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace to be "Ready" ...
	I1204 21:18:09.623451   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:11.624433   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:13.195044   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:13.208486   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:13.208540   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:13.250608   75464 cri.go:89] found id: ""
	I1204 21:18:13.250632   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.250643   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:13.250650   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:13.250710   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:13.280897   75464 cri.go:89] found id: ""
	I1204 21:18:13.280922   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.280933   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:13.280940   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:13.281047   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:13.311664   75464 cri.go:89] found id: ""
	I1204 21:18:13.311686   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.311696   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:13.311702   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:13.311759   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:13.341158   75464 cri.go:89] found id: ""
	I1204 21:18:13.341187   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.341199   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:13.341206   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:13.341261   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:13.371887   75464 cri.go:89] found id: ""
	I1204 21:18:13.371908   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.371915   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:13.371922   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:13.371968   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:13.403036   75464 cri.go:89] found id: ""
	I1204 21:18:13.403064   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.403072   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:13.403077   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:13.403123   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:13.440657   75464 cri.go:89] found id: ""
	I1204 21:18:13.440682   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.440689   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:13.440694   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:13.440738   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:13.478384   75464 cri.go:89] found id: ""
	I1204 21:18:13.478413   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.478421   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:13.478430   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:13.478442   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:13.533364   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:13.533405   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:13.546299   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:13.546338   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:13.617067   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:13.617092   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:13.617108   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:13.697323   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:13.697355   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:16.235494   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:16.248551   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:16.248615   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:16.286875   75464 cri.go:89] found id: ""
	I1204 21:18:16.286904   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.286915   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:16.286922   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:16.286986   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:12.064198   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:14.565965   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:13.764062   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:15.764749   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:14.122381   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:16.123985   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:16.325441   75464 cri.go:89] found id: ""
	I1204 21:18:16.325469   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.325481   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:16.325486   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:16.325544   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:16.361896   75464 cri.go:89] found id: ""
	I1204 21:18:16.361919   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.361926   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:16.361932   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:16.361994   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:16.394290   75464 cri.go:89] found id: ""
	I1204 21:18:16.394315   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.394322   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:16.394328   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:16.394377   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:16.429685   75464 cri.go:89] found id: ""
	I1204 21:18:16.429713   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.429724   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:16.429731   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:16.429807   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:16.459942   75464 cri.go:89] found id: ""
	I1204 21:18:16.459982   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.459993   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:16.460000   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:16.460065   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:16.488957   75464 cri.go:89] found id: ""
	I1204 21:18:16.488982   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.488992   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:16.489005   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:16.489060   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:16.518311   75464 cri.go:89] found id: ""
	I1204 21:18:16.518346   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.518357   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:16.518369   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:16.518382   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:16.569753   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:16.569784   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:16.583689   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:16.583721   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:16.650086   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:16.650107   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:16.650120   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:16.732000   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:16.732046   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:19.270288   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:19.283231   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:19.283322   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:19.320680   75464 cri.go:89] found id: ""
	I1204 21:18:19.320712   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.320724   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:19.320732   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:19.320799   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:19.358318   75464 cri.go:89] found id: ""
	I1204 21:18:19.358352   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.358363   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:19.358370   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:19.358431   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:19.391181   75464 cri.go:89] found id: ""
	I1204 21:18:19.391208   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.391218   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:19.391224   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:19.391285   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:19.422319   75464 cri.go:89] found id: ""
	I1204 21:18:19.422345   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.422355   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:19.422362   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:19.422422   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:19.452909   75464 cri.go:89] found id: ""
	I1204 21:18:19.452941   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.452952   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:19.452960   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:19.453017   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:19.483548   75464 cri.go:89] found id: ""
	I1204 21:18:19.483582   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.483592   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:19.483600   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:19.483666   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:19.518776   75464 cri.go:89] found id: ""
	I1204 21:18:19.518810   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.518821   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:19.518828   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:19.518889   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:19.552455   75464 cri.go:89] found id: ""
	I1204 21:18:19.552487   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.552500   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:19.552513   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:19.552527   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:19.567348   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:19.567397   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:19.640782   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:19.640803   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:19.640815   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:19.721369   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:19.721400   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:19.765558   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:19.765590   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:17.065011   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:19.065236   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:21.565950   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:17.764887   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:19.766264   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:18.125223   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:20.623183   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:22.623901   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:22.315311   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:22.327974   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:22.328053   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:22.361960   75464 cri.go:89] found id: ""
	I1204 21:18:22.361984   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.361995   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:22.362002   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:22.362056   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:22.393481   75464 cri.go:89] found id: ""
	I1204 21:18:22.393506   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.393514   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:22.393520   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:22.393570   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:22.424233   75464 cri.go:89] found id: ""
	I1204 21:18:22.424261   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.424273   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:22.424280   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:22.424335   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:22.454307   75464 cri.go:89] found id: ""
	I1204 21:18:22.454335   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.454346   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:22.454354   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:22.454405   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:22.485880   75464 cri.go:89] found id: ""
	I1204 21:18:22.485905   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.485913   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:22.485918   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:22.485971   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:22.522382   75464 cri.go:89] found id: ""
	I1204 21:18:22.522408   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.522416   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:22.522421   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:22.522475   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:22.555179   75464 cri.go:89] found id: ""
	I1204 21:18:22.555202   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.555210   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:22.555215   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:22.555266   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:22.588587   75464 cri.go:89] found id: ""
	I1204 21:18:22.588608   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.588615   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:22.588622   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:22.588632   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:22.640369   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:22.640393   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:22.652322   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:22.652342   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:22.716150   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:22.716175   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:22.716195   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:22.792723   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:22.792749   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:25.329963   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:25.342514   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:25.342563   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:25.374518   75464 cri.go:89] found id: ""
	I1204 21:18:25.374543   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.374555   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:25.374562   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:25.374620   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:25.405479   75464 cri.go:89] found id: ""
	I1204 21:18:25.405520   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.405531   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:25.405538   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:25.405601   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:25.436844   75464 cri.go:89] found id: ""
	I1204 21:18:25.436867   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.436877   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:25.436884   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:25.436943   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:25.468887   75464 cri.go:89] found id: ""
	I1204 21:18:25.468910   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.468917   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:25.468923   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:25.468977   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:25.504326   75464 cri.go:89] found id: ""
	I1204 21:18:25.504348   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.504355   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:25.504361   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:25.504410   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:25.542531   75464 cri.go:89] found id: ""
	I1204 21:18:25.542552   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.542560   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:25.542566   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:25.542626   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:25.576293   75464 cri.go:89] found id: ""
	I1204 21:18:25.576316   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.576330   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:25.576338   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:25.576389   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:25.609662   75464 cri.go:89] found id: ""
	I1204 21:18:25.609692   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.609700   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:25.609708   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:25.609724   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:25.665411   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:25.665446   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:25.680149   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:25.680183   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:25.751100   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:25.751123   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:25.751140   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:25.838913   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:25.838952   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:24.065487   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:26.565568   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:22.264581   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:24.268000   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:26.764294   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:25.123981   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:27.125094   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:28.379209   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:28.392708   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:28.392771   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:28.426519   75464 cri.go:89] found id: ""
	I1204 21:18:28.426547   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.426555   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:28.426561   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:28.426608   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:28.459648   75464 cri.go:89] found id: ""
	I1204 21:18:28.459678   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.459689   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:28.459696   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:28.459757   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:28.489982   75464 cri.go:89] found id: ""
	I1204 21:18:28.490010   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.490021   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:28.490029   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:28.490101   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:28.525203   75464 cri.go:89] found id: ""
	I1204 21:18:28.525228   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.525235   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:28.525240   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:28.525285   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:28.554808   75464 cri.go:89] found id: ""
	I1204 21:18:28.554836   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.554845   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:28.554850   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:28.554911   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:28.586406   75464 cri.go:89] found id: ""
	I1204 21:18:28.586427   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.586434   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:28.586441   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:28.586484   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:28.622419   75464 cri.go:89] found id: ""
	I1204 21:18:28.622444   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.622455   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:28.622462   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:28.622520   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:28.651604   75464 cri.go:89] found id: ""
	I1204 21:18:28.651625   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.651632   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:28.651639   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:28.651654   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:28.714430   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:28.714458   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:28.714473   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:28.791444   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:28.791472   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:28.827808   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:28.827831   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:28.875308   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:28.875336   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:28.566277   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:30.566465   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:28.765108   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:30.765282   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:29.624139   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:31.624944   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:31.388578   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:31.401539   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:31.401598   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:31.443462   75464 cri.go:89] found id: ""
	I1204 21:18:31.443496   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.443504   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:31.443509   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:31.443557   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:31.482522   75464 cri.go:89] found id: ""
	I1204 21:18:31.482548   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.482559   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:31.482568   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:31.482623   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:31.520579   75464 cri.go:89] found id: ""
	I1204 21:18:31.520609   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.520618   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:31.520624   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:31.520684   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:31.559637   75464 cri.go:89] found id: ""
	I1204 21:18:31.559683   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.559692   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:31.559699   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:31.559761   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:31.592633   75464 cri.go:89] found id: ""
	I1204 21:18:31.592665   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.592677   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:31.592685   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:31.592748   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:31.627002   75464 cri.go:89] found id: ""
	I1204 21:18:31.627022   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.627029   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:31.627035   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:31.627083   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:31.663333   75464 cri.go:89] found id: ""
	I1204 21:18:31.663380   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.663392   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:31.663400   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:31.663465   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:31.697813   75464 cri.go:89] found id: ""
	I1204 21:18:31.697848   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.697860   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:31.697869   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:31.697882   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:31.747666   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:31.747701   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:31.761371   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:31.761402   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:31.831098   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:31.831123   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:31.831143   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:31.912161   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:31.912199   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:34.450322   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:34.463442   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:34.463503   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:34.497333   75464 cri.go:89] found id: ""
	I1204 21:18:34.497363   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.497371   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:34.497377   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:34.497449   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:34.531057   75464 cri.go:89] found id: ""
	I1204 21:18:34.531093   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.531105   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:34.531113   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:34.531180   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:34.566899   75464 cri.go:89] found id: ""
	I1204 21:18:34.566926   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.566934   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:34.566940   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:34.566989   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:34.600393   75464 cri.go:89] found id: ""
	I1204 21:18:34.600422   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.600430   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:34.600436   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:34.600503   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:34.636027   75464 cri.go:89] found id: ""
	I1204 21:18:34.636060   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.636072   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:34.636082   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:34.636159   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:34.670624   75464 cri.go:89] found id: ""
	I1204 21:18:34.670650   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.670658   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:34.670666   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:34.670727   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:34.702209   75464 cri.go:89] found id: ""
	I1204 21:18:34.702241   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.702253   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:34.702261   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:34.702330   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:34.733135   75464 cri.go:89] found id: ""
	I1204 21:18:34.733156   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.733174   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:34.733191   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:34.733207   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:34.768969   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:34.768993   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:34.816493   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:34.816531   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:34.829450   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:34.829476   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:34.897968   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:34.898000   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:34.898018   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:32.566614   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:35.064944   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:33.264871   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:35.265285   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:33.625223   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:36.123006   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:37.477937   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:37.491778   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:37.491856   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:37.529962   75464 cri.go:89] found id: ""
	I1204 21:18:37.529995   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.530005   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:37.530013   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:37.530081   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:37.564769   75464 cri.go:89] found id: ""
	I1204 21:18:37.564794   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.564805   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:37.564813   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:37.564879   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:37.601680   75464 cri.go:89] found id: ""
	I1204 21:18:37.601708   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.601720   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:37.601726   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:37.601796   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:37.637221   75464 cri.go:89] found id: ""
	I1204 21:18:37.637247   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.637255   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:37.637261   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:37.637326   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:37.673103   75464 cri.go:89] found id: ""
	I1204 21:18:37.673127   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.673135   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:37.673140   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:37.673200   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:37.710108   75464 cri.go:89] found id: ""
	I1204 21:18:37.710134   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.710147   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:37.710154   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:37.710216   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:37.741506   75464 cri.go:89] found id: ""
	I1204 21:18:37.741530   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.741538   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:37.741544   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:37.741596   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:37.775320   75464 cri.go:89] found id: ""
	I1204 21:18:37.775343   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.775350   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:37.775358   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:37.775389   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:37.839591   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:37.839610   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:37.839633   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:37.915174   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:37.915216   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:37.958900   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:37.958930   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:38.010383   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:38.010418   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:40.525306   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:40.537648   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:40.537706   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:40.573932   75464 cri.go:89] found id: ""
	I1204 21:18:40.573962   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.573973   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:40.573980   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:40.574041   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:40.603917   75464 cri.go:89] found id: ""
	I1204 21:18:40.603943   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.603952   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:40.603961   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:40.604018   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:40.636601   75464 cri.go:89] found id: ""
	I1204 21:18:40.636630   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.636641   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:40.636649   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:40.636710   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:40.673040   75464 cri.go:89] found id: ""
	I1204 21:18:40.673073   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.673085   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:40.673093   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:40.673158   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:40.705330   75464 cri.go:89] found id: ""
	I1204 21:18:40.705357   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.705364   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:40.705371   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:40.705434   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:40.738099   75464 cri.go:89] found id: ""
	I1204 21:18:40.738123   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.738130   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:40.738137   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:40.738184   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:40.770558   75464 cri.go:89] found id: ""
	I1204 21:18:40.770583   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.770590   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:40.770596   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:40.770656   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:40.803461   75464 cri.go:89] found id: ""
	I1204 21:18:40.803489   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.803501   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:40.803512   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:40.803529   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:40.852684   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:40.852726   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:40.865768   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:40.865795   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:40.932542   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:40.932569   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:40.932587   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:41.013378   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:41.013419   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:37.065100   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:39.565212   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:41.566163   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:37.765520   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:39.768005   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:38.623095   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:40.623359   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:43.552845   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:43.567081   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:43.567149   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:43.600562   75464 cri.go:89] found id: ""
	I1204 21:18:43.600595   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.600605   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:43.600618   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:43.600683   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:43.638922   75464 cri.go:89] found id: ""
	I1204 21:18:43.638955   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.638965   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:43.638972   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:43.639037   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:43.674473   75464 cri.go:89] found id: ""
	I1204 21:18:43.674501   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.674509   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:43.674516   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:43.674569   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:43.721312   75464 cri.go:89] found id: ""
	I1204 21:18:43.721339   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.721350   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:43.721357   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:43.721420   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:43.760113   75464 cri.go:89] found id: ""
	I1204 21:18:43.760150   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.760161   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:43.760169   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:43.760233   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:43.794383   75464 cri.go:89] found id: ""
	I1204 21:18:43.794410   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.794418   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:43.794423   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:43.794475   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:43.826611   75464 cri.go:89] found id: ""
	I1204 21:18:43.826646   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.826657   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:43.826666   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:43.826728   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:43.859459   75464 cri.go:89] found id: ""
	I1204 21:18:43.859489   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.859496   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:43.859505   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:43.859518   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:43.871740   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:43.871762   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:43.940838   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:43.940862   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:43.940874   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:44.018931   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:44.018967   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:44.054754   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:44.054786   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:44.066258   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:46.565764   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:42.264400   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:44.765338   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:43.124128   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:45.624394   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:46.614407   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:46.627953   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:46.628009   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:46.662223   75464 cri.go:89] found id: ""
	I1204 21:18:46.662254   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.662263   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:46.662268   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:46.662333   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:46.695931   75464 cri.go:89] found id: ""
	I1204 21:18:46.695955   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.695963   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:46.695969   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:46.696014   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:46.728731   75464 cri.go:89] found id: ""
	I1204 21:18:46.728761   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.728773   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:46.728780   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:46.728841   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:46.762466   75464 cri.go:89] found id: ""
	I1204 21:18:46.762491   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.762499   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:46.762544   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:46.762613   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:46.797253   75464 cri.go:89] found id: ""
	I1204 21:18:46.797279   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.797288   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:46.797295   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:46.797357   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:46.833757   75464 cri.go:89] found id: ""
	I1204 21:18:46.833783   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.833790   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:46.833797   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:46.833845   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:46.865105   75464 cri.go:89] found id: ""
	I1204 21:18:46.865135   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.865147   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:46.865154   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:46.865212   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:46.896358   75464 cri.go:89] found id: ""
	I1204 21:18:46.896385   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.896397   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:46.896408   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:46.896426   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:46.932507   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:46.932536   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:46.985490   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:46.985517   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:46.999509   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:46.999538   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:47.075096   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:47.075119   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:47.075133   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:49.654450   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:49.667708   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:49.667761   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:49.699864   75464 cri.go:89] found id: ""
	I1204 21:18:49.699885   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.699894   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:49.699902   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:49.699954   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:49.732972   75464 cri.go:89] found id: ""
	I1204 21:18:49.732996   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.733004   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:49.733009   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:49.733055   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:49.765103   75464 cri.go:89] found id: ""
	I1204 21:18:49.765124   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.765135   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:49.765142   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:49.765208   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:49.796309   75464 cri.go:89] found id: ""
	I1204 21:18:49.796330   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.796337   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:49.796343   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:49.796401   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:49.826818   75464 cri.go:89] found id: ""
	I1204 21:18:49.826844   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.826855   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:49.826863   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:49.826921   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:49.879437   75464 cri.go:89] found id: ""
	I1204 21:18:49.879463   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.879471   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:49.879477   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:49.879525   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:49.910837   75464 cri.go:89] found id: ""
	I1204 21:18:49.910862   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.910872   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:49.910878   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:49.910937   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:49.941894   75464 cri.go:89] found id: ""
	I1204 21:18:49.941918   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.941927   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:49.941937   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:49.941950   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:49.994300   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:49.994339   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:50.008171   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:50.008207   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:50.083770   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:50.083799   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:50.083815   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:50.161338   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:50.161371   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:49.064407   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:51.066565   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:47.264889   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:49.764731   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:48.123660   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:50.125339   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:52.624437   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:52.699023   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:52.711524   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:52.711599   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:52.744668   75464 cri.go:89] found id: ""
	I1204 21:18:52.744703   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.744715   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:52.744724   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:52.744794   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:52.780504   75464 cri.go:89] found id: ""
	I1204 21:18:52.780529   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.780537   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:52.780546   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:52.780596   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:52.811678   75464 cri.go:89] found id: ""
	I1204 21:18:52.811704   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.811721   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:52.811749   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:52.811815   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:52.849178   75464 cri.go:89] found id: ""
	I1204 21:18:52.849205   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.849216   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:52.849223   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:52.849285   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:52.881715   75464 cri.go:89] found id: ""
	I1204 21:18:52.881740   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.881748   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:52.881753   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:52.881801   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:52.912463   75464 cri.go:89] found id: ""
	I1204 21:18:52.912484   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.912493   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:52.912498   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:52.912541   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:52.941846   75464 cri.go:89] found id: ""
	I1204 21:18:52.941867   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.941874   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:52.941879   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:52.941933   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:52.972043   75464 cri.go:89] found id: ""
	I1204 21:18:52.972067   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.972075   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:52.972083   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:52.972092   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:53.022049   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:53.022078   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:53.034971   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:53.034998   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:53.105058   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:53.105080   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:53.105092   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:53.185050   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:53.185086   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:55.724189   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:55.737378   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:55.737439   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:55.772286   75464 cri.go:89] found id: ""
	I1204 21:18:55.772311   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.772319   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:55.772324   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:55.772375   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:55.805040   75464 cri.go:89] found id: ""
	I1204 21:18:55.805061   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.805070   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:55.805075   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:55.805124   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:55.836500   75464 cri.go:89] found id: ""
	I1204 21:18:55.836528   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.836539   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:55.836553   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:55.836624   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:55.869715   75464 cri.go:89] found id: ""
	I1204 21:18:55.869740   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.869749   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:55.869754   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:55.869810   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:55.901596   75464 cri.go:89] found id: ""
	I1204 21:18:55.901623   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.901634   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:55.901641   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:55.901705   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:55.931865   75464 cri.go:89] found id: ""
	I1204 21:18:55.931890   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.931900   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:55.931907   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:55.931971   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:55.962990   75464 cri.go:89] found id: ""
	I1204 21:18:55.963016   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.963025   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:55.963030   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:55.963081   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:55.992110   75464 cri.go:89] found id: ""
	I1204 21:18:55.992132   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.992141   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:55.992149   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:55.992159   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:56.027234   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:56.027271   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:56.080250   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:56.080300   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:56.095943   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:56.095972   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:56.166704   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:56.166732   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:56.166744   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:53.565002   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:55.565734   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:52.264986   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:54.764517   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:54.624734   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:57.123337   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:58.745119   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:58.758304   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:58.758365   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:58.797221   75464 cri.go:89] found id: ""
	I1204 21:18:58.797245   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.797256   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:58.797264   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:58.797325   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:58.833333   75464 cri.go:89] found id: ""
	I1204 21:18:58.833358   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.833368   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:58.833374   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:58.833431   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:58.867765   75464 cri.go:89] found id: ""
	I1204 21:18:58.867790   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.867802   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:58.867810   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:58.867874   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:58.900290   75464 cri.go:89] found id: ""
	I1204 21:18:58.900326   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.900335   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:58.900386   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:58.900441   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:58.934627   75464 cri.go:89] found id: ""
	I1204 21:18:58.934660   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.934672   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:58.934679   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:58.934743   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:58.967410   75464 cri.go:89] found id: ""
	I1204 21:18:58.967442   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.967455   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:58.967463   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:58.967534   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:58.997635   75464 cri.go:89] found id: ""
	I1204 21:18:58.997665   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.997678   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:58.997685   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:58.997742   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:59.032135   75464 cri.go:89] found id: ""
	I1204 21:18:59.032162   75464 logs.go:282] 0 containers: []
	W1204 21:18:59.032181   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:59.032190   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:59.032214   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:59.101453   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:59.101477   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:59.101490   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:59.182218   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:59.182266   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:59.218062   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:59.218088   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:59.269536   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:59.269567   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:58.063715   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:00.565067   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:57.264306   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:59.266030   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:01.765163   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:59.124120   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:01.623069   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:01.784237   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:01.797810   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:01.797888   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:01.833235   75464 cri.go:89] found id: ""
	I1204 21:19:01.833267   75464 logs.go:282] 0 containers: []
	W1204 21:19:01.833279   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:01.833287   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:01.833345   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:01.866869   75464 cri.go:89] found id: ""
	I1204 21:19:01.866898   75464 logs.go:282] 0 containers: []
	W1204 21:19:01.866906   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:01.866912   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:01.866962   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:01.905512   75464 cri.go:89] found id: ""
	I1204 21:19:01.905539   75464 logs.go:282] 0 containers: []
	W1204 21:19:01.905547   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:01.905552   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:01.905608   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:01.940519   75464 cri.go:89] found id: ""
	I1204 21:19:01.940540   75464 logs.go:282] 0 containers: []
	W1204 21:19:01.940548   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:01.940554   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:01.940599   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:01.968900   75464 cri.go:89] found id: ""
	I1204 21:19:01.968922   75464 logs.go:282] 0 containers: []
	W1204 21:19:01.968931   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:01.968938   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:01.968986   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:02.011007   75464 cri.go:89] found id: ""
	I1204 21:19:02.011032   75464 logs.go:282] 0 containers: []
	W1204 21:19:02.011039   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:02.011045   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:02.011097   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:02.069395   75464 cri.go:89] found id: ""
	I1204 21:19:02.069422   75464 logs.go:282] 0 containers: []
	W1204 21:19:02.069432   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:02.069438   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:02.069483   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:02.116103   75464 cri.go:89] found id: ""
	I1204 21:19:02.116129   75464 logs.go:282] 0 containers: []
	W1204 21:19:02.116141   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:02.116151   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:02.116162   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:02.152582   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:02.152617   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:02.207765   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:02.207796   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:02.221923   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:02.221946   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:02.286568   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:02.286593   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:02.286608   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:04.861905   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:04.875045   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:04.875106   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:04.907565   75464 cri.go:89] found id: ""
	I1204 21:19:04.907591   75464 logs.go:282] 0 containers: []
	W1204 21:19:04.907601   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:04.907609   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:04.907667   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:04.937783   75464 cri.go:89] found id: ""
	I1204 21:19:04.937801   75464 logs.go:282] 0 containers: []
	W1204 21:19:04.937808   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:04.937813   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:04.937855   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:04.974668   75464 cri.go:89] found id: ""
	I1204 21:19:04.974695   75464 logs.go:282] 0 containers: []
	W1204 21:19:04.974703   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:04.974708   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:04.974764   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:05.008970   75464 cri.go:89] found id: ""
	I1204 21:19:05.008996   75464 logs.go:282] 0 containers: []
	W1204 21:19:05.009008   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:05.009016   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:05.009078   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:05.044719   75464 cri.go:89] found id: ""
	I1204 21:19:05.044748   75464 logs.go:282] 0 containers: []
	W1204 21:19:05.044757   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:05.044765   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:05.044834   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:05.082492   75464 cri.go:89] found id: ""
	I1204 21:19:05.082518   75464 logs.go:282] 0 containers: []
	W1204 21:19:05.082527   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:05.082533   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:05.082594   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:05.115540   75464 cri.go:89] found id: ""
	I1204 21:19:05.115569   75464 logs.go:282] 0 containers: []
	W1204 21:19:05.115578   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:05.115584   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:05.115643   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:05.150064   75464 cri.go:89] found id: ""
	I1204 21:19:05.150088   75464 logs.go:282] 0 containers: []
	W1204 21:19:05.150096   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:05.150104   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:05.150116   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:05.220591   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:05.220619   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:05.220635   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:05.298237   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:05.298269   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:05.337286   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:05.337312   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:05.394282   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:05.394313   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:03.064580   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:05.065897   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:04.263946   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:06.264605   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:03.624413   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:06.124113   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:07.907153   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:07.923906   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:07.923967   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:07.969672   75464 cri.go:89] found id: ""
	I1204 21:19:07.969698   75464 logs.go:282] 0 containers: []
	W1204 21:19:07.969706   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:07.969712   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:07.969761   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:08.019452   75464 cri.go:89] found id: ""
	I1204 21:19:08.019488   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.019496   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:08.019502   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:08.019551   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:08.064730   75464 cri.go:89] found id: ""
	I1204 21:19:08.064757   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.064766   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:08.064771   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:08.064822   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:08.097390   75464 cri.go:89] found id: ""
	I1204 21:19:08.097415   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.097424   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:08.097430   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:08.097481   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:08.134612   75464 cri.go:89] found id: ""
	I1204 21:19:08.134640   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.134649   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:08.134655   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:08.134706   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:08.167328   75464 cri.go:89] found id: ""
	I1204 21:19:08.167355   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.167363   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:08.167380   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:08.167447   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:08.196379   75464 cri.go:89] found id: ""
	I1204 21:19:08.196401   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.196411   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:08.196419   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:08.196475   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:08.227953   75464 cri.go:89] found id: ""
	I1204 21:19:08.227983   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.227994   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:08.228007   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:08.228021   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:08.304644   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:08.304672   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:08.340803   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:08.340835   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:08.392000   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:08.392034   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:08.405498   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:08.405533   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:08.472505   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:10.972755   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:10.986250   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:10.986316   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:11.020562   75464 cri.go:89] found id: ""
	I1204 21:19:11.020590   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.020601   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:11.020609   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:11.020671   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:11.052966   75464 cri.go:89] found id: ""
	I1204 21:19:11.052989   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.052999   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:11.053006   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:11.053062   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:11.085999   75464 cri.go:89] found id: ""
	I1204 21:19:11.086025   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.086032   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:11.086038   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:11.086085   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:11.125104   75464 cri.go:89] found id: ""
	I1204 21:19:11.125134   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.125145   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:11.125152   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:11.125207   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:11.161373   75464 cri.go:89] found id: ""
	I1204 21:19:11.161406   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.161418   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:11.161426   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:11.161487   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:11.192514   75464 cri.go:89] found id: ""
	I1204 21:19:11.192541   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.192552   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:11.192559   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:11.192617   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:11.225497   75464 cri.go:89] found id: ""
	I1204 21:19:11.225514   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.225522   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:11.225528   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:11.225573   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:11.258695   75464 cri.go:89] found id: ""
	I1204 21:19:11.258718   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.258730   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:11.258740   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:11.258753   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:11.292427   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:11.292456   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:07.565769   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:10.064738   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:08.264914   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:10.765337   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:08.125281   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:10.623449   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:11.346115   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:11.346143   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:11.360086   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:11.360110   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:11.430194   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:11.430216   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:11.430228   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:14.011320   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:14.024214   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:14.024281   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:14.060155   75464 cri.go:89] found id: ""
	I1204 21:19:14.060184   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.060196   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:14.060204   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:14.060269   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:14.095483   75464 cri.go:89] found id: ""
	I1204 21:19:14.095524   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.095536   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:14.095544   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:14.095621   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:14.130533   75464 cri.go:89] found id: ""
	I1204 21:19:14.130565   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.130573   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:14.130579   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:14.130650   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:14.167349   75464 cri.go:89] found id: ""
	I1204 21:19:14.167386   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.167397   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:14.167405   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:14.167477   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:14.200197   75464 cri.go:89] found id: ""
	I1204 21:19:14.200229   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.200240   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:14.200247   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:14.200315   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:14.233664   75464 cri.go:89] found id: ""
	I1204 21:19:14.233696   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.233707   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:14.233715   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:14.233779   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:14.268193   75464 cri.go:89] found id: ""
	I1204 21:19:14.268232   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.268243   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:14.268250   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:14.268311   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:14.305771   75464 cri.go:89] found id: ""
	I1204 21:19:14.305804   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.305813   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:14.305822   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:14.305834   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:14.361227   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:14.361274   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:14.375013   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:14.375046   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:14.444904   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:14.444945   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:14.444958   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:14.523934   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:14.523969   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:12.565614   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:14.565696   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:13.265412   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:15.763989   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:13.122823   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:15.124232   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:17.622977   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:17.063306   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:17.076624   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:17.076675   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:17.110681   75464 cri.go:89] found id: ""
	I1204 21:19:17.110721   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.110744   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:17.110756   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:17.110816   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:17.150695   75464 cri.go:89] found id: ""
	I1204 21:19:17.150716   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.150724   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:17.150730   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:17.150777   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:17.187712   75464 cri.go:89] found id: ""
	I1204 21:19:17.187745   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.187757   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:17.187765   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:17.187826   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:17.220349   75464 cri.go:89] found id: ""
	I1204 21:19:17.220377   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.220388   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:17.220396   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:17.220463   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:17.254691   75464 cri.go:89] found id: ""
	I1204 21:19:17.254724   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.254736   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:17.254746   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:17.254869   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:17.287163   75464 cri.go:89] found id: ""
	I1204 21:19:17.287191   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.287200   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:17.287206   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:17.287264   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:17.318924   75464 cri.go:89] found id: ""
	I1204 21:19:17.318949   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.318957   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:17.318963   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:17.319011   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:17.351074   75464 cri.go:89] found id: ""
	I1204 21:19:17.351106   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.351119   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:17.351128   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:17.351143   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:17.404999   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:17.405037   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:17.419781   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:17.419814   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:17.485638   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:17.485659   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:17.485670   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:17.568851   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:17.568885   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:20.107005   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:20.120184   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:20.120257   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:20.153375   75464 cri.go:89] found id: ""
	I1204 21:19:20.153404   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.153413   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:20.153419   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:20.153475   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:20.192102   75464 cri.go:89] found id: ""
	I1204 21:19:20.192129   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.192141   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:20.192148   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:20.192213   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:20.235702   75464 cri.go:89] found id: ""
	I1204 21:19:20.235730   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.235740   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:20.235747   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:20.235823   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:20.272357   75464 cri.go:89] found id: ""
	I1204 21:19:20.272385   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.272397   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:20.272406   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:20.272477   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:20.307784   75464 cri.go:89] found id: ""
	I1204 21:19:20.307809   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.307820   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:20.307827   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:20.307889   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:20.339469   75464 cri.go:89] found id: ""
	I1204 21:19:20.339504   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.339514   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:20.339522   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:20.339586   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:20.369973   75464 cri.go:89] found id: ""
	I1204 21:19:20.369996   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.370003   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:20.370010   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:20.370081   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:20.400569   75464 cri.go:89] found id: ""
	I1204 21:19:20.400589   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.400596   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:20.400604   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:20.400618   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:20.449274   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:20.449316   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:20.463556   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:20.463589   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:20.534760   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:20.534779   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:20.534791   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:20.613205   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:20.613234   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:17.064355   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:19.566643   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:17.764939   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:20.265576   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:19.624775   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:22.124297   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:23.149411   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:23.163040   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:23.163104   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:23.198689   75464 cri.go:89] found id: ""
	I1204 21:19:23.198721   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.198730   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:23.198736   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:23.198789   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:23.229754   75464 cri.go:89] found id: ""
	I1204 21:19:23.229783   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.229792   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:23.229797   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:23.229867   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:23.263366   75464 cri.go:89] found id: ""
	I1204 21:19:23.263406   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.263418   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:23.263425   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:23.263523   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:23.308773   75464 cri.go:89] found id: ""
	I1204 21:19:23.308797   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.308805   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:23.308811   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:23.308858   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:23.344573   75464 cri.go:89] found id: ""
	I1204 21:19:23.344600   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.344613   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:23.344620   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:23.344689   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:23.375218   75464 cri.go:89] found id: ""
	I1204 21:19:23.375244   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.375253   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:23.375259   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:23.375321   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:23.405878   75464 cri.go:89] found id: ""
	I1204 21:19:23.405913   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.405923   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:23.405929   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:23.405979   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:23.442547   75464 cri.go:89] found id: ""
	I1204 21:19:23.442572   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.442580   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:23.442588   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:23.442599   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:23.457476   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:23.457503   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:23.526060   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:23.526088   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:23.526153   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:23.606683   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:23.606729   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:23.648224   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:23.648266   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:26.203216   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:26.215838   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:26.215886   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:26.248425   75464 cri.go:89] found id: ""
	I1204 21:19:26.248461   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.248474   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:26.248490   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:26.248558   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:26.282982   75464 cri.go:89] found id: ""
	I1204 21:19:26.283011   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.283022   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:26.283030   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:26.283094   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:22.064831   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:24.565123   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:22.763526   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:24.764364   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:26.764973   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:24.624174   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:26.624220   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:26.316656   75464 cri.go:89] found id: ""
	I1204 21:19:26.316690   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.316702   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:26.316710   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:26.316778   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:26.352730   75464 cri.go:89] found id: ""
	I1204 21:19:26.352758   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.352766   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:26.352772   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:26.352819   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:26.385955   75464 cri.go:89] found id: ""
	I1204 21:19:26.385981   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.385991   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:26.386000   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:26.386065   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:26.418814   75464 cri.go:89] found id: ""
	I1204 21:19:26.418838   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.418846   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:26.418852   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:26.418900   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:26.455442   75464 cri.go:89] found id: ""
	I1204 21:19:26.455471   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.455483   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:26.455491   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:26.455561   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:26.498287   75464 cri.go:89] found id: ""
	I1204 21:19:26.498314   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.498322   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:26.498331   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:26.498345   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:26.512282   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:26.512312   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:26.576340   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:26.576366   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:26.576383   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:26.656234   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:26.656272   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:26.692676   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:26.692705   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:29.246548   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:29.261241   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:29.261310   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:29.297940   75464 cri.go:89] found id: ""
	I1204 21:19:29.297975   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.297987   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:29.297995   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:29.298060   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:29.330887   75464 cri.go:89] found id: ""
	I1204 21:19:29.330918   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.330930   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:29.330937   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:29.331001   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:29.364114   75464 cri.go:89] found id: ""
	I1204 21:19:29.364145   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.364152   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:29.364158   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:29.364214   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:29.397320   75464 cri.go:89] found id: ""
	I1204 21:19:29.397349   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.397357   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:29.397363   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:29.397410   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:29.430850   75464 cri.go:89] found id: ""
	I1204 21:19:29.430880   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.430892   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:29.430900   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:29.430965   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:29.464447   75464 cri.go:89] found id: ""
	I1204 21:19:29.464475   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.464484   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:29.464498   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:29.464564   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:29.497112   75464 cri.go:89] found id: ""
	I1204 21:19:29.497146   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.497158   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:29.497166   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:29.497229   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:29.533048   75464 cri.go:89] found id: ""
	I1204 21:19:29.533071   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.533080   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:29.533088   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:29.533099   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:29.584390   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:29.584424   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:29.598341   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:29.598369   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:29.663240   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:29.663264   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:29.663278   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:29.744146   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:29.744184   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:27.064827   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:29.065174   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:31.565105   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:28.765480   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:31.265234   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:29.123831   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:31.623570   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:32.282931   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:32.296622   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:32.296683   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:32.330253   75464 cri.go:89] found id: ""
	I1204 21:19:32.330285   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.330297   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:32.330305   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:32.330370   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:32.363547   75464 cri.go:89] found id: ""
	I1204 21:19:32.363575   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.363588   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:32.363596   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:32.363661   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:32.396745   75464 cri.go:89] found id: ""
	I1204 21:19:32.396770   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.396781   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:32.396790   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:32.396851   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:32.432533   75464 cri.go:89] found id: ""
	I1204 21:19:32.432559   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.432569   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:32.432577   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:32.432640   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:32.470292   75464 cri.go:89] found id: ""
	I1204 21:19:32.470317   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.470327   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:32.470335   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:32.470401   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:32.502791   75464 cri.go:89] found id: ""
	I1204 21:19:32.502817   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.502824   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:32.502835   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:32.502900   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:32.536220   75464 cri.go:89] found id: ""
	I1204 21:19:32.536246   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.536254   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:32.536286   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:32.536344   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:32.570072   75464 cri.go:89] found id: ""
	I1204 21:19:32.570094   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.570102   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:32.570110   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:32.570127   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:32.624916   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:32.624964   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:32.638299   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:32.638328   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:32.704827   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:32.704855   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:32.704873   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:32.782324   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:32.782356   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:35.324136   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:35.337071   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:35.337132   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:35.368651   75464 cri.go:89] found id: ""
	I1204 21:19:35.368672   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.368679   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:35.368685   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:35.368731   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:35.402069   75464 cri.go:89] found id: ""
	I1204 21:19:35.402088   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.402099   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:35.402105   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:35.402156   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:35.432328   75464 cri.go:89] found id: ""
	I1204 21:19:35.432356   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.432367   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:35.432380   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:35.432440   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:35.465334   75464 cri.go:89] found id: ""
	I1204 21:19:35.465356   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.465363   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:35.465369   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:35.465440   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:35.497416   75464 cri.go:89] found id: ""
	I1204 21:19:35.497449   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.497462   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:35.497474   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:35.497535   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:35.533106   75464 cri.go:89] found id: ""
	I1204 21:19:35.533134   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.533145   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:35.533154   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:35.533216   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:35.570519   75464 cri.go:89] found id: ""
	I1204 21:19:35.570546   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.570555   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:35.570562   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:35.570628   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:35.601380   75464 cri.go:89] found id: ""
	I1204 21:19:35.601413   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.601424   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:35.601434   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:35.601455   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:35.656383   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:35.656420   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:35.671667   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:35.671696   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:35.737690   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:35.737716   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:35.737733   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:35.818129   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:35.818165   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:34.063889   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:36.064864   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:33.765136   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:35.765598   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:33.624840   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:35.624972   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:38.356596   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:38.369177   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:38.369235   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:38.401263   75464 cri.go:89] found id: ""
	I1204 21:19:38.401289   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.401301   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:38.401308   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:38.401379   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:38.432751   75464 cri.go:89] found id: ""
	I1204 21:19:38.432777   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.432786   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:38.432792   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:38.432853   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:38.465866   75464 cri.go:89] found id: ""
	I1204 21:19:38.465889   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.465898   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:38.465904   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:38.465954   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:38.508720   75464 cri.go:89] found id: ""
	I1204 21:19:38.508752   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.508763   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:38.508771   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:38.508827   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:38.543609   75464 cri.go:89] found id: ""
	I1204 21:19:38.543640   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.543649   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:38.543654   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:38.543728   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:38.579205   75464 cri.go:89] found id: ""
	I1204 21:19:38.579225   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.579233   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:38.579239   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:38.579286   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:38.616446   75464 cri.go:89] found id: ""
	I1204 21:19:38.616480   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.616492   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:38.616500   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:38.616563   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:38.651847   75464 cri.go:89] found id: ""
	I1204 21:19:38.651879   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.651893   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:38.651905   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:38.651920   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:38.730904   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:38.730940   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:38.768958   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:38.768987   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:38.818879   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:38.818917   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:38.832139   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:38.832168   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:38.904761   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:38.065085   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:40.066022   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:38.264497   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:40.264905   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:38.123324   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:40.123499   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:42.623457   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:41.405046   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:41.417497   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:41.417578   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:41.450609   75464 cri.go:89] found id: ""
	I1204 21:19:41.450638   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.450649   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:41.450657   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:41.450725   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:41.486098   75464 cri.go:89] found id: ""
	I1204 21:19:41.486127   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.486135   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:41.486146   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:41.486218   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:41.520182   75464 cri.go:89] found id: ""
	I1204 21:19:41.520212   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.520225   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:41.520233   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:41.520305   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:41.551840   75464 cri.go:89] found id: ""
	I1204 21:19:41.551862   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.551870   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:41.551876   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:41.551928   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:41.584411   75464 cri.go:89] found id: ""
	I1204 21:19:41.584441   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.584448   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:41.584453   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:41.584500   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:41.614161   75464 cri.go:89] found id: ""
	I1204 21:19:41.614184   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.614199   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:41.614208   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:41.614263   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:41.645608   75464 cri.go:89] found id: ""
	I1204 21:19:41.645630   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.645637   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:41.645642   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:41.645688   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:41.676521   75464 cri.go:89] found id: ""
	I1204 21:19:41.676544   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.676552   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:41.676559   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:41.676570   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:41.726608   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:41.726633   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:41.739110   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:41.739134   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:41.810706   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:41.810727   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:41.810742   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:41.895725   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:41.895757   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:44.435032   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:44.449155   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:44.449223   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:44.479366   75464 cri.go:89] found id: ""
	I1204 21:19:44.479415   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.479424   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:44.479430   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:44.479480   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:44.520338   75464 cri.go:89] found id: ""
	I1204 21:19:44.520365   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.520374   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:44.520379   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:44.520443   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:44.554736   75464 cri.go:89] found id: ""
	I1204 21:19:44.554765   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.554773   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:44.554779   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:44.554829   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:44.592957   75464 cri.go:89] found id: ""
	I1204 21:19:44.592980   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.592987   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:44.592993   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:44.593041   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:44.626514   75464 cri.go:89] found id: ""
	I1204 21:19:44.626542   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.626551   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:44.626558   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:44.626624   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:44.667868   75464 cri.go:89] found id: ""
	I1204 21:19:44.667901   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.667913   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:44.667919   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:44.667968   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:44.703653   75464 cri.go:89] found id: ""
	I1204 21:19:44.703688   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.703699   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:44.703706   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:44.703766   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:44.737474   75464 cri.go:89] found id: ""
	I1204 21:19:44.737511   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.737523   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:44.737534   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:44.737549   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:44.787115   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:44.787146   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:44.799735   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:44.799765   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:44.861160   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:44.861179   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:44.861200   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:44.937758   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:44.937792   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:42.564575   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:44.565307   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:42.269222   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:44.764730   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:44.624230   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:47.124252   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:47.474604   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:47.486621   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:47.486680   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:47.522827   75464 cri.go:89] found id: ""
	I1204 21:19:47.522856   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.522870   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:47.522877   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:47.522938   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:47.553741   75464 cri.go:89] found id: ""
	I1204 21:19:47.553763   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.553771   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:47.553777   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:47.553837   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:47.610696   75464 cri.go:89] found id: ""
	I1204 21:19:47.610719   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.610730   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:47.610737   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:47.610803   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:47.645330   75464 cri.go:89] found id: ""
	I1204 21:19:47.645357   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.645367   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:47.645374   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:47.645431   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:47.680410   75464 cri.go:89] found id: ""
	I1204 21:19:47.680436   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.680444   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:47.680450   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:47.680499   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:47.712333   75464 cri.go:89] found id: ""
	I1204 21:19:47.712365   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.712376   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:47.712384   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:47.712442   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:47.749995   75464 cri.go:89] found id: ""
	I1204 21:19:47.750027   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.750039   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:47.750047   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:47.750110   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:47.786953   75464 cri.go:89] found id: ""
	I1204 21:19:47.786978   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.786988   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:47.786996   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:47.787008   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:47.853534   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:47.853561   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:47.853576   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:47.934237   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:47.934273   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:47.976010   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:47.976046   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:48.027502   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:48.027537   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:50.541987   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:50.555163   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:50.555246   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:50.588513   75464 cri.go:89] found id: ""
	I1204 21:19:50.588545   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.588555   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:50.588563   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:50.588618   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:50.623124   75464 cri.go:89] found id: ""
	I1204 21:19:50.623155   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.623165   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:50.623175   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:50.623240   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:50.656302   75464 cri.go:89] found id: ""
	I1204 21:19:50.656334   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.656347   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:50.656353   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:50.656421   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:50.688580   75464 cri.go:89] found id: ""
	I1204 21:19:50.688609   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.688621   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:50.688629   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:50.688700   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:50.721955   75464 cri.go:89] found id: ""
	I1204 21:19:50.721979   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.721987   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:50.721993   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:50.722047   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:50.755531   75464 cri.go:89] found id: ""
	I1204 21:19:50.755560   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.755571   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:50.755579   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:50.755637   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:50.789773   75464 cri.go:89] found id: ""
	I1204 21:19:50.789805   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.789816   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:50.789823   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:50.789890   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:50.821168   75464 cri.go:89] found id: ""
	I1204 21:19:50.821196   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.821207   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:50.821216   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:50.821230   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:50.871378   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:50.871406   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:50.883349   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:50.883387   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:50.953103   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:50.953129   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:50.953143   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:51.032209   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:51.032240   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:47.065199   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:49.065498   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:51.565332   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:47.264727   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:49.765618   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:51.765674   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:49.623785   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:52.124390   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:53.569126   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:53.582100   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:53.582167   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:53.613919   75464 cri.go:89] found id: ""
	I1204 21:19:53.613947   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.613958   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:53.613965   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:53.614031   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:53.649057   75464 cri.go:89] found id: ""
	I1204 21:19:53.649083   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.649090   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:53.649096   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:53.649153   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:53.685867   75464 cri.go:89] found id: ""
	I1204 21:19:53.685903   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.685915   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:53.685924   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:53.685983   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:53.723661   75464 cri.go:89] found id: ""
	I1204 21:19:53.723690   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.723702   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:53.723710   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:53.723774   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:53.768252   75464 cri.go:89] found id: ""
	I1204 21:19:53.768274   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.768281   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:53.768286   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:53.768334   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:53.806460   75464 cri.go:89] found id: ""
	I1204 21:19:53.806503   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.806512   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:53.806522   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:53.806577   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:53.839334   75464 cri.go:89] found id: ""
	I1204 21:19:53.839362   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.839382   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:53.839391   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:53.839452   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:53.873985   75464 cri.go:89] found id: ""
	I1204 21:19:53.874013   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.874021   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:53.874029   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:53.874046   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:53.929061   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:53.929101   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:53.943156   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:53.943183   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:54.023885   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:54.023914   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:54.023927   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:54.126662   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:54.126691   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:53.566343   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:56.064417   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:54.263908   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:56.265412   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:54.623051   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:56.623438   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:56.664579   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:56.676785   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:56.676835   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:56.715929   75464 cri.go:89] found id: ""
	I1204 21:19:56.715953   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.715964   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:56.715971   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:56.716026   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:56.747118   75464 cri.go:89] found id: ""
	I1204 21:19:56.747139   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.747146   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:56.747175   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:56.747225   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:56.777600   75464 cri.go:89] found id: ""
	I1204 21:19:56.777622   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.777628   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:56.777634   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:56.777684   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:56.808759   75464 cri.go:89] found id: ""
	I1204 21:19:56.808780   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.808787   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:56.808792   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:56.808849   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:56.838236   75464 cri.go:89] found id: ""
	I1204 21:19:56.838263   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.838274   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:56.838280   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:56.838336   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:56.866838   75464 cri.go:89] found id: ""
	I1204 21:19:56.866865   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.866875   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:56.866883   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:56.866938   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:56.897474   75464 cri.go:89] found id: ""
	I1204 21:19:56.897496   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.897504   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:56.897509   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:56.897566   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:56.929263   75464 cri.go:89] found id: ""
	I1204 21:19:56.929286   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.929294   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:56.929302   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:56.929311   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:56.980231   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:56.980256   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:56.991901   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:56.991928   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:57.068154   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:57.068172   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:57.068183   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:57.147865   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:57.147903   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:59.686011   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:59.699101   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:59.699156   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:59.742522   75464 cri.go:89] found id: ""
	I1204 21:19:59.742554   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.742565   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:59.742573   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:59.742637   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:59.785313   75464 cri.go:89] found id: ""
	I1204 21:19:59.785345   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.785357   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:59.785364   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:59.785423   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:59.821473   75464 cri.go:89] found id: ""
	I1204 21:19:59.821508   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.821520   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:59.821527   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:59.821585   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:59.857990   75464 cri.go:89] found id: ""
	I1204 21:19:59.858012   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.858020   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:59.858025   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:59.858077   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:59.895434   75464 cri.go:89] found id: ""
	I1204 21:19:59.895465   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.895478   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:59.895486   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:59.895546   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:59.929076   75464 cri.go:89] found id: ""
	I1204 21:19:59.929099   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.929110   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:59.929118   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:59.929180   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:59.962121   75464 cri.go:89] found id: ""
	I1204 21:19:59.962161   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.962173   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:59.962181   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:59.962244   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:59.999074   75464 cri.go:89] found id: ""
	I1204 21:19:59.999103   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.999115   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:59.999126   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:59.999138   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:00.081841   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:00.081888   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:00.120537   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:00.120576   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:00.171472   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:00.171506   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:00.184739   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:00.184770   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:00.256589   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:58.563943   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:00.564520   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:58.764786   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:00.765286   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:59.122868   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:01.624133   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:02.757225   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:02.771088   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:02.771156   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:02.808742   75464 cri.go:89] found id: ""
	I1204 21:20:02.808770   75464 logs.go:282] 0 containers: []
	W1204 21:20:02.808781   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:02.808788   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:02.808851   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:02.846517   75464 cri.go:89] found id: ""
	I1204 21:20:02.846539   75464 logs.go:282] 0 containers: []
	W1204 21:20:02.846548   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:02.846553   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:02.846600   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:02.879903   75464 cri.go:89] found id: ""
	I1204 21:20:02.879934   75464 logs.go:282] 0 containers: []
	W1204 21:20:02.879943   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:02.879948   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:02.879995   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:02.910040   75464 cri.go:89] found id: ""
	I1204 21:20:02.910072   75464 logs.go:282] 0 containers: []
	W1204 21:20:02.910083   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:02.910091   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:02.910153   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:02.941525   75464 cri.go:89] found id: ""
	I1204 21:20:02.941552   75464 logs.go:282] 0 containers: []
	W1204 21:20:02.941562   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:02.941570   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:02.941637   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:02.977450   75464 cri.go:89] found id: ""
	I1204 21:20:02.977476   75464 logs.go:282] 0 containers: []
	W1204 21:20:02.977484   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:02.977490   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:02.977547   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:03.007386   75464 cri.go:89] found id: ""
	I1204 21:20:03.007422   75464 logs.go:282] 0 containers: []
	W1204 21:20:03.007433   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:03.007448   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:03.007508   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:03.040015   75464 cri.go:89] found id: ""
	I1204 21:20:03.040038   75464 logs.go:282] 0 containers: []
	W1204 21:20:03.040049   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:03.040058   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:03.040068   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:03.092371   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:03.092397   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:03.104747   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:03.104765   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:03.167760   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:03.167784   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:03.167799   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:03.242972   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:03.243010   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:05.783874   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:05.796340   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:05.796401   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:05.829068   75464 cri.go:89] found id: ""
	I1204 21:20:05.829094   75464 logs.go:282] 0 containers: []
	W1204 21:20:05.829105   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:05.829112   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:05.829169   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:05.863998   75464 cri.go:89] found id: ""
	I1204 21:20:05.864027   75464 logs.go:282] 0 containers: []
	W1204 21:20:05.864036   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:05.864042   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:05.864096   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:05.899645   75464 cri.go:89] found id: ""
	I1204 21:20:05.899669   75464 logs.go:282] 0 containers: []
	W1204 21:20:05.899677   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:05.899682   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:05.899727   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:05.935815   75464 cri.go:89] found id: ""
	I1204 21:20:05.935840   75464 logs.go:282] 0 containers: []
	W1204 21:20:05.935848   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:05.935854   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:05.935901   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:05.972284   75464 cri.go:89] found id: ""
	I1204 21:20:05.972308   75464 logs.go:282] 0 containers: []
	W1204 21:20:05.972321   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:05.972326   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:05.972372   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:06.007217   75464 cri.go:89] found id: ""
	I1204 21:20:06.007261   75464 logs.go:282] 0 containers: []
	W1204 21:20:06.007273   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:06.007280   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:06.007338   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:06.042158   75464 cri.go:89] found id: ""
	I1204 21:20:06.042190   75464 logs.go:282] 0 containers: []
	W1204 21:20:06.042201   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:06.042208   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:06.042280   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:06.075199   75464 cri.go:89] found id: ""
	I1204 21:20:06.075223   75464 logs.go:282] 0 containers: []
	W1204 21:20:06.075230   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:06.075237   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:06.075248   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:06.148255   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:06.148286   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:06.191454   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:06.191478   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:06.243952   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:06.243979   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:06.256355   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:06.256381   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 21:20:02.565050   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:05.064733   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:02.765643   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:05.263861   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:04.123109   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:06.123349   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	W1204 21:20:06.323958   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:08.824582   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:08.836724   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:08.836793   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:08.868526   75464 cri.go:89] found id: ""
	I1204 21:20:08.868596   75464 logs.go:282] 0 containers: []
	W1204 21:20:08.868611   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:08.868619   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:08.868679   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:08.899088   75464 cri.go:89] found id: ""
	I1204 21:20:08.899114   75464 logs.go:282] 0 containers: []
	W1204 21:20:08.899123   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:08.899128   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:08.899181   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:08.929116   75464 cri.go:89] found id: ""
	I1204 21:20:08.929145   75464 logs.go:282] 0 containers: []
	W1204 21:20:08.929156   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:08.929164   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:08.929229   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:08.970502   75464 cri.go:89] found id: ""
	I1204 21:20:08.970528   75464 logs.go:282] 0 containers: []
	W1204 21:20:08.970539   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:08.970547   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:08.970610   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:09.000619   75464 cri.go:89] found id: ""
	I1204 21:20:09.000644   75464 logs.go:282] 0 containers: []
	W1204 21:20:09.000652   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:09.000658   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:09.000715   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:09.031597   75464 cri.go:89] found id: ""
	I1204 21:20:09.031624   75464 logs.go:282] 0 containers: []
	W1204 21:20:09.031634   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:09.031641   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:09.031700   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:09.063615   75464 cri.go:89] found id: ""
	I1204 21:20:09.063639   75464 logs.go:282] 0 containers: []
	W1204 21:20:09.063646   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:09.063651   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:09.063708   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:09.096291   75464 cri.go:89] found id: ""
	I1204 21:20:09.096322   75464 logs.go:282] 0 containers: []
	W1204 21:20:09.096333   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:09.096343   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:09.096357   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:09.169976   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:09.170009   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:09.206514   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:09.206537   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:09.257587   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:09.257614   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:09.269939   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:09.269962   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:09.334350   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:07.563758   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:09.564014   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:11.564441   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:07.264169   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:09.265385   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:11.265607   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:08.622813   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:10.624747   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:11.835270   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:11.848192   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:11.848249   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:11.880377   75464 cri.go:89] found id: ""
	I1204 21:20:11.880409   75464 logs.go:282] 0 containers: []
	W1204 21:20:11.880422   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:11.880429   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:11.880495   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:11.914800   75464 cri.go:89] found id: ""
	I1204 21:20:11.914832   75464 logs.go:282] 0 containers: []
	W1204 21:20:11.914844   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:11.914852   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:11.914918   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:11.950520   75464 cri.go:89] found id: ""
	I1204 21:20:11.950545   75464 logs.go:282] 0 containers: []
	W1204 21:20:11.950553   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:11.950559   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:11.950611   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:11.983909   75464 cri.go:89] found id: ""
	I1204 21:20:11.983934   75464 logs.go:282] 0 containers: []
	W1204 21:20:11.983944   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:11.983953   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:11.984017   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:12.020457   75464 cri.go:89] found id: ""
	I1204 21:20:12.020488   75464 logs.go:282] 0 containers: []
	W1204 21:20:12.020505   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:12.020513   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:12.020581   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:12.054630   75464 cri.go:89] found id: ""
	I1204 21:20:12.054663   75464 logs.go:282] 0 containers: []
	W1204 21:20:12.054674   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:12.054682   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:12.054747   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:12.089172   75464 cri.go:89] found id: ""
	I1204 21:20:12.089195   75464 logs.go:282] 0 containers: []
	W1204 21:20:12.089202   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:12.089208   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:12.089267   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:12.123979   75464 cri.go:89] found id: ""
	I1204 21:20:12.124009   75464 logs.go:282] 0 containers: []
	W1204 21:20:12.124020   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:12.124039   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:12.124054   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:12.191368   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:12.191414   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:12.191432   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:12.272985   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:12.273029   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:12.310427   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:12.310459   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:12.363183   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:12.363225   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:14.876599   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:14.889708   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:14.889784   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:14.922789   75464 cri.go:89] found id: ""
	I1204 21:20:14.922819   75464 logs.go:282] 0 containers: []
	W1204 21:20:14.922829   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:14.922835   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:14.922882   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:14.953998   75464 cri.go:89] found id: ""
	I1204 21:20:14.954026   75464 logs.go:282] 0 containers: []
	W1204 21:20:14.954038   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:14.954044   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:14.954108   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:14.983608   75464 cri.go:89] found id: ""
	I1204 21:20:14.983635   75464 logs.go:282] 0 containers: []
	W1204 21:20:14.983646   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:14.983653   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:14.983707   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:15.016982   75464 cri.go:89] found id: ""
	I1204 21:20:15.017007   75464 logs.go:282] 0 containers: []
	W1204 21:20:15.017015   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:15.017020   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:15.017070   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:15.051642   75464 cri.go:89] found id: ""
	I1204 21:20:15.051672   75464 logs.go:282] 0 containers: []
	W1204 21:20:15.051683   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:15.051690   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:15.051792   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:15.084250   75464 cri.go:89] found id: ""
	I1204 21:20:15.084279   75464 logs.go:282] 0 containers: []
	W1204 21:20:15.084289   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:15.084297   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:15.084364   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:15.119910   75464 cri.go:89] found id: ""
	I1204 21:20:15.119943   75464 logs.go:282] 0 containers: []
	W1204 21:20:15.119953   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:15.119965   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:15.120025   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:15.154270   75464 cri.go:89] found id: ""
	I1204 21:20:15.154301   75464 logs.go:282] 0 containers: []
	W1204 21:20:15.154312   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:15.154322   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:15.154336   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:15.205075   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:15.205109   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:15.218104   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:15.218130   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:15.285162   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:15.285187   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:15.285209   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:15.367003   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:15.367040   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:13.566393   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:16.069318   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:13.266167   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:15.763670   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:13.122812   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:15.125830   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:17.623065   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:17.909835   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:17.921899   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:17.921954   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:17.954678   75464 cri.go:89] found id: ""
	I1204 21:20:17.954708   75464 logs.go:282] 0 containers: []
	W1204 21:20:17.954717   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:17.954723   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:17.954776   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:17.984522   75464 cri.go:89] found id: ""
	I1204 21:20:17.984545   75464 logs.go:282] 0 containers: []
	W1204 21:20:17.984555   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:17.984560   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:17.984607   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:18.016731   75464 cri.go:89] found id: ""
	I1204 21:20:18.016754   75464 logs.go:282] 0 containers: []
	W1204 21:20:18.016763   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:18.016768   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:18.016820   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:18.050104   75464 cri.go:89] found id: ""
	I1204 21:20:18.050136   75464 logs.go:282] 0 containers: []
	W1204 21:20:18.050147   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:18.050155   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:18.050221   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:18.083944   75464 cri.go:89] found id: ""
	I1204 21:20:18.083984   75464 logs.go:282] 0 containers: []
	W1204 21:20:18.084006   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:18.084015   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:18.084084   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:18.116170   75464 cri.go:89] found id: ""
	I1204 21:20:18.116203   75464 logs.go:282] 0 containers: []
	W1204 21:20:18.116215   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:18.116223   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:18.116292   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:18.147348   75464 cri.go:89] found id: ""
	I1204 21:20:18.147395   75464 logs.go:282] 0 containers: []
	W1204 21:20:18.147407   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:18.147415   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:18.147473   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:18.177782   75464 cri.go:89] found id: ""
	I1204 21:20:18.177805   75464 logs.go:282] 0 containers: []
	W1204 21:20:18.177816   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:18.177827   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:18.177840   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:18.227464   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:18.227494   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:18.239741   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:18.239772   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:18.310732   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:18.310752   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:18.310763   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:18.389626   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:18.389659   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
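
	The cycle above is minikube's control-plane probe for this profile: it first looks for a running kube-apiserver process with pgrep, then asks CRI-O (through crictl) for containers belonging to each expected component, and because every query comes back empty it falls through to gathering kubelet, dmesg, describe-nodes, CRI-O and container-status logs. A minimal shell sketch of the same probe, run on the node over SSH (the individual commands are the ones shown in the log; the loop wrapper is only an illustration, not minikube's own code):

	    # reproduce the probe the log shows: process check, then per-component crictl queries
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'no kube-apiserver process'
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      # an empty result corresponds to the 'No container was found matching ...' warnings above
	      [ -z "$ids" ] && echo "no container matching \"$name\"" || echo "$name: $ids"
	    done
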
	I1204 21:20:20.926749   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:20.939710   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:20.939797   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:20.972464   75464 cri.go:89] found id: ""
	I1204 21:20:20.972488   75464 logs.go:282] 0 containers: []
	W1204 21:20:20.972497   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:20.972506   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:20.972568   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:21.010568   75464 cri.go:89] found id: ""
	I1204 21:20:21.010597   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.010610   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:21.010618   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:21.010678   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:21.046145   75464 cri.go:89] found id: ""
	I1204 21:20:21.046172   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.046183   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:21.046191   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:21.046263   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:21.078460   75464 cri.go:89] found id: ""
	I1204 21:20:21.078488   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.078496   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:21.078502   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:21.078569   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:21.117274   75464 cri.go:89] found id: ""
	I1204 21:20:21.117303   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.117314   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:21.117320   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:21.117366   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:21.152375   75464 cri.go:89] found id: ""
	I1204 21:20:21.152408   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.152419   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:21.152427   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:21.152496   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:21.185933   75464 cri.go:89] found id: ""
	I1204 21:20:21.185966   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.185975   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:21.185981   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:21.186042   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:21.219289   75464 cri.go:89] found id: ""
	I1204 21:20:21.219325   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.219338   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:21.219350   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:21.219363   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:21.232385   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:21.232415   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:21.298766   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:21.298793   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:21.298808   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:18.565873   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:21.065819   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:17.763871   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:19.765846   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:19.623518   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:21.624117   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
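
	The interleaved pod_ready lines come from other minikube processes running in parallel (PIDs 75137, 75012 and 75746), each polling its metrics-server pod until the Ready condition turns True. A hypothetical one-off version of the same check from the host, using a standard kubectl jsonpath query (the context name is a placeholder; the pod name is taken from the log):

	    # prints "True" once the pod's Ready condition is satisfied
	    kubectl --context <profile> -n kube-system get pod metrics-server-6867b74b74-9vlcd \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
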
	I1204 21:20:21.376741   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:21.376777   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:21.414649   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:21.414682   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:23.963472   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:23.976644   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:23.976709   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:24.010598   75464 cri.go:89] found id: ""
	I1204 21:20:24.010626   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.010637   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:24.010645   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:24.010703   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:24.045479   75464 cri.go:89] found id: ""
	I1204 21:20:24.045509   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.045529   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:24.045537   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:24.045599   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:24.081181   75464 cri.go:89] found id: ""
	I1204 21:20:24.081215   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.081235   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:24.081243   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:24.081309   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:24.113823   75464 cri.go:89] found id: ""
	I1204 21:20:24.113847   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.113857   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:24.113864   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:24.113927   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:24.149178   75464 cri.go:89] found id: ""
	I1204 21:20:24.149205   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.149216   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:24.149224   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:24.149289   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:24.183304   75464 cri.go:89] found id: ""
	I1204 21:20:24.183339   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.183350   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:24.183359   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:24.183448   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:24.214999   75464 cri.go:89] found id: ""
	I1204 21:20:24.215023   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.215034   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:24.215042   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:24.215107   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:24.247278   75464 cri.go:89] found id: ""
	I1204 21:20:24.247312   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.247323   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:24.247354   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:24.247387   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:24.302879   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:24.302913   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:24.315674   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:24.315697   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:24.382394   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:24.382422   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:24.382436   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:24.462763   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:24.462796   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:23.564202   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:25.564917   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:22.265442   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:24.764901   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:24.124035   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:26.124661   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:27.002577   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:27.015256   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:27.015324   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:27.049626   75464 cri.go:89] found id: ""
	I1204 21:20:27.049657   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.049669   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:27.049677   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:27.049733   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:27.085312   75464 cri.go:89] found id: ""
	I1204 21:20:27.085341   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.085354   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:27.085362   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:27.085417   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:27.119898   75464 cri.go:89] found id: ""
	I1204 21:20:27.119928   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.119939   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:27.119947   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:27.120010   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:27.153605   75464 cri.go:89] found id: ""
	I1204 21:20:27.153642   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.153651   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:27.153657   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:27.153724   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:27.191002   75464 cri.go:89] found id: ""
	I1204 21:20:27.191027   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.191038   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:27.191045   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:27.191107   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:27.226469   75464 cri.go:89] found id: ""
	I1204 21:20:27.226495   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.226506   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:27.226515   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:27.226579   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:27.258586   75464 cri.go:89] found id: ""
	I1204 21:20:27.258613   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.258623   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:27.258630   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:27.258694   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:27.293119   75464 cri.go:89] found id: ""
	I1204 21:20:27.293156   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.293165   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:27.293174   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:27.293187   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:27.346870   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:27.346903   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:27.360448   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:27.360487   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:27.431571   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:27.431597   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:27.431613   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:27.509664   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:27.509698   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
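
	Each time the probe finds nothing, the same five log sources are gathered. Only the describe-nodes step fails outright: the kubectl shipped with the v1.20.0 binaries cannot reach localhost:8443 because no apiserver is listening, which is what produces the repeated "connection to the server localhost:8443 was refused" blocks. The gathering commands, collected here as a plain shell sketch one could replay on the node (the grouping into one script is illustrative only; each command appears verbatim in the log):

	    # kubelet and CRI-O service logs, kernel warnings, node description, container status
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo journalctl -u crio -n 400
	    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
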
	I1204 21:20:30.049120   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:30.063294   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:30.063360   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:30.097334   75464 cri.go:89] found id: ""
	I1204 21:20:30.097364   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.097376   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:30.097383   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:30.097457   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:30.132734   75464 cri.go:89] found id: ""
	I1204 21:20:30.132757   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.132765   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:30.132771   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:30.132820   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:30.166539   75464 cri.go:89] found id: ""
	I1204 21:20:30.166565   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.166573   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:30.166579   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:30.166637   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:30.201953   75464 cri.go:89] found id: ""
	I1204 21:20:30.201993   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.202007   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:30.202016   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:30.202089   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:30.239062   75464 cri.go:89] found id: ""
	I1204 21:20:30.239102   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.239116   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:30.239132   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:30.239200   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:30.282344   75464 cri.go:89] found id: ""
	I1204 21:20:30.282374   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.282383   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:30.282389   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:30.282439   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:30.316615   75464 cri.go:89] found id: ""
	I1204 21:20:30.316642   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.316653   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:30.316661   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:30.316764   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:30.352333   75464 cri.go:89] found id: ""
	I1204 21:20:30.352358   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.352368   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:30.352380   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:30.352393   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:30.406022   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:30.406058   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:30.419790   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:30.419819   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:30.485693   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:30.485717   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:30.485738   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:30.569313   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:30.569357   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:27.565367   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:30.064552   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:27.266699   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:29.765109   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:28.623821   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:30.628815   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:33.107542   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:33.121934   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:33.122007   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:33.154672   75464 cri.go:89] found id: ""
	I1204 21:20:33.154698   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.154709   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:33.154717   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:33.154784   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:33.189186   75464 cri.go:89] found id: ""
	I1204 21:20:33.189218   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.189229   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:33.189236   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:33.189291   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:33.217618   75464 cri.go:89] found id: ""
	I1204 21:20:33.217637   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.217651   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:33.217657   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:33.217704   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:33.246895   75464 cri.go:89] found id: ""
	I1204 21:20:33.246916   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.246923   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:33.246928   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:33.246970   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:33.278698   75464 cri.go:89] found id: ""
	I1204 21:20:33.278718   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.278725   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:33.278731   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:33.278771   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:33.307671   75464 cri.go:89] found id: ""
	I1204 21:20:33.307703   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.307721   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:33.307729   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:33.307791   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:33.342929   75464 cri.go:89] found id: ""
	I1204 21:20:33.342950   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.342958   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:33.342963   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:33.343009   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:33.374686   75464 cri.go:89] found id: ""
	I1204 21:20:33.374718   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.374730   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:33.374741   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:33.374758   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:33.424117   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:33.424153   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:33.437691   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:33.437724   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:33.517172   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:33.517196   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:33.517209   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:33.597299   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:33.597341   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:36.137849   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:36.152485   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:36.152544   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:36.186867   75464 cri.go:89] found id: ""
	I1204 21:20:36.186895   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.186906   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:36.186920   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:36.186983   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:36.220628   75464 cri.go:89] found id: ""
	I1204 21:20:36.220658   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.220671   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:36.220679   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:36.220735   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:36.254264   75464 cri.go:89] found id: ""
	I1204 21:20:36.254298   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.254310   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:36.254318   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:36.254384   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:36.290929   75464 cri.go:89] found id: ""
	I1204 21:20:36.290956   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.290964   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:36.290970   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:36.291016   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:32.566714   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:35.064488   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:32.266257   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:34.764171   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:36.764331   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:33.123727   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:35.623512   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:37.623921   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:36.326967   75464 cri.go:89] found id: ""
	I1204 21:20:36.326991   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.326999   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:36.327004   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:36.327072   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:36.366892   75464 cri.go:89] found id: ""
	I1204 21:20:36.366916   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.366924   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:36.366930   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:36.366990   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:36.405671   75464 cri.go:89] found id: ""
	I1204 21:20:36.405696   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.405703   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:36.405709   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:36.405762   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:36.439591   75464 cri.go:89] found id: ""
	I1204 21:20:36.439621   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.439628   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:36.439637   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:36.439650   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:36.505710   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:36.505737   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:36.505751   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:36.586111   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:36.586155   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:36.628086   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:36.628121   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:36.680152   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:36.680183   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:39.194223   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:39.207153   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:39.207230   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:39.240867   75464 cri.go:89] found id: ""
	I1204 21:20:39.240895   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.240903   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:39.240908   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:39.240959   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:39.274704   75464 cri.go:89] found id: ""
	I1204 21:20:39.274735   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.274742   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:39.274748   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:39.274800   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:39.307559   75464 cri.go:89] found id: ""
	I1204 21:20:39.307591   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.307601   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:39.307609   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:39.307671   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:39.355489   75464 cri.go:89] found id: ""
	I1204 21:20:39.355524   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.355536   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:39.355543   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:39.355610   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:39.395885   75464 cri.go:89] found id: ""
	I1204 21:20:39.395909   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.395917   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:39.395923   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:39.395976   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:39.428817   75464 cri.go:89] found id: ""
	I1204 21:20:39.428848   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.428858   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:39.428864   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:39.428929   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:39.463827   75464 cri.go:89] found id: ""
	I1204 21:20:39.463857   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.463870   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:39.463877   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:39.463926   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:39.496677   75464 cri.go:89] found id: ""
	I1204 21:20:39.496710   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.496721   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:39.496732   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:39.496755   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:39.533759   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:39.533787   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:39.586373   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:39.586409   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:39.599533   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:39.599568   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:39.670139   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:39.670164   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:39.670176   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:37.065197   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:39.065863   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:41.566053   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:38.765226   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:40.765268   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:39.624452   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:42.123452   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:42.245896   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:42.260604   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:42.260676   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:42.294051   75464 cri.go:89] found id: ""
	I1204 21:20:42.294078   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.294085   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:42.294094   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:42.294160   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:42.327361   75464 cri.go:89] found id: ""
	I1204 21:20:42.327408   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.327421   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:42.327428   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:42.327482   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:42.358701   75464 cri.go:89] found id: ""
	I1204 21:20:42.358731   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.358740   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:42.358746   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:42.358795   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:42.389837   75464 cri.go:89] found id: ""
	I1204 21:20:42.389863   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.389871   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:42.389877   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:42.389926   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:42.430495   75464 cri.go:89] found id: ""
	I1204 21:20:42.430522   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.430534   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:42.430541   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:42.430590   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:42.462918   75464 cri.go:89] found id: ""
	I1204 21:20:42.462949   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.462958   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:42.462963   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:42.463031   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:42.500726   75464 cri.go:89] found id: ""
	I1204 21:20:42.500754   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.500769   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:42.500776   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:42.500842   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:42.538601   75464 cri.go:89] found id: ""
	I1204 21:20:42.538628   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.538635   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:42.538644   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:42.538655   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:42.591308   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:42.591344   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:42.604221   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:42.604244   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:42.679954   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:42.679982   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:42.679999   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:42.768383   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:42.768422   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:45.312054   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:45.325206   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:45.325304   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:45.358781   75464 cri.go:89] found id: ""
	I1204 21:20:45.358809   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.358817   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:45.358824   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:45.358874   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:45.391920   75464 cri.go:89] found id: ""
	I1204 21:20:45.391945   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.391957   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:45.391964   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:45.392030   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:45.426546   75464 cri.go:89] found id: ""
	I1204 21:20:45.426570   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.426578   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:45.426583   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:45.426633   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:45.459432   75464 cri.go:89] found id: ""
	I1204 21:20:45.459462   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.459472   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:45.459479   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:45.459547   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:45.494217   75464 cri.go:89] found id: ""
	I1204 21:20:45.494256   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.494268   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:45.494276   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:45.494352   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:45.531417   75464 cri.go:89] found id: ""
	I1204 21:20:45.531446   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.531458   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:45.531473   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:45.531547   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:45.564973   75464 cri.go:89] found id: ""
	I1204 21:20:45.565005   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.565016   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:45.565024   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:45.565088   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:45.601285   75464 cri.go:89] found id: ""
	I1204 21:20:45.601315   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.601324   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:45.601333   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:45.601344   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:45.656229   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:45.656267   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:45.669851   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:45.669876   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:45.740674   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:45.740704   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:45.740720   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:45.845612   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:45.845657   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:44.065401   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:46.565091   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:42.765303   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:44.765539   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:44.123533   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:46.123595   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:48.389508   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:48.401989   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:48.402052   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:48.438477   75464 cri.go:89] found id: ""
	I1204 21:20:48.438502   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.438514   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:48.438521   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:48.438579   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:48.476096   75464 cri.go:89] found id: ""
	I1204 21:20:48.476129   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.476142   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:48.476151   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:48.476219   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:48.514085   75464 cri.go:89] found id: ""
	I1204 21:20:48.514112   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.514124   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:48.514132   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:48.514208   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:48.551360   75464 cri.go:89] found id: ""
	I1204 21:20:48.551409   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.551420   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:48.551428   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:48.551500   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:48.588424   75464 cri.go:89] found id: ""
	I1204 21:20:48.588463   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.588475   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:48.588483   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:48.588552   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:48.622842   75464 cri.go:89] found id: ""
	I1204 21:20:48.622868   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.622876   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:48.622881   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:48.622942   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:48.665525   75464 cri.go:89] found id: ""
	I1204 21:20:48.665575   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.665585   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:48.665592   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:48.665659   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:48.706554   75464 cri.go:89] found id: ""
	I1204 21:20:48.706581   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.706591   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:48.706602   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:48.706617   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:48.757835   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:48.757870   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:48.771967   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:48.772003   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:48.843093   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:48.843123   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:48.843140   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:48.919637   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:48.919681   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:49.064435   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:51.565505   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:47.265612   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:49.764186   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:51.766867   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:48.637538   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:51.123581   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:51.457865   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:51.472751   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:51.472827   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:51.514777   75464 cri.go:89] found id: ""
	I1204 21:20:51.514814   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.514827   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:51.514835   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:51.514904   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:51.563932   75464 cri.go:89] found id: ""
	I1204 21:20:51.563957   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.563968   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:51.563976   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:51.564042   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:51.606714   75464 cri.go:89] found id: ""
	I1204 21:20:51.606752   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.606765   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:51.606773   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:51.606837   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:51.641391   75464 cri.go:89] found id: ""
	I1204 21:20:51.641427   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.641438   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:51.641446   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:51.641502   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:51.674971   75464 cri.go:89] found id: ""
	I1204 21:20:51.675000   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.675011   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:51.675019   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:51.675082   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:51.709211   75464 cri.go:89] found id: ""
	I1204 21:20:51.709242   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.709250   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:51.709257   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:51.709306   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:51.742425   75464 cri.go:89] found id: ""
	I1204 21:20:51.742460   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.742472   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:51.742480   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:51.742534   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:51.782292   75464 cri.go:89] found id: ""
	I1204 21:20:51.782339   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.782351   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:51.782361   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:51.782380   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:51.833009   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:51.833040   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:51.846862   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:51.846905   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:51.911100   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:51.911129   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:51.911147   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:51.987841   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:51.987879   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:54.527097   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:54.541248   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:54.541344   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:54.582747   75464 cri.go:89] found id: ""
	I1204 21:20:54.582772   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.582780   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:54.582785   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:54.582844   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:54.615891   75464 cri.go:89] found id: ""
	I1204 21:20:54.615914   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.615922   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:54.615927   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:54.615983   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:54.648994   75464 cri.go:89] found id: ""
	I1204 21:20:54.649021   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.649031   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:54.649037   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:54.649095   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:54.683000   75464 cri.go:89] found id: ""
	I1204 21:20:54.683026   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.683034   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:54.683040   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:54.683100   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:54.715182   75464 cri.go:89] found id: ""
	I1204 21:20:54.715211   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.715221   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:54.715228   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:54.715290   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:54.752620   75464 cri.go:89] found id: ""
	I1204 21:20:54.752655   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.752667   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:54.752674   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:54.752740   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:54.790879   75464 cri.go:89] found id: ""
	I1204 21:20:54.790907   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.790919   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:54.790926   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:54.790994   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:54.824340   75464 cri.go:89] found id: ""
	I1204 21:20:54.824380   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.824393   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:54.824405   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:54.824428   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:54.874330   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:54.874365   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:54.887537   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:54.887565   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:54.958675   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:54.958697   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:54.958709   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:55.036909   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:55.036946   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:54.064786   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:56.066189   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:54.264177   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:56.264283   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:53.622703   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:55.623495   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:57.625197   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:57.576603   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:57.590013   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:57.590080   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:57.624654   75464 cri.go:89] found id: ""
	I1204 21:20:57.624690   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.624701   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:57.624710   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:57.624774   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:57.660404   75464 cri.go:89] found id: ""
	I1204 21:20:57.660445   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.660457   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:57.660464   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:57.660528   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:57.693444   75464 cri.go:89] found id: ""
	I1204 21:20:57.693472   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.693483   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:57.693491   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:57.693558   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:57.729361   75464 cri.go:89] found id: ""
	I1204 21:20:57.729387   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.729397   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:57.729403   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:57.729454   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:57.760508   75464 cri.go:89] found id: ""
	I1204 21:20:57.760535   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.760546   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:57.760554   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:57.760608   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:57.794110   75464 cri.go:89] found id: ""
	I1204 21:20:57.794133   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.794142   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:57.794151   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:57.794214   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:57.827907   75464 cri.go:89] found id: ""
	I1204 21:20:57.827936   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.827947   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:57.827954   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:57.828014   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:57.860714   75464 cri.go:89] found id: ""
	I1204 21:20:57.860742   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.860753   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:57.860763   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:57.860778   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:57.926898   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:57.926926   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:57.926943   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:58.000298   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:58.000328   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:58.035675   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:58.035708   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:58.086663   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:58.086698   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:21:00.600646   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:21:00.613485   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:21:00.613550   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:21:00.646324   75464 cri.go:89] found id: ""
	I1204 21:21:00.646349   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.646357   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:21:00.646362   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:21:00.646417   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:21:00.675779   75464 cri.go:89] found id: ""
	I1204 21:21:00.675802   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.675814   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:21:00.675821   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:21:00.675874   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:21:00.706244   75464 cri.go:89] found id: ""
	I1204 21:21:00.706264   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.706272   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:21:00.706278   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:21:00.706334   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:21:00.738086   75464 cri.go:89] found id: ""
	I1204 21:21:00.738114   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.738126   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:21:00.738134   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:21:00.738195   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:21:00.768646   75464 cri.go:89] found id: ""
	I1204 21:21:00.768671   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.768682   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:21:00.768690   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:21:00.768750   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:21:00.797939   75464 cri.go:89] found id: ""
	I1204 21:21:00.797960   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.797968   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:21:00.797973   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:21:00.798016   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:21:00.831928   75464 cri.go:89] found id: ""
	I1204 21:21:00.831959   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.831969   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:21:00.831977   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:21:00.832042   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:21:00.868462   75464 cri.go:89] found id: ""
	I1204 21:21:00.868489   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.868498   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:21:00.868506   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:21:00.868518   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:21:00.881721   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:21:00.881745   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:21:00.949263   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:21:00.949290   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:21:00.949307   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:21:01.031940   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:21:01.031990   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:21:01.070545   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:21:01.070577   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:58.565420   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:59.064856   75137 pod_ready.go:82] duration metric: took 4m0.006397932s for pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace to be "Ready" ...
	E1204 21:20:59.064881   75137 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1204 21:20:59.064889   75137 pod_ready.go:39] duration metric: took 4m8.671233417s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:20:59.064904   75137 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:20:59.064929   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:59.064974   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:59.119318   75137 cri.go:89] found id: "8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78"
	I1204 21:20:59.119340   75137 cri.go:89] found id: ""
	I1204 21:20:59.119347   75137 logs.go:282] 1 containers: [8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78]
	I1204 21:20:59.119421   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.125106   75137 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:59.125184   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:59.159498   75137 cri.go:89] found id: "e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98"
	I1204 21:20:59.159519   75137 cri.go:89] found id: ""
	I1204 21:20:59.159526   75137 logs.go:282] 1 containers: [e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98]
	I1204 21:20:59.159572   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.163228   75137 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:59.163302   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:59.198005   75137 cri.go:89] found id: "58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78"
	I1204 21:20:59.198031   75137 cri.go:89] found id: ""
	I1204 21:20:59.198039   75137 logs.go:282] 1 containers: [58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78]
	I1204 21:20:59.198083   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.202213   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:59.202280   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:59.236775   75137 cri.go:89] found id: "e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df"
	I1204 21:20:59.236796   75137 cri.go:89] found id: ""
	I1204 21:20:59.236803   75137 logs.go:282] 1 containers: [e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df]
	I1204 21:20:59.236852   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.241518   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:59.241600   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:59.279894   75137 cri.go:89] found id: "a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5"
	I1204 21:20:59.279924   75137 cri.go:89] found id: ""
	I1204 21:20:59.279934   75137 logs.go:282] 1 containers: [a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5]
	I1204 21:20:59.279990   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.284325   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:59.284394   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:59.328082   75137 cri.go:89] found id: "982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9"
	I1204 21:20:59.328107   75137 cri.go:89] found id: ""
	I1204 21:20:59.328117   75137 logs.go:282] 1 containers: [982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9]
	I1204 21:20:59.328178   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.332337   75137 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:59.332415   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:59.368110   75137 cri.go:89] found id: ""
	I1204 21:20:59.368135   75137 logs.go:282] 0 containers: []
	W1204 21:20:59.368144   75137 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:59.368149   75137 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1204 21:20:59.368193   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1204 21:20:59.404941   75137 cri.go:89] found id: "07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317"
	I1204 21:20:59.404966   75137 cri.go:89] found id: "05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4"
	I1204 21:20:59.404972   75137 cri.go:89] found id: ""
	I1204 21:20:59.404980   75137 logs.go:282] 2 containers: [07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317 05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4]
	I1204 21:20:59.405041   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.409016   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.412752   75137 logs.go:123] Gathering logs for etcd [e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98] ...
	I1204 21:20:59.412783   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98"
	I1204 21:20:59.463143   75137 logs.go:123] Gathering logs for kube-scheduler [e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df] ...
	I1204 21:20:59.463178   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df"
	I1204 21:20:59.498782   75137 logs.go:123] Gathering logs for kube-controller-manager [982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9] ...
	I1204 21:20:59.498812   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9"
	I1204 21:20:59.555339   75137 logs.go:123] Gathering logs for storage-provisioner [07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317] ...
	I1204 21:20:59.555393   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317"
	I1204 21:20:59.591238   75137 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:59.591267   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:21:00.084121   75137 logs.go:123] Gathering logs for kubelet ...
	I1204 21:21:00.084161   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:21:00.154228   75137 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:21:00.154265   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 21:21:00.284768   75137 logs.go:123] Gathering logs for kube-apiserver [8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78] ...
	I1204 21:21:00.284802   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78"
	I1204 21:21:00.328421   75137 logs.go:123] Gathering logs for storage-provisioner [05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4] ...
	I1204 21:21:00.328452   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4"
	I1204 21:21:00.363327   75137 logs.go:123] Gathering logs for container status ...
	I1204 21:21:00.363352   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:21:00.402072   75137 logs.go:123] Gathering logs for dmesg ...
	I1204 21:21:00.402101   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:21:00.414448   75137 logs.go:123] Gathering logs for coredns [58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78] ...
	I1204 21:21:00.414471   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78"
	I1204 21:21:00.446721   75137 logs.go:123] Gathering logs for kube-proxy [a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5] ...
	I1204 21:21:00.446747   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5"
	I1204 21:20:58.265181   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:00.266303   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:00.124482   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:02.623096   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:03.620358   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:21:03.634415   75464 kubeadm.go:597] duration metric: took 4m4.247057397s to restartPrimaryControlPlane
	W1204 21:21:03.634499   75464 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1204 21:21:03.634530   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1204 21:21:02.985608   75137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:21:03.002352   75137 api_server.go:72] duration metric: took 4m20.333935611s to wait for apiserver process to appear ...
	I1204 21:21:03.002379   75137 api_server.go:88] waiting for apiserver healthz status ...
	I1204 21:21:03.002420   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:21:03.002475   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:21:03.043343   75137 cri.go:89] found id: "8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78"
	I1204 21:21:03.043387   75137 cri.go:89] found id: ""
	I1204 21:21:03.043398   75137 logs.go:282] 1 containers: [8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78]
	I1204 21:21:03.043451   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.047523   75137 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:21:03.047591   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:21:03.085843   75137 cri.go:89] found id: "e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98"
	I1204 21:21:03.085868   75137 cri.go:89] found id: ""
	I1204 21:21:03.085878   75137 logs.go:282] 1 containers: [e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98]
	I1204 21:21:03.085936   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.089957   75137 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:21:03.090008   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:21:03.124571   75137 cri.go:89] found id: "58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78"
	I1204 21:21:03.124590   75137 cri.go:89] found id: ""
	I1204 21:21:03.124597   75137 logs.go:282] 1 containers: [58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78]
	I1204 21:21:03.124633   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.128183   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:21:03.128241   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:21:03.159912   75137 cri.go:89] found id: "e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df"
	I1204 21:21:03.159935   75137 cri.go:89] found id: ""
	I1204 21:21:03.159942   75137 logs.go:282] 1 containers: [e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df]
	I1204 21:21:03.159991   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.163882   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:21:03.163934   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:21:03.202966   75137 cri.go:89] found id: "a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5"
	I1204 21:21:03.202983   75137 cri.go:89] found id: ""
	I1204 21:21:03.202990   75137 logs.go:282] 1 containers: [a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5]
	I1204 21:21:03.203028   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.206601   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:21:03.206656   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:21:03.239436   75137 cri.go:89] found id: "982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9"
	I1204 21:21:03.239461   75137 cri.go:89] found id: ""
	I1204 21:21:03.239471   75137 logs.go:282] 1 containers: [982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9]
	I1204 21:21:03.239522   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.243345   75137 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:21:03.243409   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:21:03.284225   75137 cri.go:89] found id: ""
	I1204 21:21:03.284260   75137 logs.go:282] 0 containers: []
	W1204 21:21:03.284269   75137 logs.go:284] No container was found matching "kindnet"
	I1204 21:21:03.284275   75137 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1204 21:21:03.284329   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1204 21:21:03.320487   75137 cri.go:89] found id: "07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317"
	I1204 21:21:03.320510   75137 cri.go:89] found id: "05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4"
	I1204 21:21:03.320514   75137 cri.go:89] found id: ""
	I1204 21:21:03.320520   75137 logs.go:282] 2 containers: [07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317 05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4]
	I1204 21:21:03.320572   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.324553   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.328284   75137 logs.go:123] Gathering logs for kubelet ...
	I1204 21:21:03.328307   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:21:03.398873   75137 logs.go:123] Gathering logs for kube-apiserver [8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78] ...
	I1204 21:21:03.398914   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78"
	I1204 21:21:03.452146   75137 logs.go:123] Gathering logs for kube-proxy [a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5] ...
	I1204 21:21:03.452175   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5"
	I1204 21:21:03.489830   75137 logs.go:123] Gathering logs for storage-provisioner [05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4] ...
	I1204 21:21:03.489860   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4"
	I1204 21:21:03.525086   75137 logs.go:123] Gathering logs for container status ...
	I1204 21:21:03.525115   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:21:03.569090   75137 logs.go:123] Gathering logs for kube-controller-manager [982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9] ...
	I1204 21:21:03.569123   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9"
	I1204 21:21:03.634685   75137 logs.go:123] Gathering logs for storage-provisioner [07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317] ...
	I1204 21:21:03.634714   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317"
	I1204 21:21:03.670229   75137 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:21:03.670258   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:21:04.127440   75137 logs.go:123] Gathering logs for dmesg ...
	I1204 21:21:04.127483   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:21:04.143058   75137 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:21:04.143102   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 21:21:04.254811   75137 logs.go:123] Gathering logs for etcd [e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98] ...
	I1204 21:21:04.254847   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98"
	I1204 21:21:04.310269   75137 logs.go:123] Gathering logs for coredns [58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78] ...
	I1204 21:21:04.310303   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78"
	I1204 21:21:04.344331   75137 logs.go:123] Gathering logs for kube-scheduler [e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df] ...
	I1204 21:21:04.344365   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df"
	I1204 21:21:06.883632   75137 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I1204 21:21:06.887845   75137 api_server.go:279] https://192.168.39.82:8443/healthz returned 200:
	ok
	I1204 21:21:06.888685   75137 api_server.go:141] control plane version: v1.31.2
	I1204 21:21:06.888701   75137 api_server.go:131] duration metric: took 3.886315455s to wait for apiserver health ...
	I1204 21:21:06.888708   75137 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 21:21:06.888730   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:21:06.888774   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:21:06.930295   75137 cri.go:89] found id: "8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78"
	I1204 21:21:06.930316   75137 cri.go:89] found id: ""
	I1204 21:21:06.930324   75137 logs.go:282] 1 containers: [8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78]
	I1204 21:21:06.930372   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:06.934529   75137 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:21:06.934620   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:21:06.970613   75137 cri.go:89] found id: "e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98"
	I1204 21:21:06.970641   75137 cri.go:89] found id: ""
	I1204 21:21:06.970651   75137 logs.go:282] 1 containers: [e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98]
	I1204 21:21:06.970696   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:06.974756   75137 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:21:06.974824   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:21:07.010285   75137 cri.go:89] found id: "58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78"
	I1204 21:21:07.010310   75137 cri.go:89] found id: ""
	I1204 21:21:07.010319   75137 logs.go:282] 1 containers: [58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78]
	I1204 21:21:07.010362   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:02.764114   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:04.764230   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:06.764928   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:04.623324   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:06.624331   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:08.140159   75464 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.505600399s)
	I1204 21:21:08.140254   75464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:21:08.159450   75464 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:21:08.169756   75464 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:21:08.179705   75464 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:21:08.179729   75464 kubeadm.go:157] found existing configuration files:
	
	I1204 21:21:08.179783   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:21:08.188796   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:21:08.188871   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:21:08.197758   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:21:08.206347   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:21:08.206409   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:21:08.215431   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:21:08.224674   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:21:08.224737   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:21:08.234337   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:21:08.243774   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:21:08.243833   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:21:08.253498   75464 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 21:21:08.321237   75464 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1204 21:21:08.321370   75464 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 21:21:08.458714   75464 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 21:21:08.458866   75464 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 21:21:08.459026   75464 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1204 21:21:08.639536   75464 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 21:21:08.641635   75464 out.go:235]   - Generating certificates and keys ...
	I1204 21:21:08.641739   75464 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 21:21:08.641826   75464 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 21:21:08.641935   75464 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1204 21:21:08.642068   75464 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1204 21:21:08.642175   75464 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1204 21:21:08.642223   75464 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1204 21:21:08.642498   75464 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1204 21:21:08.642914   75464 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1204 21:21:08.643567   75464 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1204 21:21:08.644276   75464 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1204 21:21:08.644502   75464 kubeadm.go:310] [certs] Using the existing "sa" key
	I1204 21:21:08.644553   75464 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 21:21:08.800107   75464 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 21:21:08.920050   75464 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 21:21:09.376869   75464 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 21:21:09.463826   75464 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 21:21:09.479167   75464 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 21:21:09.479321   75464 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 21:21:09.479434   75464 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 21:21:09.606736   75464 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 21:21:07.014564   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:21:07.014628   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:21:07.054654   75137 cri.go:89] found id: "e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df"
	I1204 21:21:07.054678   75137 cri.go:89] found id: ""
	I1204 21:21:07.054686   75137 logs.go:282] 1 containers: [e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df]
	I1204 21:21:07.054734   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:07.058625   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:21:07.058683   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:21:07.094238   75137 cri.go:89] found id: "a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5"
	I1204 21:21:07.094280   75137 cri.go:89] found id: ""
	I1204 21:21:07.094291   75137 logs.go:282] 1 containers: [a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5]
	I1204 21:21:07.094359   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:07.098427   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:21:07.098484   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:21:07.135055   75137 cri.go:89] found id: "982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9"
	I1204 21:21:07.135079   75137 cri.go:89] found id: ""
	I1204 21:21:07.135088   75137 logs.go:282] 1 containers: [982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9]
	I1204 21:21:07.135145   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:07.139488   75137 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:21:07.139564   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:21:07.175963   75137 cri.go:89] found id: ""
	I1204 21:21:07.175989   75137 logs.go:282] 0 containers: []
	W1204 21:21:07.176002   75137 logs.go:284] No container was found matching "kindnet"
	I1204 21:21:07.176009   75137 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1204 21:21:07.176069   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1204 21:21:07.212003   75137 cri.go:89] found id: "07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317"
	I1204 21:21:07.212034   75137 cri.go:89] found id: "05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4"
	I1204 21:21:07.212040   75137 cri.go:89] found id: ""
	I1204 21:21:07.212050   75137 logs.go:282] 2 containers: [07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317 05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4]
	I1204 21:21:07.212115   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:07.216184   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:07.219773   75137 logs.go:123] Gathering logs for dmesg ...
	I1204 21:21:07.219803   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:21:07.233282   75137 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:21:07.233307   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 21:21:07.341593   75137 logs.go:123] Gathering logs for etcd [e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98] ...
	I1204 21:21:07.341626   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98"
	I1204 21:21:07.393994   75137 logs.go:123] Gathering logs for kube-scheduler [e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df] ...
	I1204 21:21:07.394024   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df"
	I1204 21:21:07.437177   75137 logs.go:123] Gathering logs for storage-provisioner [07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317] ...
	I1204 21:21:07.437205   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317"
	I1204 21:21:07.469913   75137 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:21:07.469952   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:21:07.822608   75137 logs.go:123] Gathering logs for container status ...
	I1204 21:21:07.822652   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:21:07.861671   75137 logs.go:123] Gathering logs for kubelet ...
	I1204 21:21:07.861703   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:21:07.933833   75137 logs.go:123] Gathering logs for kube-apiserver [8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78] ...
	I1204 21:21:07.933876   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78"
	I1204 21:21:07.976184   75137 logs.go:123] Gathering logs for coredns [58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78] ...
	I1204 21:21:07.976215   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78"
	I1204 21:21:08.011181   75137 logs.go:123] Gathering logs for kube-proxy [a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5] ...
	I1204 21:21:08.011206   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5"
	I1204 21:21:08.053404   75137 logs.go:123] Gathering logs for kube-controller-manager [982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9] ...
	I1204 21:21:08.053430   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9"
	I1204 21:21:08.113301   75137 logs.go:123] Gathering logs for storage-provisioner [05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4] ...
	I1204 21:21:08.113402   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4"
	I1204 21:21:10.665164   75137 system_pods.go:59] 8 kube-system pods found
	I1204 21:21:10.665195   75137 system_pods.go:61] "coredns-7c65d6cfc9-ct5xn" [be113b96-b21f-4fd5-8cd9-11b149a0a838] Running
	I1204 21:21:10.665200   75137 system_pods.go:61] "etcd-embed-certs-566991" [23603883-2c42-48ff-95f5-d58f04bab630] Running
	I1204 21:21:10.665204   75137 system_pods.go:61] "kube-apiserver-embed-certs-566991" [880279d0-9c57-44b1-b223-cea07fc8552e] Running
	I1204 21:21:10.665208   75137 system_pods.go:61] "kube-controller-manager-embed-certs-566991" [1512be05-cbf1-48ca-a0a5-db1e320040e0] Running
	I1204 21:21:10.665211   75137 system_pods.go:61] "kube-proxy-4fv72" [22b84591-6767-4414-9869-9d89206a03f2] Running
	I1204 21:21:10.665215   75137 system_pods.go:61] "kube-scheduler-embed-certs-566991" [1eca2a77-0f2a-4d94-992e-22acf8f54649] Running
	I1204 21:21:10.665220   75137 system_pods.go:61] "metrics-server-6867b74b74-9vlcd" [1acb08f3-e403-458d-b3e2-e32c07da6afb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:21:10.665225   75137 system_pods.go:61] "storage-provisioner" [f8acdb07-16e7-457f-81b8-85416b849890] Running
	I1204 21:21:10.665234   75137 system_pods.go:74] duration metric: took 3.776519738s to wait for pod list to return data ...
	I1204 21:21:10.665240   75137 default_sa.go:34] waiting for default service account to be created ...
	I1204 21:21:10.667483   75137 default_sa.go:45] found service account: "default"
	I1204 21:21:10.667501   75137 default_sa.go:55] duration metric: took 2.252763ms for default service account to be created ...
	I1204 21:21:10.667508   75137 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 21:21:10.671331   75137 system_pods.go:86] 8 kube-system pods found
	I1204 21:21:10.671351   75137 system_pods.go:89] "coredns-7c65d6cfc9-ct5xn" [be113b96-b21f-4fd5-8cd9-11b149a0a838] Running
	I1204 21:21:10.671356   75137 system_pods.go:89] "etcd-embed-certs-566991" [23603883-2c42-48ff-95f5-d58f04bab630] Running
	I1204 21:21:10.671360   75137 system_pods.go:89] "kube-apiserver-embed-certs-566991" [880279d0-9c57-44b1-b223-cea07fc8552e] Running
	I1204 21:21:10.671363   75137 system_pods.go:89] "kube-controller-manager-embed-certs-566991" [1512be05-cbf1-48ca-a0a5-db1e320040e0] Running
	I1204 21:21:10.671366   75137 system_pods.go:89] "kube-proxy-4fv72" [22b84591-6767-4414-9869-9d89206a03f2] Running
	I1204 21:21:10.671386   75137 system_pods.go:89] "kube-scheduler-embed-certs-566991" [1eca2a77-0f2a-4d94-992e-22acf8f54649] Running
	I1204 21:21:10.671396   75137 system_pods.go:89] "metrics-server-6867b74b74-9vlcd" [1acb08f3-e403-458d-b3e2-e32c07da6afb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:21:10.671402   75137 system_pods.go:89] "storage-provisioner" [f8acdb07-16e7-457f-81b8-85416b849890] Running
	I1204 21:21:10.671414   75137 system_pods.go:126] duration metric: took 3.900254ms to wait for k8s-apps to be running ...
	I1204 21:21:10.671426   75137 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 21:21:10.671467   75137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:21:10.687086   75137 system_svc.go:56] duration metric: took 15.655514ms WaitForService to wait for kubelet
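The WaitForService step above shells out to systemctl is-active for the kubelet unit and only proceeds once that exits 0. A standalone sketch of that check (illustrative only, not minikube's code; it checks the kubelet unit directly rather than the exact "service kubelet" argument string from the log):

    // kubelet_active.go - illustrative recreation of the logged systemctl check.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// "systemctl is-active --quiet kubelet" exits 0 only when the unit is active
    	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
    		fmt.Println("kubelet service is not active:", err)
    		return
    	}
    	fmt.Println("kubelet service is active")
    }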
	I1204 21:21:10.687105   75137 kubeadm.go:582] duration metric: took 4m28.018694904s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 21:21:10.687123   75137 node_conditions.go:102] verifying NodePressure condition ...
	I1204 21:21:10.689250   75137 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 21:21:10.689267   75137 node_conditions.go:123] node cpu capacity is 2
	I1204 21:21:10.689277   75137 node_conditions.go:105] duration metric: took 2.149506ms to run NodePressure ...
	I1204 21:21:10.689287   75137 start.go:241] waiting for startup goroutines ...
	I1204 21:21:10.689296   75137 start.go:246] waiting for cluster config update ...
	I1204 21:21:10.689306   75137 start.go:255] writing updated cluster config ...
	I1204 21:21:10.689547   75137 ssh_runner.go:195] Run: rm -f paused
	I1204 21:21:10.738387   75137 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1204 21:21:10.740254   75137 out.go:177] * Done! kubectl is now configured to use "embed-certs-566991" cluster and "default" namespace by default
	I1204 21:21:09.608599   75464 out.go:235]   - Booting up control plane ...
	I1204 21:21:09.608729   75464 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 21:21:09.613477   75464 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 21:21:09.614444   75464 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 21:21:09.623091   75464 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 21:21:09.626249   75464 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1204 21:21:08.765095   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:10.765470   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:09.125585   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:11.624603   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:13.264238   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:15.265563   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:13.624873   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:16.123483   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:17.764078   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:19.765682   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:18.626401   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:21.125606   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:22.264711   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:24.265632   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:26.764992   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:23.623351   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:25.623547   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:27.624579   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:28.765133   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:31.264203   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:30.123937   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:32.623876   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:33.264732   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:35.765165   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:35.123685   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:37.123863   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:38.264907   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:40.265233   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:39.124651   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:40.117461   75746 pod_ready.go:82] duration metric: took 4m0.000125257s for pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace to be "Ready" ...
	E1204 21:21:40.117486   75746 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace to be "Ready" (will not retry!)
	I1204 21:21:40.117508   75746 pod_ready.go:39] duration metric: took 4m13.544219225s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:21:40.117564   75746 kubeadm.go:597] duration metric: took 4m22.244889794s to restartPrimaryControlPlane
	W1204 21:21:40.117617   75746 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1204 21:21:40.117646   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1204 21:21:42.764614   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:44.765642   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:49.627118   75464 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1204 21:21:49.627744   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:21:49.627940   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:21:47.264873   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:49.765483   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:54.628283   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:21:54.628526   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:21:52.264073   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:54.264333   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:56.267410   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:58.764653   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:00.765653   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:04.628774   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:22:04.629010   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
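The repeated [kubelet-check] lines above are kubeadm polling http://localhost:10248/healthz until the kubelet answers, within a 4m0s budget; each "connection refused" entry is one failed attempt. A rough standalone sketch of such a poll (the URL and timeout are taken from the log, the retry interval and structure are assumptions, not kubeadm's or minikube's actual code):

    // healthz_poll.go - illustrative sketch of the kubelet health poll logged above.
    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func waitForKubeletHealthy(url string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := http.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // kubelet answered /healthz with 200
    			}
    		}
    		// mirrors the "connection refused" retries seen in the log
    		time.Sleep(5 * time.Second)
    	}
    	return fmt.Errorf("timed out waiting %s for %s", timeout, url)
    }

    func main() {
    	if err := waitForKubeletHealthy("http://localhost:10248/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }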
	I1204 21:22:06.288530   75746 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.170858751s)
	I1204 21:22:06.288613   75746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:22:06.309458   75746 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:22:06.322805   75746 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:22:06.336482   75746 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:22:06.336508   75746 kubeadm.go:157] found existing configuration files:
	
	I1204 21:22:06.336558   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1204 21:22:06.348599   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:22:06.348656   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:22:06.362232   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1204 21:22:06.379259   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:22:06.379348   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:22:06.411281   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1204 21:22:06.422033   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:22:06.422108   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:22:06.432505   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1204 21:22:06.441734   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:22:06.441789   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
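The grep/rm sequence at 21:22:06 checks whether each kubeconfig under /etc/kubernetes still points at https://control-plane.minikube.internal:8444 and removes any file that is missing or stale so that the following kubeadm init can regenerate it. An illustrative local recreation of that cleanup (the file list and endpoint come from the log; everything else is assumed and this is not minikube's implementation):

    // stale_kubeconfig.go - illustrative recreation of the grep/rm cleanup logged above.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8444" // from the logged grep
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			// missing or pointing at a different endpoint: remove it so
    			// "kubeadm init" regenerates it, as the log shows with "sudo rm -f"
    			_ = os.Remove(f)
    			fmt.Println("removed stale", f)
    		}
    	}
    }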
	I1204 21:22:06.451237   75746 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 21:22:06.498732   75746 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1204 21:22:06.498852   75746 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 21:22:06.614368   75746 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 21:22:06.614469   75746 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 21:22:06.614599   75746 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1204 21:22:06.623454   75746 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 21:22:03.264992   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:05.765395   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:06.625133   75746 out.go:235]   - Generating certificates and keys ...
	I1204 21:22:06.625245   75746 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 21:22:06.625364   75746 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 21:22:06.625491   75746 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1204 21:22:06.625594   75746 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1204 21:22:06.625712   75746 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1204 21:22:06.625792   75746 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1204 21:22:06.625889   75746 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1204 21:22:06.625984   75746 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1204 21:22:06.626100   75746 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1204 21:22:06.626210   75746 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1204 21:22:06.626277   75746 kubeadm.go:310] [certs] Using the existing "sa" key
	I1204 21:22:06.626348   75746 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 21:22:06.726450   75746 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 21:22:06.873790   75746 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1204 21:22:07.175994   75746 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 21:22:07.250702   75746 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 21:22:07.320319   75746 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 21:22:07.320901   75746 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 21:22:07.323434   75746 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 21:22:07.325316   75746 out.go:235]   - Booting up control plane ...
	I1204 21:22:07.325446   75746 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 21:22:07.325543   75746 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 21:22:07.326549   75746 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 21:22:07.347127   75746 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 21:22:07.353453   75746 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 21:22:07.353587   75746 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 21:22:07.488768   75746 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1204 21:22:07.488952   75746 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1204 21:22:07.765784   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:10.265661   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:11.758507   75012 pod_ready.go:82] duration metric: took 4m0.000236813s for pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace to be "Ready" ...
	E1204 21:22:11.758550   75012 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace to be "Ready" (will not retry!)
	I1204 21:22:11.758567   75012 pod_ready.go:39] duration metric: took 4m14.511728433s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:22:11.758593   75012 kubeadm.go:597] duration metric: took 4m21.138454983s to restartPrimaryControlPlane
	W1204 21:22:11.758643   75012 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1204 21:22:11.758668   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1204 21:22:07.993325   75746 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 504.943417ms
	I1204 21:22:07.993405   75746 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1204 21:22:12.997741   75746 kubeadm.go:310] [api-check] The API server is healthy after 5.001906934s
	I1204 21:22:13.012187   75746 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1204 21:22:13.029586   75746 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1204 21:22:13.062375   75746 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1204 21:22:13.062633   75746 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-439360 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1204 21:22:13.077941   75746 kubeadm.go:310] [bootstrap-token] Using token: 5mut2g.pz4sir8q7093cs2b
	I1204 21:22:13.079394   75746 out.go:235]   - Configuring RBAC rules ...
	I1204 21:22:13.079556   75746 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1204 21:22:13.088458   75746 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1204 21:22:13.095952   75746 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1204 21:22:13.103530   75746 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1204 21:22:13.106875   75746 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1204 21:22:13.110658   75746 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1204 21:22:13.404565   75746 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1204 21:22:13.831997   75746 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1204 21:22:14.404650   75746 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1204 21:22:14.404678   75746 kubeadm.go:310] 
	I1204 21:22:14.404764   75746 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1204 21:22:14.404789   75746 kubeadm.go:310] 
	I1204 21:22:14.404894   75746 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1204 21:22:14.404903   75746 kubeadm.go:310] 
	I1204 21:22:14.404930   75746 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1204 21:22:14.404981   75746 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1204 21:22:14.405060   75746 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1204 21:22:14.405088   75746 kubeadm.go:310] 
	I1204 21:22:14.405203   75746 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1204 21:22:14.405216   75746 kubeadm.go:310] 
	I1204 21:22:14.405286   75746 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1204 21:22:14.405296   75746 kubeadm.go:310] 
	I1204 21:22:14.405370   75746 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1204 21:22:14.405487   75746 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1204 21:22:14.405604   75746 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1204 21:22:14.405621   75746 kubeadm.go:310] 
	I1204 21:22:14.405701   75746 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1204 21:22:14.405772   75746 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1204 21:22:14.405781   75746 kubeadm.go:310] 
	I1204 21:22:14.405853   75746 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 5mut2g.pz4sir8q7093cs2b \
	I1204 21:22:14.406000   75746 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 \
	I1204 21:22:14.406034   75746 kubeadm.go:310] 	--control-plane 
	I1204 21:22:14.406043   75746 kubeadm.go:310] 
	I1204 21:22:14.406112   75746 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1204 21:22:14.406119   75746 kubeadm.go:310] 
	I1204 21:22:14.406241   75746 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 5mut2g.pz4sir8q7093cs2b \
	I1204 21:22:14.406397   75746 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 
	I1204 21:22:14.407013   75746 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 21:22:14.407049   75746 cni.go:84] Creating CNI manager for ""
	I1204 21:22:14.407060   75746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:22:14.408949   75746 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 21:22:14.410361   75746 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 21:22:14.420749   75746 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1204 21:22:14.439214   75746 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 21:22:14.439295   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:14.439322   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-439360 minikube.k8s.io/updated_at=2024_12_04T21_22_14_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59 minikube.k8s.io/name=default-k8s-diff-port-439360 minikube.k8s.io/primary=true
	I1204 21:22:14.459582   75746 ops.go:34] apiserver oom_adj: -16
	I1204 21:22:14.637938   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:15.138980   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:15.638942   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:16.138381   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:16.638528   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:17.138320   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:17.637995   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:18.138540   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:18.638754   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:19.138113   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:19.246385   75746 kubeadm.go:1113] duration metric: took 4.807160948s to wait for elevateKubeSystemPrivileges
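The repeated "kubectl get sa default" runs between 21:22:14 and 21:22:19 are a retry loop: the minikube-rbac cluster-admin binding for kube-system:default only becomes useful once the default serviceaccount exists, so the tooling polls for it roughly every half second before declaring elevateKubeSystemPrivileges done. A minimal sketch of such a loop, reusing the binary and kubeconfig paths from the log (the 20-attempt cap is an assumption of this sketch):

    // wait_default_sa.go - illustrative sketch of the serviceaccount wait loop logged above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	kubectl := "/var/lib/minikube/binaries/v1.31.2/kubectl"
    	args := []string{"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig"}
    	for i := 0; i < 20; i++ {
    		if err := exec.Command(kubectl, args...).Run(); err == nil {
    			fmt.Println("default serviceaccount exists")
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // the log shows retries roughly every half second
    	}
    	fmt.Println("gave up waiting for the default serviceaccount")
    }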
	I1204 21:22:19.246430   75746 kubeadm.go:394] duration metric: took 5m1.419721853s to StartCluster
	I1204 21:22:19.246455   75746 settings.go:142] acquiring lock: {Name:mk51df5708ef0b8fe125ead566b8d3e857234e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:22:19.246556   75746 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:22:19.249082   75746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/kubeconfig: {Name:mk338cb7deb77a607d0c199d94a556bdfd19bef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:22:19.249393   75746 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.171 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 21:22:19.249684   75746 config.go:182] Loaded profile config "default-k8s-diff-port-439360": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:22:19.249745   75746 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 21:22:19.249861   75746 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-439360"
	I1204 21:22:19.249884   75746 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-439360"
	W1204 21:22:19.249896   75746 addons.go:243] addon storage-provisioner should already be in state true
	I1204 21:22:19.249928   75746 host.go:66] Checking if "default-k8s-diff-port-439360" exists ...
	I1204 21:22:19.250440   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:19.250479   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:19.250557   75746 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-439360"
	I1204 21:22:19.250580   75746 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-439360"
	I1204 21:22:19.250737   75746 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-439360"
	I1204 21:22:19.250757   75746 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-439360"
	W1204 21:22:19.250765   75746 addons.go:243] addon metrics-server should already be in state true
	I1204 21:22:19.250798   75746 host.go:66] Checking if "default-k8s-diff-port-439360" exists ...
	I1204 21:22:19.251048   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:19.251091   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:19.251249   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:19.251294   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:19.251622   75746 out.go:177] * Verifying Kubernetes components...
	I1204 21:22:19.252993   75746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:22:19.269179   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44783
	I1204 21:22:19.269441   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35391
	I1204 21:22:19.269740   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:19.269833   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:19.270300   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:22:19.270324   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:19.270400   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:22:19.270418   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:19.270418   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34247
	I1204 21:22:19.270725   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:19.270832   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:19.270866   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:19.270904   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetState
	I1204 21:22:19.271326   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:22:19.271337   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:19.271415   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:19.271463   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:19.271686   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:19.272330   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:19.272388   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:19.274803   75746 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-439360"
	W1204 21:22:19.274824   75746 addons.go:243] addon default-storageclass should already be in state true
	I1204 21:22:19.274853   75746 host.go:66] Checking if "default-k8s-diff-port-439360" exists ...
	I1204 21:22:19.275234   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:19.275267   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:19.291309   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40009
	I1204 21:22:19.291961   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:19.291985   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41279
	I1204 21:22:19.292400   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:22:19.292420   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:19.292783   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:19.292833   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:19.293039   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetState
	I1204 21:22:19.293113   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36479
	I1204 21:22:19.293349   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:22:19.293362   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:19.293726   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:19.294210   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:19.294239   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:19.294431   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:19.294890   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:22:19.294908   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:19.295400   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:19.295584   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetState
	I1204 21:22:19.295720   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:22:19.297304   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:22:19.297592   75746 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:22:19.298747   75746 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1204 21:22:19.299871   75746 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:22:19.299895   75746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 21:22:19.299916   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:22:19.301582   75746 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1204 21:22:19.301598   75746 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1204 21:22:19.301612   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:22:19.303499   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:22:19.305018   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:22:19.305367   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:22:19.305393   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:22:19.305566   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:22:19.305775   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:22:19.305848   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:22:19.305869   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:22:19.305912   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:22:19.306121   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:22:19.306313   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:22:19.306389   75746 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa Username:docker}
	I1204 21:22:19.306691   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:22:19.306872   75746 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa Username:docker}
	I1204 21:22:19.314163   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42045
	I1204 21:22:19.314569   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:19.315106   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:22:19.315134   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:19.315690   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:19.315993   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetState
	I1204 21:22:19.317928   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:22:19.318171   75746 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 21:22:19.318182   75746 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 21:22:19.318195   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:22:19.321203   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:22:19.321582   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:22:19.321599   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:22:19.321855   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:22:19.322059   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:22:19.322226   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:22:19.322367   75746 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa Username:docker}
	I1204 21:22:19.522886   75746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:22:19.577656   75746 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-439360" to be "Ready" ...
	I1204 21:22:19.586712   75746 node_ready.go:49] node "default-k8s-diff-port-439360" has status "Ready":"True"
	I1204 21:22:19.586737   75746 node_ready.go:38] duration metric: took 9.034653ms for node "default-k8s-diff-port-439360" to be "Ready" ...
	I1204 21:22:19.586745   75746 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:22:19.595683   75746 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4jmcl" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:19.650177   75746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:22:19.708333   75746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 21:22:19.721106   75746 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1204 21:22:19.721151   75746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1204 21:22:19.793058   75746 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1204 21:22:19.793105   75746 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1204 21:22:19.926884   75746 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 21:22:19.926911   75746 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1204 21:22:20.028322   75746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 21:22:20.668142   75746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.017919983s)
	I1204 21:22:20.668197   75746 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:20.668200   75746 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:20.668223   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Close
	I1204 21:22:20.668211   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Close
	I1204 21:22:20.668613   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Closing plugin on server side
	I1204 21:22:20.668627   75746 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:20.668640   75746 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:20.668660   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Closing plugin on server side
	I1204 21:22:20.668687   75746 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:20.668701   75746 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:20.668710   75746 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:20.668729   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Close
	I1204 21:22:20.668663   75746 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:20.668789   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Close
	I1204 21:22:20.668936   75746 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:20.668981   75746 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:20.670242   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Closing plugin on server side
	I1204 21:22:20.670255   75746 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:20.670276   75746 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:20.713659   75746 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:20.713680   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Close
	I1204 21:22:20.714056   75746 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:20.714107   75746 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:20.714076   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Closing plugin on server side
	I1204 21:22:21.064703   75746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.03633998s)
	I1204 21:22:21.064768   75746 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:21.064783   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Close
	I1204 21:22:21.065188   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Closing plugin on server side
	I1204 21:22:21.065197   75746 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:21.065212   75746 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:21.065220   75746 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:21.065233   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Close
	I1204 21:22:21.065472   75746 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:21.065490   75746 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:21.065502   75746 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-439360"
	I1204 21:22:21.067198   75746 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1204 21:22:21.068410   75746 addons.go:510] duration metric: took 1.818663539s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1204 21:22:21.602398   75746 pod_ready.go:93] pod "coredns-7c65d6cfc9-4jmcl" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:21.602428   75746 pod_ready.go:82] duration metric: took 2.006718822s for pod "coredns-7c65d6cfc9-4jmcl" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:21.602442   75746 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tzhgh" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:24.629623   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:22:24.629860   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:22:23.610993   75746 pod_ready.go:103] pod "coredns-7c65d6cfc9-tzhgh" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:24.117785   75746 pod_ready.go:93] pod "coredns-7c65d6cfc9-tzhgh" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:24.117813   75746 pod_ready.go:82] duration metric: took 2.51536279s for pod "coredns-7c65d6cfc9-tzhgh" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:24.117824   75746 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:24.124800   75746 pod_ready.go:93] pod "etcd-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:24.124823   75746 pod_ready.go:82] duration metric: took 6.990353ms for pod "etcd-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:24.124832   75746 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:24.131040   75746 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:24.131061   75746 pod_ready.go:82] duration metric: took 6.222286ms for pod "kube-apiserver-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:24.131070   75746 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:26.137404   75746 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:26.637414   75746 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:26.637440   75746 pod_ready.go:82] duration metric: took 2.506362827s for pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:26.637452   75746 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hclwt" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:26.641759   75746 pod_ready.go:93] pod "kube-proxy-hclwt" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:26.641781   75746 pod_ready.go:82] duration metric: took 4.323262ms for pod "kube-proxy-hclwt" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:26.641793   75746 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:28.148731   75746 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:28.148753   75746 pod_ready.go:82] duration metric: took 1.50695195s for pod "kube-scheduler-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:28.148761   75746 pod_ready.go:39] duration metric: took 8.562005978s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:22:28.148776   75746 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:22:28.148825   75746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:22:28.165983   75746 api_server.go:72] duration metric: took 8.916515972s to wait for apiserver process to appear ...
	I1204 21:22:28.166013   75746 api_server.go:88] waiting for apiserver healthz status ...
	I1204 21:22:28.166034   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:22:28.170244   75746 api_server.go:279] https://192.168.50.171:8444/healthz returned 200:
	ok
	I1204 21:22:28.171215   75746 api_server.go:141] control plane version: v1.31.2
	I1204 21:22:28.171245   75746 api_server.go:131] duration metric: took 5.223023ms to wait for apiserver health ...
	I1204 21:22:28.171257   75746 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 21:22:28.177524   75746 system_pods.go:59] 9 kube-system pods found
	I1204 21:22:28.177548   75746 system_pods.go:61] "coredns-7c65d6cfc9-4jmcl" [e8d193d2-0374-43a5-addd-96cdee963cc9] Running
	I1204 21:22:28.177553   75746 system_pods.go:61] "coredns-7c65d6cfc9-tzhgh" [aafae17b-5a47-4a70-bc80-94cbbca8fe38] Running
	I1204 21:22:28.177557   75746 system_pods.go:61] "etcd-default-k8s-diff-port-439360" [e4293118-8718-4722-b6b6-722896a605e9] Running
	I1204 21:22:28.177560   75746 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-439360" [71be94bb-bd89-4f40-85eb-0a672f29d959] Running
	I1204 21:22:28.177563   75746 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-439360" [85946631-ff2a-4203-800d-00a23a3c3408] Running
	I1204 21:22:28.177567   75746 system_pods.go:61] "kube-proxy-hclwt" [eef6c093-2186-437b-9a13-c8bafbcb4f78] Running
	I1204 21:22:28.177570   75746 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-439360" [0ed74c15-2c48-4a62-8bbf-0f2a272bb119] Running
	I1204 21:22:28.177577   75746 system_pods.go:61] "metrics-server-6867b74b74-v88hj" [9b6c696c-e110-4d53-98c9-41069407b45b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:22:28.177582   75746 system_pods.go:61] "storage-provisioner" [aac88490-a422-4889-bff4-b180638846cf] Running
	I1204 21:22:28.177592   75746 system_pods.go:74] duration metric: took 6.322477ms to wait for pod list to return data ...
	I1204 21:22:28.177605   75746 default_sa.go:34] waiting for default service account to be created ...
	I1204 21:22:28.180243   75746 default_sa.go:45] found service account: "default"
	I1204 21:22:28.180262   75746 default_sa.go:55] duration metric: took 2.648929ms for default service account to be created ...
	I1204 21:22:28.180270   75746 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 21:22:28.309199   75746 system_pods.go:86] 9 kube-system pods found
	I1204 21:22:28.309229   75746 system_pods.go:89] "coredns-7c65d6cfc9-4jmcl" [e8d193d2-0374-43a5-addd-96cdee963cc9] Running
	I1204 21:22:28.309237   75746 system_pods.go:89] "coredns-7c65d6cfc9-tzhgh" [aafae17b-5a47-4a70-bc80-94cbbca8fe38] Running
	I1204 21:22:28.309244   75746 system_pods.go:89] "etcd-default-k8s-diff-port-439360" [e4293118-8718-4722-b6b6-722896a605e9] Running
	I1204 21:22:28.309251   75746 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-439360" [71be94bb-bd89-4f40-85eb-0a672f29d959] Running
	I1204 21:22:28.309257   75746 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-439360" [85946631-ff2a-4203-800d-00a23a3c3408] Running
	I1204 21:22:28.309263   75746 system_pods.go:89] "kube-proxy-hclwt" [eef6c093-2186-437b-9a13-c8bafbcb4f78] Running
	I1204 21:22:28.309269   75746 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-439360" [0ed74c15-2c48-4a62-8bbf-0f2a272bb119] Running
	I1204 21:22:28.309283   75746 system_pods.go:89] "metrics-server-6867b74b74-v88hj" [9b6c696c-e110-4d53-98c9-41069407b45b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:22:28.309295   75746 system_pods.go:89] "storage-provisioner" [aac88490-a422-4889-bff4-b180638846cf] Running
	I1204 21:22:28.309307   75746 system_pods.go:126] duration metric: took 129.030872ms to wait for k8s-apps to be running ...
	I1204 21:22:28.309320   75746 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 21:22:28.309379   75746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:22:28.324307   75746 system_svc.go:56] duration metric: took 14.979432ms WaitForService to wait for kubelet
	I1204 21:22:28.324336   75746 kubeadm.go:582] duration metric: took 9.074873675s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 21:22:28.324353   75746 node_conditions.go:102] verifying NodePressure condition ...
	I1204 21:22:28.507218   75746 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 21:22:28.507245   75746 node_conditions.go:123] node cpu capacity is 2
	I1204 21:22:28.507256   75746 node_conditions.go:105] duration metric: took 182.898538ms to run NodePressure ...
	I1204 21:22:28.507268   75746 start.go:241] waiting for startup goroutines ...
	I1204 21:22:28.507277   75746 start.go:246] waiting for cluster config update ...
	I1204 21:22:28.507291   75746 start.go:255] writing updated cluster config ...
	I1204 21:22:28.507595   75746 ssh_runner.go:195] Run: rm -f paused
	I1204 21:22:28.556033   75746 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1204 21:22:28.557819   75746 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-439360" cluster and "default" namespace by default
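The startup sequence above ends with the harness probing the apiserver's /healthz endpoint (api_server.go:253) and seeing a 200 "ok" before declaring the cluster ready. As an editor-added illustration (not part of the captured log), here is a minimal Go sketch of that kind of probe; the address and port (192.168.50.171:8444) are taken from the log lines above, and the InsecureSkipVerify client is an illustrative shortcut, not how minikube itself talks to the apiserver.

// healthz_probe.go: hedged sketch of the apiserver healthz check logged above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Skip certificate verification for illustration only; the real harness
	// uses the cluster's own credentials.
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.50.171:8444/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}

Run while the cluster is up; a 200 with body "ok" matches the "returned 200: ok" lines in the log.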
	I1204 21:22:37.891653   75012 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.132950428s)
	I1204 21:22:37.891741   75012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:22:37.906656   75012 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:22:37.915649   75012 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:22:37.925588   75012 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:22:37.925609   75012 kubeadm.go:157] found existing configuration files:
	
	I1204 21:22:37.925655   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:22:37.934524   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:22:37.934575   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:22:37.943390   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:22:37.951745   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:22:37.951797   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:22:37.960501   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:22:37.969208   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:22:37.969254   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:22:37.978350   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:22:37.986861   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:22:37.986930   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:22:37.995584   75012 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 21:22:38.047149   75012 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1204 21:22:38.047224   75012 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 21:22:38.155964   75012 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 21:22:38.156086   75012 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 21:22:38.156215   75012 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1204 21:22:38.164743   75012 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 21:22:38.166662   75012 out.go:235]   - Generating certificates and keys ...
	I1204 21:22:38.166755   75012 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 21:22:38.166837   75012 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 21:22:38.166935   75012 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1204 21:22:38.167045   75012 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1204 21:22:38.167154   75012 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1204 21:22:38.167230   75012 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1204 21:22:38.167325   75012 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1204 21:22:38.167446   75012 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1204 21:22:38.169398   75012 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1204 21:22:38.169495   75012 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1204 21:22:38.169530   75012 kubeadm.go:310] [certs] Using the existing "sa" key
	I1204 21:22:38.169602   75012 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 21:22:38.350215   75012 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 21:22:38.469586   75012 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1204 21:22:38.636991   75012 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 21:22:38.883785   75012 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 21:22:39.014632   75012 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 21:22:39.015041   75012 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 21:22:39.017806   75012 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 21:22:39.019631   75012 out.go:235]   - Booting up control plane ...
	I1204 21:22:39.019760   75012 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 21:22:39.019831   75012 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 21:22:39.019895   75012 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 21:22:39.037352   75012 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 21:22:39.044419   75012 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 21:22:39.044489   75012 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 21:22:39.166636   75012 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1204 21:22:39.166782   75012 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1204 21:22:39.667748   75012 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.068181ms
	I1204 21:22:39.667876   75012 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1204 21:22:44.669497   75012 kubeadm.go:310] [api-check] The API server is healthy after 5.001931003s
	I1204 21:22:44.682282   75012 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1204 21:22:44.700056   75012 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1204 21:22:44.745563   75012 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1204 21:22:44.745769   75012 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-534766 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1204 21:22:44.761584   75012 kubeadm.go:310] [bootstrap-token] Using token: 5m2kn8.vv0jgg4evfqo8hls
	I1204 21:22:44.762802   75012 out.go:235]   - Configuring RBAC rules ...
	I1204 21:22:44.762937   75012 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1204 21:22:44.770305   75012 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1204 21:22:44.787448   75012 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1204 21:22:44.799071   75012 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1204 21:22:44.809995   75012 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1204 21:22:44.818871   75012 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1204 21:22:45.078465   75012 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1204 21:22:45.505737   75012 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1204 21:22:46.080197   75012 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1204 21:22:46.082632   75012 kubeadm.go:310] 
	I1204 21:22:46.082728   75012 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1204 21:22:46.082738   75012 kubeadm.go:310] 
	I1204 21:22:46.082852   75012 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1204 21:22:46.082877   75012 kubeadm.go:310] 
	I1204 21:22:46.082913   75012 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1204 21:22:46.083002   75012 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1204 21:22:46.083084   75012 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1204 21:22:46.083094   75012 kubeadm.go:310] 
	I1204 21:22:46.083188   75012 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1204 21:22:46.083198   75012 kubeadm.go:310] 
	I1204 21:22:46.083270   75012 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1204 21:22:46.083280   75012 kubeadm.go:310] 
	I1204 21:22:46.083365   75012 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1204 21:22:46.083505   75012 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1204 21:22:46.083603   75012 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1204 21:22:46.083612   75012 kubeadm.go:310] 
	I1204 21:22:46.083722   75012 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1204 21:22:46.083831   75012 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1204 21:22:46.083844   75012 kubeadm.go:310] 
	I1204 21:22:46.083955   75012 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5m2kn8.vv0jgg4evfqo8hls \
	I1204 21:22:46.084090   75012 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 \
	I1204 21:22:46.084132   75012 kubeadm.go:310] 	--control-plane 
	I1204 21:22:46.084143   75012 kubeadm.go:310] 
	I1204 21:22:46.084271   75012 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1204 21:22:46.084285   75012 kubeadm.go:310] 
	I1204 21:22:46.084381   75012 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5m2kn8.vv0jgg4evfqo8hls \
	I1204 21:22:46.084540   75012 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 
	I1204 21:22:46.085547   75012 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 21:22:46.085585   75012 cni.go:84] Creating CNI manager for ""
	I1204 21:22:46.085601   75012 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:22:46.087147   75012 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 21:22:46.088445   75012 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 21:22:46.099655   75012 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1204 21:22:46.118054   75012 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 21:22:46.118167   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:46.118199   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-534766 minikube.k8s.io/updated_at=2024_12_04T21_22_46_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59 minikube.k8s.io/name=no-preload-534766 minikube.k8s.io/primary=true
	I1204 21:22:46.314262   75012 ops.go:34] apiserver oom_adj: -16
	I1204 21:22:46.314459   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:46.814509   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:47.315367   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:47.814575   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:48.314571   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:48.815342   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:49.315465   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:49.814618   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:49.924235   75012 kubeadm.go:1113] duration metric: took 3.806131818s to wait for elevateKubeSystemPrivileges
	I1204 21:22:49.924281   75012 kubeadm.go:394] duration metric: took 4m59.352297592s to StartCluster
	I1204 21:22:49.924304   75012 settings.go:142] acquiring lock: {Name:mk51df5708ef0b8fe125ead566b8d3e857234e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:22:49.924410   75012 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:22:49.926022   75012 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/kubeconfig: {Name:mk338cb7deb77a607d0c199d94a556bdfd19bef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:22:49.926265   75012 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.174 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 21:22:49.926337   75012 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 21:22:49.926474   75012 addons.go:69] Setting storage-provisioner=true in profile "no-preload-534766"
	I1204 21:22:49.926483   75012 config.go:182] Loaded profile config "no-preload-534766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:22:49.926496   75012 addons.go:234] Setting addon storage-provisioner=true in "no-preload-534766"
	W1204 21:22:49.926508   75012 addons.go:243] addon storage-provisioner should already be in state true
	I1204 21:22:49.926505   75012 addons.go:69] Setting default-storageclass=true in profile "no-preload-534766"
	I1204 21:22:49.926531   75012 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-534766"
	I1204 21:22:49.926546   75012 host.go:66] Checking if "no-preload-534766" exists ...
	I1204 21:22:49.926541   75012 addons.go:69] Setting metrics-server=true in profile "no-preload-534766"
	I1204 21:22:49.926576   75012 addons.go:234] Setting addon metrics-server=true in "no-preload-534766"
	W1204 21:22:49.926590   75012 addons.go:243] addon metrics-server should already be in state true
	I1204 21:22:49.926625   75012 host.go:66] Checking if "no-preload-534766" exists ...
	I1204 21:22:49.926930   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:49.926954   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:49.926970   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:49.926955   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:49.926987   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:49.927051   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:49.927780   75012 out.go:177] * Verifying Kubernetes components...
	I1204 21:22:49.929162   75012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:22:49.942741   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46577
	I1204 21:22:49.943289   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:49.943868   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:22:49.943895   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:49.944251   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:49.944864   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:49.944913   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:49.946622   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34645
	I1204 21:22:49.946621   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40019
	I1204 21:22:49.947114   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:49.947241   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:49.947744   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:22:49.947765   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:49.947882   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:22:49.947906   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:49.948103   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:49.948432   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:49.948645   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetState
	I1204 21:22:49.948791   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:49.948837   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:49.952327   75012 addons.go:234] Setting addon default-storageclass=true in "no-preload-534766"
	W1204 21:22:49.952346   75012 addons.go:243] addon default-storageclass should already be in state true
	I1204 21:22:49.952369   75012 host.go:66] Checking if "no-preload-534766" exists ...
	I1204 21:22:49.952601   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:49.952630   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:49.961451   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46229
	I1204 21:22:49.961850   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:49.962443   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:22:49.962464   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:49.962850   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:49.963027   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetState
	I1204 21:22:49.964897   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:22:49.968079   75012 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1204 21:22:49.968412   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34167
	I1204 21:22:49.968752   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34915
	I1204 21:22:49.968941   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:49.969158   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:49.969388   75012 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1204 21:22:49.969407   75012 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1204 21:22:49.969427   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:22:49.969542   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:22:49.969565   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:49.969628   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:22:49.969642   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:49.969957   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:49.970113   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetState
	I1204 21:22:49.970170   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:49.970694   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:49.970730   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:49.972032   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:22:49.973317   75012 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:22:49.973481   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:22:49.973907   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:22:49.973928   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:22:49.974221   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:22:49.974387   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:22:49.974545   75012 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:22:49.974560   75012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 21:22:49.974577   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:22:49.974673   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:22:49.974849   75012 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:22:49.977139   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:22:49.977453   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:22:49.977472   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:22:49.977620   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:22:49.977765   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:22:49.977906   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:22:49.978085   75012 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:22:50.003630   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33713
	I1204 21:22:50.004065   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:50.004600   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:22:50.004624   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:50.004954   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:50.005133   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetState
	I1204 21:22:50.006743   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:22:50.006952   75012 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 21:22:50.006969   75012 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 21:22:50.006986   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:22:50.009741   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:22:50.010114   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:22:50.010169   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:22:50.010347   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:22:50.010522   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:22:50.010699   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:22:50.010868   75012 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:22:50.114285   75012 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:22:50.136173   75012 node_ready.go:35] waiting up to 6m0s for node "no-preload-534766" to be "Ready" ...
	I1204 21:22:50.146304   75012 node_ready.go:49] node "no-preload-534766" has status "Ready":"True"
	I1204 21:22:50.146333   75012 node_ready.go:38] duration metric: took 10.115051ms for node "no-preload-534766" to be "Ready" ...
	I1204 21:22:50.146344   75012 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:22:50.156660   75012 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:50.205793   75012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:22:50.222880   75012 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1204 21:22:50.222904   75012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1204 21:22:50.259999   75012 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1204 21:22:50.260022   75012 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1204 21:22:50.271653   75012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 21:22:50.295271   75012 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 21:22:50.295301   75012 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1204 21:22:50.371390   75012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 21:22:50.923825   75012 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:50.923850   75012 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:22:50.923889   75012 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:50.923916   75012 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:22:50.924309   75012 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:50.924319   75012 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:50.924327   75012 main.go:141] libmachine: (no-preload-534766) DBG | Closing plugin on server side
	I1204 21:22:50.924328   75012 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:50.924335   75012 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:50.924347   75012 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:50.924354   75012 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:22:50.924357   75012 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:50.924367   75012 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:22:50.924574   75012 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:50.924590   75012 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:50.926209   75012 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:50.926224   75012 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:50.926254   75012 main.go:141] libmachine: (no-preload-534766) DBG | Closing plugin on server side
	I1204 21:22:50.943266   75012 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:50.943283   75012 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:22:50.943613   75012 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:50.943626   75012 main.go:141] libmachine: (no-preload-534766) DBG | Closing plugin on server side
	I1204 21:22:50.943633   75012 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:51.434449   75012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.063018778s)
	I1204 21:22:51.434501   75012 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:51.434516   75012 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:22:51.434935   75012 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:51.434961   75012 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:51.434973   75012 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:51.434982   75012 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:22:51.434989   75012 main.go:141] libmachine: (no-preload-534766) DBG | Closing plugin on server side
	I1204 21:22:51.435279   75012 main.go:141] libmachine: (no-preload-534766) DBG | Closing plugin on server side
	I1204 21:22:51.435314   75012 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:51.435327   75012 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:51.435338   75012 addons.go:475] Verifying addon metrics-server=true in "no-preload-534766"
	I1204 21:22:51.437110   75012 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1204 21:22:51.438430   75012 addons.go:510] duration metric: took 1.51209932s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1204 21:22:52.163208   75012 pod_ready.go:103] pod "etcd-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:54.166268   75012 pod_ready.go:103] pod "etcd-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:55.663847   75012 pod_ready.go:93] pod "etcd-no-preload-534766" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:55.663873   75012 pod_ready.go:82] duration metric: took 5.507184169s for pod "etcd-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:55.663883   75012 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:57.669991   75012 pod_ready.go:103] pod "kube-apiserver-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:58.669891   75012 pod_ready.go:93] pod "kube-apiserver-no-preload-534766" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:58.669913   75012 pod_ready.go:82] duration metric: took 3.006024495s for pod "kube-apiserver-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:58.669923   75012 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:58.674408   75012 pod_ready.go:93] pod "kube-controller-manager-no-preload-534766" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:58.674431   75012 pod_ready.go:82] duration metric: took 4.502433ms for pod "kube-controller-manager-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:58.674441   75012 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:58.678736   75012 pod_ready.go:93] pod "kube-scheduler-no-preload-534766" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:58.678761   75012 pod_ready.go:82] duration metric: took 4.313122ms for pod "kube-scheduler-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:58.678771   75012 pod_ready.go:39] duration metric: took 8.532413995s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:22:58.678791   75012 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:22:58.678847   75012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:22:58.695623   75012 api_server.go:72] duration metric: took 8.769328765s to wait for apiserver process to appear ...
	I1204 21:22:58.695654   75012 api_server.go:88] waiting for apiserver healthz status ...
	I1204 21:22:58.695675   75012 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I1204 21:22:58.699892   75012 api_server.go:279] https://192.168.61.174:8443/healthz returned 200:
	ok
	I1204 21:22:58.700759   75012 api_server.go:141] control plane version: v1.31.2
	I1204 21:22:58.700776   75012 api_server.go:131] duration metric: took 5.115741ms to wait for apiserver health ...
	I1204 21:22:58.700783   75012 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 21:22:58.705822   75012 system_pods.go:59] 9 kube-system pods found
	I1204 21:22:58.705845   75012 system_pods.go:61] "coredns-7c65d6cfc9-9llkt" [adc8b2dd-be84-4314-ae3c-cfe94cc78489] Running
	I1204 21:22:58.705850   75012 system_pods.go:61] "coredns-7c65d6cfc9-zq88f" [b4b818bf-71d4-4522-8d3f-15c878eb7e37] Running
	I1204 21:22:58.705854   75012 system_pods.go:61] "etcd-no-preload-534766" [dfebd8ce-bf78-4219-a860-7e0275651a27] Running
	I1204 21:22:58.705858   75012 system_pods.go:61] "kube-apiserver-no-preload-534766" [6d8632fe-4a7d-48f0-9de5-bbc8efa027cd] Running
	I1204 21:22:58.705862   75012 system_pods.go:61] "kube-controller-manager-no-preload-534766" [1fcb311c-17ee-40ab-8126-3f9aeb565c23] Running
	I1204 21:22:58.705865   75012 system_pods.go:61] "kube-proxy-z2n69" [ea030ab5-1808-4037-b153-e751d66f3882] Running
	I1204 21:22:58.705870   75012 system_pods.go:61] "kube-scheduler-no-preload-534766" [ee51023a-795d-49f9-ae03-535038decf43] Running
	I1204 21:22:58.705876   75012 system_pods.go:61] "metrics-server-6867b74b74-24lj8" [1e4467c4-301a-4820-ab89-e1f0ba78f62d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:22:58.705883   75012 system_pods.go:61] "storage-provisioner" [38fa420a-4372-41b4-9853-64796baa65d9] Running
	I1204 21:22:58.705888   75012 system_pods.go:74] duration metric: took 5.100414ms to wait for pod list to return data ...
	I1204 21:22:58.705897   75012 default_sa.go:34] waiting for default service account to be created ...
	I1204 21:22:58.708729   75012 default_sa.go:45] found service account: "default"
	I1204 21:22:58.708746   75012 default_sa.go:55] duration metric: took 2.844325ms for default service account to be created ...
	I1204 21:22:58.708753   75012 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 21:22:58.713584   75012 system_pods.go:86] 9 kube-system pods found
	I1204 21:22:58.713605   75012 system_pods.go:89] "coredns-7c65d6cfc9-9llkt" [adc8b2dd-be84-4314-ae3c-cfe94cc78489] Running
	I1204 21:22:58.713610   75012 system_pods.go:89] "coredns-7c65d6cfc9-zq88f" [b4b818bf-71d4-4522-8d3f-15c878eb7e37] Running
	I1204 21:22:58.713614   75012 system_pods.go:89] "etcd-no-preload-534766" [dfebd8ce-bf78-4219-a860-7e0275651a27] Running
	I1204 21:22:58.713617   75012 system_pods.go:89] "kube-apiserver-no-preload-534766" [6d8632fe-4a7d-48f0-9de5-bbc8efa027cd] Running
	I1204 21:22:58.713623   75012 system_pods.go:89] "kube-controller-manager-no-preload-534766" [1fcb311c-17ee-40ab-8126-3f9aeb565c23] Running
	I1204 21:22:58.713627   75012 system_pods.go:89] "kube-proxy-z2n69" [ea030ab5-1808-4037-b153-e751d66f3882] Running
	I1204 21:22:58.713630   75012 system_pods.go:89] "kube-scheduler-no-preload-534766" [ee51023a-795d-49f9-ae03-535038decf43] Running
	I1204 21:22:58.713636   75012 system_pods.go:89] "metrics-server-6867b74b74-24lj8" [1e4467c4-301a-4820-ab89-e1f0ba78f62d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:22:58.713640   75012 system_pods.go:89] "storage-provisioner" [38fa420a-4372-41b4-9853-64796baa65d9] Running
	I1204 21:22:58.713649   75012 system_pods.go:126] duration metric: took 4.892413ms to wait for k8s-apps to be running ...
	I1204 21:22:58.713655   75012 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 21:22:58.713694   75012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:22:58.727642   75012 system_svc.go:56] duration metric: took 13.980011ms WaitForService to wait for kubelet
	I1204 21:22:58.727667   75012 kubeadm.go:582] duration metric: took 8.80137456s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 21:22:58.727683   75012 node_conditions.go:102] verifying NodePressure condition ...
	I1204 21:22:58.730401   75012 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 21:22:58.730424   75012 node_conditions.go:123] node cpu capacity is 2
	I1204 21:22:58.730437   75012 node_conditions.go:105] duration metric: took 2.748662ms to run NodePressure ...
	I1204 21:22:58.730450   75012 start.go:241] waiting for startup goroutines ...
	I1204 21:22:58.730460   75012 start.go:246] waiting for cluster config update ...
	I1204 21:22:58.730472   75012 start.go:255] writing updated cluster config ...
	I1204 21:22:58.730773   75012 ssh_runner.go:195] Run: rm -f paused
	I1204 21:22:58.776977   75012 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1204 21:22:58.778544   75012 out.go:177] * Done! kubectl is now configured to use "no-preload-534766" cluster and "default" namespace by default
	I1204 21:23:04.631416   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:23:04.631710   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:23:04.631725   75464 kubeadm.go:310] 
	I1204 21:23:04.631799   75464 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1204 21:23:04.631878   75464 kubeadm.go:310] 		timed out waiting for the condition
	I1204 21:23:04.631890   75464 kubeadm.go:310] 
	I1204 21:23:04.631961   75464 kubeadm.go:310] 	This error is likely caused by:
	I1204 21:23:04.632036   75464 kubeadm.go:310] 		- The kubelet is not running
	I1204 21:23:04.632198   75464 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1204 21:23:04.632215   75464 kubeadm.go:310] 
	I1204 21:23:04.632383   75464 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1204 21:23:04.632461   75464 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1204 21:23:04.632516   75464 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1204 21:23:04.632528   75464 kubeadm.go:310] 
	I1204 21:23:04.632675   75464 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1204 21:23:04.632796   75464 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1204 21:23:04.632815   75464 kubeadm.go:310] 
	I1204 21:23:04.632974   75464 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1204 21:23:04.633074   75464 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1204 21:23:04.633176   75464 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1204 21:23:04.633304   75464 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1204 21:23:04.633322   75464 kubeadm.go:310] 
	I1204 21:23:04.634981   75464 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 21:23:04.635061   75464 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1204 21:23:04.635118   75464 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1204 21:23:04.635222   75464 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1204 21:23:04.635272   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1204 21:23:05.103010   75464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:23:05.116784   75464 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:23:05.126269   75464 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:23:05.126290   75464 kubeadm.go:157] found existing configuration files:
	
	I1204 21:23:05.126331   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:23:05.134867   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:23:05.134919   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:23:05.143682   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:23:05.151701   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:23:05.151766   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:23:05.160033   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:23:05.168125   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:23:05.168175   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:23:05.176976   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:23:05.185549   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:23:05.185592   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:23:05.194156   75464 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 21:23:05.394966   75464 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 21:25:01.433781   75464 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1204 21:25:01.433941   75464 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1204 21:25:01.434011   75464 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1204 21:25:01.434069   75464 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 21:25:01.434170   75464 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 21:25:01.434315   75464 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 21:25:01.434431   75464 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1204 21:25:01.434514   75464 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 21:25:01.436334   75464 out.go:235]   - Generating certificates and keys ...
	I1204 21:25:01.436408   75464 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 21:25:01.436482   75464 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 21:25:01.436550   75464 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1204 21:25:01.436644   75464 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1204 21:25:01.436745   75464 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1204 21:25:01.436819   75464 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1204 21:25:01.436885   75464 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1204 21:25:01.436942   75464 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1204 21:25:01.437004   75464 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1204 21:25:01.437068   75464 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1204 21:25:01.437101   75464 kubeadm.go:310] [certs] Using the existing "sa" key
	I1204 21:25:01.437150   75464 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 21:25:01.437193   75464 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 21:25:01.437239   75464 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 21:25:01.437309   75464 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 21:25:01.437370   75464 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 21:25:01.437458   75464 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 21:25:01.437568   75464 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 21:25:01.437636   75464 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 21:25:01.437701   75464 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 21:25:01.439149   75464 out.go:235]   - Booting up control plane ...
	I1204 21:25:01.439251   75464 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 21:25:01.439347   75464 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 21:25:01.439457   75464 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 21:25:01.439531   75464 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 21:25:01.439672   75464 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1204 21:25:01.439736   75464 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1204 21:25:01.439798   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:25:01.439966   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:25:01.440044   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:25:01.440205   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:25:01.440259   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:25:01.440487   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:25:01.440578   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:25:01.440768   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:25:01.440835   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:25:01.440991   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:25:01.441006   75464 kubeadm.go:310] 
	I1204 21:25:01.441043   75464 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1204 21:25:01.441078   75464 kubeadm.go:310] 		timed out waiting for the condition
	I1204 21:25:01.441084   75464 kubeadm.go:310] 
	I1204 21:25:01.441114   75464 kubeadm.go:310] 	This error is likely caused by:
	I1204 21:25:01.441143   75464 kubeadm.go:310] 		- The kubelet is not running
	I1204 21:25:01.441233   75464 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1204 21:25:01.441242   75464 kubeadm.go:310] 
	I1204 21:25:01.441335   75464 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1204 21:25:01.441369   75464 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1204 21:25:01.441403   75464 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1204 21:25:01.441410   75464 kubeadm.go:310] 
	I1204 21:25:01.441503   75464 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1204 21:25:01.441602   75464 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1204 21:25:01.441610   75464 kubeadm.go:310] 
	I1204 21:25:01.441705   75464 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1204 21:25:01.441779   75464 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1204 21:25:01.441857   75464 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1204 21:25:01.441934   75464 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1204 21:25:01.441961   75464 kubeadm.go:310] 
	I1204 21:25:01.442011   75464 kubeadm.go:394] duration metric: took 8m2.105750462s to StartCluster
	I1204 21:25:01.442050   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:25:01.442119   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:25:01.484552   75464 cri.go:89] found id: ""
	I1204 21:25:01.484582   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.484606   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:25:01.484614   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:25:01.484681   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:25:01.517972   75464 cri.go:89] found id: ""
	I1204 21:25:01.517999   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.518007   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:25:01.518013   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:25:01.518078   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:25:01.555068   75464 cri.go:89] found id: ""
	I1204 21:25:01.555096   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.555104   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:25:01.555110   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:25:01.555163   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:25:01.595425   75464 cri.go:89] found id: ""
	I1204 21:25:01.595456   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.595478   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:25:01.595486   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:25:01.595553   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:25:01.634608   75464 cri.go:89] found id: ""
	I1204 21:25:01.634638   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.634648   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:25:01.634656   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:25:01.634721   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:25:01.668685   75464 cri.go:89] found id: ""
	I1204 21:25:01.668724   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.668737   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:25:01.668746   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:25:01.668810   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:25:01.701497   75464 cri.go:89] found id: ""
	I1204 21:25:01.701531   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.701543   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:25:01.701550   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:25:01.701612   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:25:01.735347   75464 cri.go:89] found id: ""
	I1204 21:25:01.735401   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.735413   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:25:01.735429   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:25:01.735448   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:25:01.785951   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:25:01.785994   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:25:01.800795   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:25:01.800822   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:25:01.878636   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:25:01.878663   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:25:01.878675   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:25:01.982526   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:25:01.982563   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1204 21:25:02.037006   75464 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1204 21:25:02.037075   75464 out.go:270] * 
	W1204 21:25:02.037160   75464 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1204 21:25:02.037181   75464 out.go:270] * 
	W1204 21:25:02.038380   75464 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 21:25:02.041871   75464 out.go:201] 
	W1204 21:25:02.042973   75464 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1204 21:25:02.043035   75464 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1204 21:25:02.043065   75464 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1204 21:25:02.044498   75464 out.go:201] 
	
	
	==> CRI-O <==
	Dec 04 21:34:07 old-k8s-version-082859 crio[624]: time="2024-12-04 21:34:07.080298435Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348047080273857,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=143d1fe2-5b98-4789-a88f-ab3c74a32de6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:34:07 old-k8s-version-082859 crio[624]: time="2024-12-04 21:34:07.080977029Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e10f275e-bf73-4151-92db-e3e578ee5d1b name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:34:07 old-k8s-version-082859 crio[624]: time="2024-12-04 21:34:07.081043803Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e10f275e-bf73-4151-92db-e3e578ee5d1b name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:34:07 old-k8s-version-082859 crio[624]: time="2024-12-04 21:34:07.081078710Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e10f275e-bf73-4151-92db-e3e578ee5d1b name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:34:07 old-k8s-version-082859 crio[624]: time="2024-12-04 21:34:07.114551600Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=712fec1e-08d1-4b6c-870f-909bce137b3d name=/runtime.v1.RuntimeService/Version
	Dec 04 21:34:07 old-k8s-version-082859 crio[624]: time="2024-12-04 21:34:07.114641861Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=712fec1e-08d1-4b6c-870f-909bce137b3d name=/runtime.v1.RuntimeService/Version
	Dec 04 21:34:07 old-k8s-version-082859 crio[624]: time="2024-12-04 21:34:07.116038109Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d62b5e50-f536-4d2a-b720-9f499caa89ad name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:34:07 old-k8s-version-082859 crio[624]: time="2024-12-04 21:34:07.116497434Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348047116462640,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d62b5e50-f536-4d2a-b720-9f499caa89ad name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:34:07 old-k8s-version-082859 crio[624]: time="2024-12-04 21:34:07.117016892Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c619d733-f93b-4359-93f4-e63b4754eb5f name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:34:07 old-k8s-version-082859 crio[624]: time="2024-12-04 21:34:07.117102851Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c619d733-f93b-4359-93f4-e63b4754eb5f name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:34:07 old-k8s-version-082859 crio[624]: time="2024-12-04 21:34:07.117141634Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c619d733-f93b-4359-93f4-e63b4754eb5f name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:34:07 old-k8s-version-082859 crio[624]: time="2024-12-04 21:34:07.148429311Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ff3db1ff-d2e4-46ac-b5ba-cc4b8f3a4cf9 name=/runtime.v1.RuntimeService/Version
	Dec 04 21:34:07 old-k8s-version-082859 crio[624]: time="2024-12-04 21:34:07.148527466Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ff3db1ff-d2e4-46ac-b5ba-cc4b8f3a4cf9 name=/runtime.v1.RuntimeService/Version
	Dec 04 21:34:07 old-k8s-version-082859 crio[624]: time="2024-12-04 21:34:07.149700660Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=503200e6-c186-4355-94df-1c1e232cbd29 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:34:07 old-k8s-version-082859 crio[624]: time="2024-12-04 21:34:07.150129130Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348047150102544,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=503200e6-c186-4355-94df-1c1e232cbd29 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:34:07 old-k8s-version-082859 crio[624]: time="2024-12-04 21:34:07.150888189Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b526c0e8-003f-4d54-b775-12a72bfe674b name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:34:07 old-k8s-version-082859 crio[624]: time="2024-12-04 21:34:07.150990905Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b526c0e8-003f-4d54-b775-12a72bfe674b name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:34:07 old-k8s-version-082859 crio[624]: time="2024-12-04 21:34:07.151051861Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b526c0e8-003f-4d54-b775-12a72bfe674b name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:34:07 old-k8s-version-082859 crio[624]: time="2024-12-04 21:34:07.181589372Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=42c74ba1-c48f-4d23-b644-52a7c249d536 name=/runtime.v1.RuntimeService/Version
	Dec 04 21:34:07 old-k8s-version-082859 crio[624]: time="2024-12-04 21:34:07.181718035Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=42c74ba1-c48f-4d23-b644-52a7c249d536 name=/runtime.v1.RuntimeService/Version
	Dec 04 21:34:07 old-k8s-version-082859 crio[624]: time="2024-12-04 21:34:07.183336002Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1042426b-2d68-4293-88e4-7890f799f1a1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:34:07 old-k8s-version-082859 crio[624]: time="2024-12-04 21:34:07.183721954Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348047183696150,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1042426b-2d68-4293-88e4-7890f799f1a1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:34:07 old-k8s-version-082859 crio[624]: time="2024-12-04 21:34:07.184390515Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=748178db-2850-49b5-8c37-8f9e0e815b2f name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:34:07 old-k8s-version-082859 crio[624]: time="2024-12-04 21:34:07.184444314Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=748178db-2850-49b5-8c37-8f9e0e815b2f name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:34:07 old-k8s-version-082859 crio[624]: time="2024-12-04 21:34:07.184479007Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=748178db-2850-49b5-8c37-8f9e0e815b2f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 4 21:16] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.063766] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039535] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.986133] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.929597] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.577556] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +11.172483] systemd-fstab-generator[551]: Ignoring "noauto" option for root device
	[  +0.056938] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054201] systemd-fstab-generator[563]: Ignoring "noauto" option for root device
	[  +0.210243] systemd-fstab-generator[577]: Ignoring "noauto" option for root device
	[  +0.123977] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.239654] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +6.083108] systemd-fstab-generator[875]: Ignoring "noauto" option for root device
	[  +0.059229] kauditd_printk_skb: 130 callbacks suppressed
	[Dec 4 21:17] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[  +9.469298] kauditd_printk_skb: 46 callbacks suppressed
	[Dec 4 21:21] systemd-fstab-generator[5120]: Ignoring "noauto" option for root device
	[Dec 4 21:23] systemd-fstab-generator[5401]: Ignoring "noauto" option for root device
	[  +0.064984] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 21:34:07 up 17 min,  0 users,  load average: 0.00, 0.02, 0.04
	Linux old-k8s-version-082859 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Dec 04 21:34:01 old-k8s-version-082859 kubelet[6580]:         /usr/local/go/src/net/dial.go:580 +0x5e5
	Dec 04 21:34:01 old-k8s-version-082859 kubelet[6580]: net.(*sysDialer).dialSerial(0xc000732600, 0x4f7fe40, 0xc0002d7f20, 0xc0009b4d00, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0)
	Dec 04 21:34:01 old-k8s-version-082859 kubelet[6580]:         /usr/local/go/src/net/dial.go:548 +0x152
	Dec 04 21:34:01 old-k8s-version-082859 kubelet[6580]: net.(*Dialer).DialContext(0xc00019c9c0, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000db20c0, 0x24, 0x0, 0x0, 0x0, ...)
	Dec 04 21:34:01 old-k8s-version-082859 kubelet[6580]:         /usr/local/go/src/net/dial.go:425 +0x6e5
	Dec 04 21:34:01 old-k8s-version-082859 kubelet[6580]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000536240, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000db20c0, 0x24, 0x60, 0x7fe70d43f538, 0x118, ...)
	Dec 04 21:34:01 old-k8s-version-082859 kubelet[6580]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Dec 04 21:34:01 old-k8s-version-082859 kubelet[6580]: net/http.(*Transport).dial(0xc000b96000, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000db20c0, 0x24, 0x0, 0x0, 0x0, ...)
	Dec 04 21:34:01 old-k8s-version-082859 kubelet[6580]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Dec 04 21:34:01 old-k8s-version-082859 kubelet[6580]: net/http.(*Transport).dialConn(0xc000b96000, 0x4f7fe00, 0xc000052030, 0x0, 0xc000bd63c0, 0x5, 0xc000db20c0, 0x24, 0x0, 0xc0009fc480, ...)
	Dec 04 21:34:01 old-k8s-version-082859 kubelet[6580]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Dec 04 21:34:01 old-k8s-version-082859 kubelet[6580]: net/http.(*Transport).dialConnFor(0xc000b96000, 0xc00073e630)
	Dec 04 21:34:01 old-k8s-version-082859 kubelet[6580]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Dec 04 21:34:01 old-k8s-version-082859 kubelet[6580]: created by net/http.(*Transport).queueForDial
	Dec 04 21:34:01 old-k8s-version-082859 kubelet[6580]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Dec 04 21:34:01 old-k8s-version-082859 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Dec 04 21:34:01 old-k8s-version-082859 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 04 21:34:02 old-k8s-version-082859 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Dec 04 21:34:02 old-k8s-version-082859 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Dec 04 21:34:02 old-k8s-version-082859 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 04 21:34:02 old-k8s-version-082859 kubelet[6589]: I1204 21:34:02.567423    6589 server.go:416] Version: v1.20.0
	Dec 04 21:34:02 old-k8s-version-082859 kubelet[6589]: I1204 21:34:02.567610    6589 server.go:837] Client rotation is on, will bootstrap in background
	Dec 04 21:34:02 old-k8s-version-082859 kubelet[6589]: I1204 21:34:02.569582    6589 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Dec 04 21:34:02 old-k8s-version-082859 kubelet[6589]: W1204 21:34:02.570618    6589 manager.go:159] Cannot detect current cgroup on cgroup v2
	Dec 04 21:34:02 old-k8s-version-082859 kubelet[6589]: I1204 21:34:02.570626    6589 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-082859 -n old-k8s-version-082859
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-082859 -n old-k8s-version-082859: exit status 2 (229.482764ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-082859" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.27s)
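Note: the repeated kubelet health-check failures above, together with the "Cannot detect current cgroup on cgroup v2" warning in the kubelet log and the report's own suggestion to pass --extra-config=kubelet.cgroup-driver=systemd, point at a kubelet/cgroup-driver problem on this v1.20.0 profile. A minimal shell sketch of the manual triage the captured output itself recommends is below; the profile name and flags are taken from this report, CONTAINERID is a placeholder, and the cgroup-driver override is only the log's suggested workaround, not a verified fix.

	# Open a shell on the node for this profile (profile name from the report above):
	minikube ssh -p old-k8s-version-082859

	# Inside the node, run the commands the kubeadm output itself recommends:
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID   # replace CONTAINERID with the failing container's ID

	# From the host, retry the profile with the kubelet cgroup driver pinned to systemd,
	# which is the workaround suggested in the log (other flags as used elsewhere in this report):
	minikube start -p old-k8s-version-082859 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd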

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (439.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1204 21:30:15.224615   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-566991 -n embed-certs-566991
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-12-04 21:37:32.491463904 +0000 UTC m=+6301.591192321
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-566991 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-566991 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.073µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-566991 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-566991 -n embed-certs-566991
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-566991 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-566991 logs -n 25: (1.174757235s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                     | disable-driver-mounts-455559 | jenkins | v1.34.0 | 04 Dec 24 21:08 UTC | 04 Dec 24 21:08 UTC |
	|         | disable-driver-mounts-455559                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-439360 | jenkins | v1.34.0 | 04 Dec 24 21:08 UTC | 04 Dec 24 21:10 UTC |
	|         | default-k8s-diff-port-439360                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-534766             | no-preload-534766            | jenkins | v1.34.0 | 04 Dec 24 21:08 UTC | 04 Dec 24 21:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-534766                                   | no-preload-534766            | jenkins | v1.34.0 | 04 Dec 24 21:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-566991            | embed-certs-566991           | jenkins | v1.34.0 | 04 Dec 24 21:09 UTC | 04 Dec 24 21:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-566991                                  | embed-certs-566991           | jenkins | v1.34.0 | 04 Dec 24 21:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-439360  | default-k8s-diff-port-439360 | jenkins | v1.34.0 | 04 Dec 24 21:10 UTC | 04 Dec 24 21:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-439360 | jenkins | v1.34.0 | 04 Dec 24 21:10 UTC |                     |
	|         | default-k8s-diff-port-439360                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-082859        | old-k8s-version-082859       | jenkins | v1.34.0 | 04 Dec 24 21:10 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-534766                  | no-preload-534766            | jenkins | v1.34.0 | 04 Dec 24 21:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-534766                                   | no-preload-534766            | jenkins | v1.34.0 | 04 Dec 24 21:11 UTC | 04 Dec 24 21:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-566991                 | embed-certs-566991           | jenkins | v1.34.0 | 04 Dec 24 21:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-566991                                  | embed-certs-566991           | jenkins | v1.34.0 | 04 Dec 24 21:11 UTC | 04 Dec 24 21:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-082859                              | old-k8s-version-082859       | jenkins | v1.34.0 | 04 Dec 24 21:12 UTC | 04 Dec 24 21:12 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-082859             | old-k8s-version-082859       | jenkins | v1.34.0 | 04 Dec 24 21:12 UTC | 04 Dec 24 21:12 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-082859                              | old-k8s-version-082859       | jenkins | v1.34.0 | 04 Dec 24 21:12 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-439360       | default-k8s-diff-port-439360 | jenkins | v1.34.0 | 04 Dec 24 21:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-439360 | jenkins | v1.34.0 | 04 Dec 24 21:13 UTC | 04 Dec 24 21:22 UTC |
	|         | default-k8s-diff-port-439360                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-082859                              | old-k8s-version-082859       | jenkins | v1.34.0 | 04 Dec 24 21:36 UTC | 04 Dec 24 21:36 UTC |
	| start   | -p newest-cni-594114 --memory=2200 --alsologtostderr   | newest-cni-594114            | jenkins | v1.34.0 | 04 Dec 24 21:36 UTC | 04 Dec 24 21:37 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-534766                                   | no-preload-534766            | jenkins | v1.34.0 | 04 Dec 24 21:37 UTC | 04 Dec 24 21:37 UTC |
	| addons  | enable metrics-server -p newest-cni-594114             | newest-cni-594114            | jenkins | v1.34.0 | 04 Dec 24 21:37 UTC | 04 Dec 24 21:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-594114                                   | newest-cni-594114            | jenkins | v1.34.0 | 04 Dec 24 21:37 UTC | 04 Dec 24 21:37 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-594114                  | newest-cni-594114            | jenkins | v1.34.0 | 04 Dec 24 21:37 UTC | 04 Dec 24 21:37 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-594114 --memory=2200 --alsologtostderr   | newest-cni-594114            | jenkins | v1.34.0 | 04 Dec 24 21:37 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 21:37:21
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 21:37:21.063723   83039 out.go:345] Setting OutFile to fd 1 ...
	I1204 21:37:21.063944   83039 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 21:37:21.063952   83039 out.go:358] Setting ErrFile to fd 2...
	I1204 21:37:21.063956   83039 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 21:37:21.064119   83039 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19985-10581/.minikube/bin
	I1204 21:37:21.064642   83039 out.go:352] Setting JSON to false
	I1204 21:37:21.065537   83039 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":8391,"bootTime":1733339850,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 21:37:21.065638   83039 start.go:139] virtualization: kvm guest
	I1204 21:37:21.067820   83039 out.go:177] * [newest-cni-594114] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1204 21:37:21.069113   83039 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 21:37:21.069160   83039 notify.go:220] Checking for updates...
	I1204 21:37:21.071520   83039 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 21:37:21.072780   83039 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:37:21.073951   83039 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 21:37:21.075167   83039 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1204 21:37:21.076361   83039 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 21:37:21.077872   83039 config.go:182] Loaded profile config "newest-cni-594114": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:37:21.078335   83039 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:37:21.078404   83039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:37:21.093825   83039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36123
	I1204 21:37:21.094199   83039 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:37:21.094728   83039 main.go:141] libmachine: Using API Version  1
	I1204 21:37:21.094748   83039 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:37:21.095058   83039 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:37:21.095282   83039 main.go:141] libmachine: (newest-cni-594114) Calling .DriverName
	I1204 21:37:21.095532   83039 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 21:37:21.095817   83039 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:37:21.095853   83039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:37:21.110864   83039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36077
	I1204 21:37:21.111288   83039 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:37:21.111737   83039 main.go:141] libmachine: Using API Version  1
	I1204 21:37:21.111767   83039 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:37:21.112111   83039 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:37:21.112285   83039 main.go:141] libmachine: (newest-cni-594114) Calling .DriverName
	I1204 21:37:21.149168   83039 out.go:177] * Using the kvm2 driver based on existing profile
	I1204 21:37:21.150302   83039 start.go:297] selected driver: kvm2
	I1204 21:37:21.150317   83039 start.go:901] validating driver "kvm2" against &{Name:newest-cni-594114 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:newest-cni-594114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.161 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] St
artHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:37:21.150476   83039 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 21:37:21.151276   83039 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 21:37:21.151351   83039 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19985-10581/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1204 21:37:21.168036   83039 install.go:137] /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1204 21:37:21.168566   83039 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1204 21:37:21.168609   83039 cni.go:84] Creating CNI manager for ""
	I1204 21:37:21.168673   83039 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:37:21.168749   83039 start.go:340] cluster config:
	{Name:newest-cni-594114 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-594114 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.161 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network
: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:37:21.168924   83039 iso.go:125] acquiring lock: {Name:mk5fb0f3f6da76e6cd812291a551e1592ef2c232 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 21:37:21.170671   83039 out.go:177] * Starting "newest-cni-594114" primary control-plane node in "newest-cni-594114" cluster
	I1204 21:37:21.171910   83039 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 21:37:21.171942   83039 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1204 21:37:21.171949   83039 cache.go:56] Caching tarball of preloaded images
	I1204 21:37:21.172028   83039 preload.go:172] Found /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 21:37:21.172038   83039 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 21:37:21.172135   83039 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/newest-cni-594114/config.json ...
	I1204 21:37:21.172310   83039 start.go:360] acquireMachinesLock for newest-cni-594114: {Name:mkf124e8b45170ae95981b24944344de6899c5b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 21:37:21.172349   83039 start.go:364] duration metric: took 21.646µs to acquireMachinesLock for "newest-cni-594114"
	I1204 21:37:21.172362   83039 start.go:96] Skipping create...Using existing machine configuration
	I1204 21:37:21.172369   83039 fix.go:54] fixHost starting: 
	I1204 21:37:21.172653   83039 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:37:21.172686   83039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:37:21.187443   83039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37193
	I1204 21:37:21.187930   83039 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:37:21.188358   83039 main.go:141] libmachine: Using API Version  1
	I1204 21:37:21.188381   83039 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:37:21.188711   83039 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:37:21.188868   83039 main.go:141] libmachine: (newest-cni-594114) Calling .DriverName
	I1204 21:37:21.189014   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetState
	I1204 21:37:21.190399   83039 fix.go:112] recreateIfNeeded on newest-cni-594114: state=Stopped err=<nil>
	I1204 21:37:21.190422   83039 main.go:141] libmachine: (newest-cni-594114) Calling .DriverName
	W1204 21:37:21.190576   83039 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 21:37:21.192200   83039 out.go:177] * Restarting existing kvm2 VM for "newest-cni-594114" ...
	I1204 21:37:21.193397   83039 main.go:141] libmachine: (newest-cni-594114) Calling .Start
	I1204 21:37:21.193540   83039 main.go:141] libmachine: (newest-cni-594114) Ensuring networks are active...
	I1204 21:37:21.194310   83039 main.go:141] libmachine: (newest-cni-594114) Ensuring network default is active
	I1204 21:37:21.194636   83039 main.go:141] libmachine: (newest-cni-594114) Ensuring network mk-newest-cni-594114 is active
	I1204 21:37:21.195001   83039 main.go:141] libmachine: (newest-cni-594114) Getting domain xml...
	I1204 21:37:21.195718   83039 main.go:141] libmachine: (newest-cni-594114) Creating domain...
	I1204 21:37:22.429824   83039 main.go:141] libmachine: (newest-cni-594114) Waiting to get IP...
	I1204 21:37:22.430590   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:22.430998   83039 main.go:141] libmachine: (newest-cni-594114) DBG | unable to find current IP address of domain newest-cni-594114 in network mk-newest-cni-594114
	I1204 21:37:22.431068   83039 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:37:22.430981   83074 retry.go:31] will retry after 229.283383ms: waiting for machine to come up
	I1204 21:37:22.661494   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:22.661874   83039 main.go:141] libmachine: (newest-cni-594114) DBG | unable to find current IP address of domain newest-cni-594114 in network mk-newest-cni-594114
	I1204 21:37:22.661894   83039 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:37:22.661837   83074 retry.go:31] will retry after 370.269838ms: waiting for machine to come up
	I1204 21:37:23.033408   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:23.033795   83039 main.go:141] libmachine: (newest-cni-594114) DBG | unable to find current IP address of domain newest-cni-594114 in network mk-newest-cni-594114
	I1204 21:37:23.033823   83039 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:37:23.033766   83074 retry.go:31] will retry after 414.770193ms: waiting for machine to come up
	I1204 21:37:23.450306   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:23.450784   83039 main.go:141] libmachine: (newest-cni-594114) DBG | unable to find current IP address of domain newest-cni-594114 in network mk-newest-cni-594114
	I1204 21:37:23.450814   83039 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:37:23.450734   83074 retry.go:31] will retry after 588.127921ms: waiting for machine to come up
	I1204 21:37:24.040389   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:24.040944   83039 main.go:141] libmachine: (newest-cni-594114) DBG | unable to find current IP address of domain newest-cni-594114 in network mk-newest-cni-594114
	I1204 21:37:24.040969   83039 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:37:24.040892   83074 retry.go:31] will retry after 646.42402ms: waiting for machine to come up
	I1204 21:37:24.688457   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:24.689037   83039 main.go:141] libmachine: (newest-cni-594114) DBG | unable to find current IP address of domain newest-cni-594114 in network mk-newest-cni-594114
	I1204 21:37:24.689065   83039 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:37:24.688969   83074 retry.go:31] will retry after 683.032614ms: waiting for machine to come up
	I1204 21:37:25.373688   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:25.374074   83039 main.go:141] libmachine: (newest-cni-594114) DBG | unable to find current IP address of domain newest-cni-594114 in network mk-newest-cni-594114
	I1204 21:37:25.374096   83039 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:37:25.374038   83074 retry.go:31] will retry after 883.64786ms: waiting for machine to come up
	I1204 21:37:26.259307   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:26.259867   83039 main.go:141] libmachine: (newest-cni-594114) DBG | unable to find current IP address of domain newest-cni-594114 in network mk-newest-cni-594114
	I1204 21:37:26.259904   83039 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:37:26.259824   83074 retry.go:31] will retry after 929.533809ms: waiting for machine to come up
	I1204 21:37:27.190699   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:27.191170   83039 main.go:141] libmachine: (newest-cni-594114) DBG | unable to find current IP address of domain newest-cni-594114 in network mk-newest-cni-594114
	I1204 21:37:27.191217   83039 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:37:27.191128   83074 retry.go:31] will retry after 1.284074253s: waiting for machine to come up
	I1204 21:37:28.477854   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:28.478316   83039 main.go:141] libmachine: (newest-cni-594114) DBG | unable to find current IP address of domain newest-cni-594114 in network mk-newest-cni-594114
	I1204 21:37:28.478345   83039 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:37:28.478262   83074 retry.go:31] will retry after 1.486229177s: waiting for machine to come up
	I1204 21:37:29.967041   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:29.967572   83039 main.go:141] libmachine: (newest-cni-594114) DBG | unable to find current IP address of domain newest-cni-594114 in network mk-newest-cni-594114
	I1204 21:37:29.967601   83039 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:37:29.967535   83074 retry.go:31] will retry after 1.93353435s: waiting for machine to come up
	
	
	==> CRI-O <==
	Dec 04 21:37:33 embed-certs-566991 crio[714]: time="2024-12-04 21:37:33.076076545Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348253076045625,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f6b046d4-a566-4570-ad0c-4ed1fc38177e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:37:33 embed-certs-566991 crio[714]: time="2024-12-04 21:37:33.076874738Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bc20f379-dd58-45d8-a6c5-a39d4c5a6555 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:37:33 embed-certs-566991 crio[714]: time="2024-12-04 21:37:33.077091623Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bc20f379-dd58-45d8-a6c5-a39d4c5a6555 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:37:33 embed-certs-566991 crio[714]: time="2024-12-04 21:37:33.077432191Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317,PodSandboxId:c7004039fe1db8e9729ca1177cf50c49c546abce639e2c5af26210d69eec7e2c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733347031700213364,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8acdb07-16e7-457f-81b8-85416b849890,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcdf86fdcbf9bd8a144fff19857f6fb26d62fe7eb52809e9b0fd81f8d41222e6,PodSandboxId:f0080f4d4bd91d17758afd4f1cd9ace3a8edf7607b4dbeca50c05bbbf7ea3e2a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733347010789616306,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a3be8d42-19bc-4bfc-be9a-bf74020438e1,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78,PodSandboxId:3b1932814b83214d86d7c57fd7aa32f8925c8f3a985df8ac7eb1a12f9eb241b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733347008602299697,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ct5xn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be113b96-b21f-4fd5-8cd9-11b149a0a838,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5,PodSandboxId:ee3eaf224a6f9cc1761003899c7ec1c7708f55a234325a51f5f6a725cf136038,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733347000937256830,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4fv72,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22b84591-6767-4414-9
869-9d89206a03f2,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4,PodSandboxId:c7004039fe1db8e9729ca1177cf50c49c546abce639e2c5af26210d69eec7e2c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733347000879361773,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8acdb07-16e7-457f-81b8-85416b849
890,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df,PodSandboxId:5a9f3a9d07f72918557f5db4e8b88fd5df0ca4509cc0ec45acd670fad1a8939e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733346997194682947,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-566991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73e62008d66cd7b9cafa6ab59f4f7953,},Annota
tions:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78,PodSandboxId:6cc002d291a622d58f9b0b04ea2fa8ff34c0131238099b1f091f64d47b9ef684,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733346997203377526,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-566991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d0a0ccd7666b4a3fcfee2085c0019a8,},Annotations:map[st
ring]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98,PodSandboxId:6ca74363b0202187d622a59232287afc7c3bf09a71fdee69bfd670bce87e1e41,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733346997182362942,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-566991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df778708fa62b24cccc5f735818ef924,},Annotations:map[string]string{io.kubernetes.container.hash:
cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9,PodSandboxId:ae0fe4aa2b20e871d3c330515322b68a258a5887520bc1d94daa5d338f934257,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733346997172630584,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-566991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a59cd4f639fa768deec1685c42bbf280,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bc20f379-dd58-45d8-a6c5-a39d4c5a6555 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:37:33 embed-certs-566991 crio[714]: time="2024-12-04 21:37:33.130714863Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=79cc2822-97c9-442d-83ff-5ded9c9bb46b name=/runtime.v1.RuntimeService/Version
	Dec 04 21:37:33 embed-certs-566991 crio[714]: time="2024-12-04 21:37:33.130864844Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=79cc2822-97c9-442d-83ff-5ded9c9bb46b name=/runtime.v1.RuntimeService/Version
	Dec 04 21:37:33 embed-certs-566991 crio[714]: time="2024-12-04 21:37:33.136974137Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5f354bc4-654a-47e5-b542-a0dac3779ff0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:37:33 embed-certs-566991 crio[714]: time="2024-12-04 21:37:33.137540804Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348253137481579,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5f354bc4-654a-47e5-b542-a0dac3779ff0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:37:33 embed-certs-566991 crio[714]: time="2024-12-04 21:37:33.142407176Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bf064597-84d6-4b33-b1cb-bf7e1c353eee name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:37:33 embed-certs-566991 crio[714]: time="2024-12-04 21:37:33.142484955Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bf064597-84d6-4b33-b1cb-bf7e1c353eee name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:37:33 embed-certs-566991 crio[714]: time="2024-12-04 21:37:33.142847228Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317,PodSandboxId:c7004039fe1db8e9729ca1177cf50c49c546abce639e2c5af26210d69eec7e2c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733347031700213364,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8acdb07-16e7-457f-81b8-85416b849890,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcdf86fdcbf9bd8a144fff19857f6fb26d62fe7eb52809e9b0fd81f8d41222e6,PodSandboxId:f0080f4d4bd91d17758afd4f1cd9ace3a8edf7607b4dbeca50c05bbbf7ea3e2a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733347010789616306,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a3be8d42-19bc-4bfc-be9a-bf74020438e1,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78,PodSandboxId:3b1932814b83214d86d7c57fd7aa32f8925c8f3a985df8ac7eb1a12f9eb241b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733347008602299697,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ct5xn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be113b96-b21f-4fd5-8cd9-11b149a0a838,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5,PodSandboxId:ee3eaf224a6f9cc1761003899c7ec1c7708f55a234325a51f5f6a725cf136038,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733347000937256830,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4fv72,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22b84591-6767-4414-9
869-9d89206a03f2,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4,PodSandboxId:c7004039fe1db8e9729ca1177cf50c49c546abce639e2c5af26210d69eec7e2c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733347000879361773,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8acdb07-16e7-457f-81b8-85416b849
890,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df,PodSandboxId:5a9f3a9d07f72918557f5db4e8b88fd5df0ca4509cc0ec45acd670fad1a8939e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733346997194682947,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-566991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73e62008d66cd7b9cafa6ab59f4f7953,},Annota
tions:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78,PodSandboxId:6cc002d291a622d58f9b0b04ea2fa8ff34c0131238099b1f091f64d47b9ef684,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733346997203377526,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-566991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d0a0ccd7666b4a3fcfee2085c0019a8,},Annotations:map[st
ring]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98,PodSandboxId:6ca74363b0202187d622a59232287afc7c3bf09a71fdee69bfd670bce87e1e41,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733346997182362942,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-566991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df778708fa62b24cccc5f735818ef924,},Annotations:map[string]string{io.kubernetes.container.hash:
cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9,PodSandboxId:ae0fe4aa2b20e871d3c330515322b68a258a5887520bc1d94daa5d338f934257,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733346997172630584,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-566991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a59cd4f639fa768deec1685c42bbf280,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bf064597-84d6-4b33-b1cb-bf7e1c353eee name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:37:33 embed-certs-566991 crio[714]: time="2024-12-04 21:37:33.193005801Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a7dff5b3-b82c-4207-b83f-8e22804698d1 name=/runtime.v1.RuntimeService/Version
	Dec 04 21:37:33 embed-certs-566991 crio[714]: time="2024-12-04 21:37:33.193080035Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a7dff5b3-b82c-4207-b83f-8e22804698d1 name=/runtime.v1.RuntimeService/Version
	Dec 04 21:37:33 embed-certs-566991 crio[714]: time="2024-12-04 21:37:33.198335906Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4d0015d8-da9a-4311-8978-de625a95a61e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:37:33 embed-certs-566991 crio[714]: time="2024-12-04 21:37:33.198873072Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348253198840340,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4d0015d8-da9a-4311-8978-de625a95a61e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:37:33 embed-certs-566991 crio[714]: time="2024-12-04 21:37:33.199918480Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=003db939-aa6e-466d-9336-6cf0bb4cde1b name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:37:33 embed-certs-566991 crio[714]: time="2024-12-04 21:37:33.199987202Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=003db939-aa6e-466d-9336-6cf0bb4cde1b name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:37:33 embed-certs-566991 crio[714]: time="2024-12-04 21:37:33.200189999Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317,PodSandboxId:c7004039fe1db8e9729ca1177cf50c49c546abce639e2c5af26210d69eec7e2c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733347031700213364,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8acdb07-16e7-457f-81b8-85416b849890,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcdf86fdcbf9bd8a144fff19857f6fb26d62fe7eb52809e9b0fd81f8d41222e6,PodSandboxId:f0080f4d4bd91d17758afd4f1cd9ace3a8edf7607b4dbeca50c05bbbf7ea3e2a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733347010789616306,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a3be8d42-19bc-4bfc-be9a-bf74020438e1,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78,PodSandboxId:3b1932814b83214d86d7c57fd7aa32f8925c8f3a985df8ac7eb1a12f9eb241b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733347008602299697,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ct5xn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be113b96-b21f-4fd5-8cd9-11b149a0a838,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5,PodSandboxId:ee3eaf224a6f9cc1761003899c7ec1c7708f55a234325a51f5f6a725cf136038,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733347000937256830,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4fv72,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22b84591-6767-4414-9
869-9d89206a03f2,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4,PodSandboxId:c7004039fe1db8e9729ca1177cf50c49c546abce639e2c5af26210d69eec7e2c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733347000879361773,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8acdb07-16e7-457f-81b8-85416b849
890,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df,PodSandboxId:5a9f3a9d07f72918557f5db4e8b88fd5df0ca4509cc0ec45acd670fad1a8939e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733346997194682947,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-566991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73e62008d66cd7b9cafa6ab59f4f7953,},Annota
tions:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78,PodSandboxId:6cc002d291a622d58f9b0b04ea2fa8ff34c0131238099b1f091f64d47b9ef684,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733346997203377526,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-566991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d0a0ccd7666b4a3fcfee2085c0019a8,},Annotations:map[st
ring]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98,PodSandboxId:6ca74363b0202187d622a59232287afc7c3bf09a71fdee69bfd670bce87e1e41,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733346997182362942,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-566991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df778708fa62b24cccc5f735818ef924,},Annotations:map[string]string{io.kubernetes.container.hash:
cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9,PodSandboxId:ae0fe4aa2b20e871d3c330515322b68a258a5887520bc1d94daa5d338f934257,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733346997172630584,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-566991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a59cd4f639fa768deec1685c42bbf280,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=003db939-aa6e-466d-9336-6cf0bb4cde1b name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:37:33 embed-certs-566991 crio[714]: time="2024-12-04 21:37:33.235434274Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c9cc1035-65ab-4072-bbce-0c63cb026f69 name=/runtime.v1.RuntimeService/Version
	Dec 04 21:37:33 embed-certs-566991 crio[714]: time="2024-12-04 21:37:33.235515442Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c9cc1035-65ab-4072-bbce-0c63cb026f69 name=/runtime.v1.RuntimeService/Version
	Dec 04 21:37:33 embed-certs-566991 crio[714]: time="2024-12-04 21:37:33.236973478Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e7330a24-6915-498c-ba32-8b5a34669312 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:37:33 embed-certs-566991 crio[714]: time="2024-12-04 21:37:33.237422124Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348253237396290,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e7330a24-6915-498c-ba32-8b5a34669312 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:37:33 embed-certs-566991 crio[714]: time="2024-12-04 21:37:33.238010644Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=44ea84bd-61e6-4a4e-8c48-fe8151ca7d4e name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:37:33 embed-certs-566991 crio[714]: time="2024-12-04 21:37:33.238061251Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=44ea84bd-61e6-4a4e-8c48-fe8151ca7d4e name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:37:33 embed-certs-566991 crio[714]: time="2024-12-04 21:37:33.238246349Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317,PodSandboxId:c7004039fe1db8e9729ca1177cf50c49c546abce639e2c5af26210d69eec7e2c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733347031700213364,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8acdb07-16e7-457f-81b8-85416b849890,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcdf86fdcbf9bd8a144fff19857f6fb26d62fe7eb52809e9b0fd81f8d41222e6,PodSandboxId:f0080f4d4bd91d17758afd4f1cd9ace3a8edf7607b4dbeca50c05bbbf7ea3e2a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733347010789616306,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a3be8d42-19bc-4bfc-be9a-bf74020438e1,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78,PodSandboxId:3b1932814b83214d86d7c57fd7aa32f8925c8f3a985df8ac7eb1a12f9eb241b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733347008602299697,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ct5xn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be113b96-b21f-4fd5-8cd9-11b149a0a838,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5,PodSandboxId:ee3eaf224a6f9cc1761003899c7ec1c7708f55a234325a51f5f6a725cf136038,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733347000937256830,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4fv72,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22b84591-6767-4414-9
869-9d89206a03f2,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4,PodSandboxId:c7004039fe1db8e9729ca1177cf50c49c546abce639e2c5af26210d69eec7e2c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733347000879361773,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8acdb07-16e7-457f-81b8-85416b849
890,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df,PodSandboxId:5a9f3a9d07f72918557f5db4e8b88fd5df0ca4509cc0ec45acd670fad1a8939e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733346997194682947,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-566991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73e62008d66cd7b9cafa6ab59f4f7953,},Annota
tions:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78,PodSandboxId:6cc002d291a622d58f9b0b04ea2fa8ff34c0131238099b1f091f64d47b9ef684,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733346997203377526,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-566991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d0a0ccd7666b4a3fcfee2085c0019a8,},Annotations:map[st
ring]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98,PodSandboxId:6ca74363b0202187d622a59232287afc7c3bf09a71fdee69bfd670bce87e1e41,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733346997182362942,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-566991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df778708fa62b24cccc5f735818ef924,},Annotations:map[string]string{io.kubernetes.container.hash:
cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9,PodSandboxId:ae0fe4aa2b20e871d3c330515322b68a258a5887520bc1d94daa5d338f934257,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733346997172630584,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-566991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a59cd4f639fa768deec1685c42bbf280,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=44ea84bd-61e6-4a4e-8c48-fe8151ca7d4e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	07fb0e487f540       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       2                   c7004039fe1db       storage-provisioner
	dcdf86fdcbf9b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   f0080f4d4bd91       busybox
	58b6a0437b843       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      20 minutes ago      Running             coredns                   1                   3b1932814b832       coredns-7c65d6cfc9-ct5xn
	a59819135d6bf       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      20 minutes ago      Running             kube-proxy                1                   ee3eaf224a6f9       kube-proxy-4fv72
	05e1d1192577d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       1                   c7004039fe1db       storage-provisioner
	8b9e2903e35bf       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      20 minutes ago      Running             kube-apiserver            1                   6cc002d291a62       kube-apiserver-embed-certs-566991
	e0c420ad52b6e       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      20 minutes ago      Running             kube-scheduler            1                   5a9f3a9d07f72       kube-scheduler-embed-certs-566991
	e010906440f03       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      20 minutes ago      Running             etcd                      1                   6ca74363b0202       etcd-embed-certs-566991
	982e9c35dc47b       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      20 minutes ago      Running             kube-controller-manager   1                   ae0fe4aa2b20e       kube-controller-manager-embed-certs-566991
	
	
	==> coredns [58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:55688 - 28302 "HINFO IN 913395288040671664.5526945772694932664. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.021154818s
	
	
	==> describe nodes <==
	Name:               embed-certs-566991
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-566991
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59
	                    minikube.k8s.io/name=embed-certs-566991
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_04T21_08_12_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 21:08:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-566991
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 21:37:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 21:32:28 +0000   Wed, 04 Dec 2024 21:08:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 21:32:28 +0000   Wed, 04 Dec 2024 21:08:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 21:32:28 +0000   Wed, 04 Dec 2024 21:08:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 21:32:28 +0000   Wed, 04 Dec 2024 21:16:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.82
	  Hostname:    embed-certs-566991
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d3fee10e82bd47bb8bf10ff1e185214e
	  System UUID:                d3fee10e-82bd-47bb-8bf1-0ff1e185214e
	  Boot ID:                    cee9d6fe-73e3-42ae-a806-1d244602abe7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7c65d6cfc9-ct5xn                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-embed-certs-566991                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-embed-certs-566991             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-embed-certs-566991    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-4fv72                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-embed-certs-566991             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-6867b74b74-9vlcd               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node embed-certs-566991 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node embed-certs-566991 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node embed-certs-566991 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node embed-certs-566991 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node embed-certs-566991 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     29m                kubelet          Node embed-certs-566991 status is now: NodeHasSufficientPID
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeReady                29m                kubelet          Node embed-certs-566991 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node embed-certs-566991 event: Registered Node embed-certs-566991 in Controller
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node embed-certs-566991 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node embed-certs-566991 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node embed-certs-566991 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                node-controller  Node embed-certs-566991 event: Registered Node embed-certs-566991 in Controller
	
	
	==> dmesg <==
	[Dec 4 21:16] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053228] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037566] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.793111] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.959770] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.549237] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.296770] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.059313] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064720] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.162206] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.154384] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[  +0.269166] systemd-fstab-generator[705]: Ignoring "noauto" option for root device
	[  +3.995734] systemd-fstab-generator[795]: Ignoring "noauto" option for root device
	[  +1.773508] systemd-fstab-generator[916]: Ignoring "noauto" option for root device
	[  +0.062035] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.522842] kauditd_printk_skb: 69 callbacks suppressed
	[  +1.937154] systemd-fstab-generator[1542]: Ignoring "noauto" option for root device
	[  +3.842225] kauditd_printk_skb: 80 callbacks suppressed
	[ +11.746064] kauditd_printk_skb: 31 callbacks suppressed
	
	
	==> etcd [e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98] <==
	{"level":"info","ts":"2024-12-04T21:17:00.160948Z","caller":"traceutil/trace.go:171","msg":"trace[1348377714] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-6867b74b74-9vlcd; range_end:; response_count:1; response_revision:657; }","duration":"112.539792ms","start":"2024-12-04T21:17:00.048394Z","end":"2024-12-04T21:17:00.160934Z","steps":["trace[1348377714] 'agreement among raft nodes before linearized reading'  (duration: 111.690027ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T21:17:00.548224Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"253.216842ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2606902021172045778 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-fulumh2dkgsqa5qzmmcdlb7tve\" mod_revision:637 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-fulumh2dkgsqa5qzmmcdlb7tve\" value_size:609 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-fulumh2dkgsqa5qzmmcdlb7tve\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-12-04T21:17:00.548670Z","caller":"traceutil/trace.go:171","msg":"trace[1682785017] transaction","detail":"{read_only:false; response_revision:658; number_of_response:1; }","duration":"361.072758ms","start":"2024-12-04T21:17:00.187584Z","end":"2024-12-04T21:17:00.548657Z","steps":["trace[1682785017] 'process raft request'  (duration: 106.717901ms)","trace[1682785017] 'compare'  (duration: 253.015295ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-04T21:17:00.548864Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-04T21:17:00.187523Z","time spent":"361.286741ms","remote":"127.0.0.1:45184","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":682,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-fulumh2dkgsqa5qzmmcdlb7tve\" mod_revision:637 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-fulumh2dkgsqa5qzmmcdlb7tve\" value_size:609 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-fulumh2dkgsqa5qzmmcdlb7tve\" > >"}
	{"level":"info","ts":"2024-12-04T21:17:00.852679Z","caller":"traceutil/trace.go:171","msg":"trace[862270397] transaction","detail":"{read_only:false; response_revision:659; number_of_response:1; }","duration":"297.705017ms","start":"2024-12-04T21:17:00.554957Z","end":"2024-12-04T21:17:00.852662Z","steps":["trace[862270397] 'process raft request'  (duration: 292.282651ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-04T21:17:00.992632Z","caller":"traceutil/trace.go:171","msg":"trace[772154060] linearizableReadLoop","detail":"{readStateIndex:705; appliedIndex:704; }","duration":"136.312907ms","start":"2024-12-04T21:17:00.856297Z","end":"2024-12-04T21:17:00.992610Z","steps":["trace[772154060] 'read index received'  (duration: 135.195544ms)","trace[772154060] 'applied index is now lower than readState.Index'  (duration: 1.116598ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-04T21:17:00.992996Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.676446ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-6867b74b74-9vlcd.180e15eb0613a706\" ","response":"range_response_count:1 size:942"}
	{"level":"info","ts":"2024-12-04T21:17:00.993085Z","caller":"traceutil/trace.go:171","msg":"trace[1628218802] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-6867b74b74-9vlcd.180e15eb0613a706; range_end:; response_count:1; response_revision:660; }","duration":"136.779752ms","start":"2024-12-04T21:17:00.856293Z","end":"2024-12-04T21:17:00.993073Z","steps":["trace[1628218802] 'agreement among raft nodes before linearized reading'  (duration: 136.527609ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-04T21:17:00.993243Z","caller":"traceutil/trace.go:171","msg":"trace[2110794990] transaction","detail":"{read_only:false; response_revision:660; number_of_response:1; }","duration":"434.77056ms","start":"2024-12-04T21:17:00.558462Z","end":"2024-12-04T21:17:00.993232Z","steps":["trace[2110794990] 'process raft request'  (duration: 433.108998ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T21:17:00.993372Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-04T21:17:00.558450Z","time spent":"434.861821ms","remote":"127.0.0.1:45112","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4325,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-6867b74b74-9vlcd\" mod_revision:624 > success:<request_put:<key:\"/registry/pods/kube-system/metrics-server-6867b74b74-9vlcd\" value_size:4259 >> failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-6867b74b74-9vlcd\" > >"}
	{"level":"info","ts":"2024-12-04T21:17:20.936585Z","caller":"traceutil/trace.go:171","msg":"trace[20950110] transaction","detail":"{read_only:false; response_revision:677; number_of_response:1; }","duration":"159.874149ms","start":"2024-12-04T21:17:20.776683Z","end":"2024-12-04T21:17:20.936557Z","steps":["trace[20950110] 'process raft request'  (duration: 158.858397ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-04T21:26:38.580794Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":900}
	{"level":"info","ts":"2024-12-04T21:26:38.591766Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":900,"took":"10.085105ms","hash":757347438,"current-db-size-bytes":2744320,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2744320,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-12-04T21:26:38.591896Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":757347438,"revision":900,"compact-revision":-1}
	{"level":"info","ts":"2024-12-04T21:31:38.588624Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1142}
	{"level":"info","ts":"2024-12-04T21:31:38.592425Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1142,"took":"3.440861ms","hash":1760938036,"current-db-size-bytes":2744320,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1576960,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-12-04T21:31:38.592472Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1760938036,"revision":1142,"compact-revision":900}
	{"level":"info","ts":"2024-12-04T21:36:38.595880Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1386}
	{"level":"info","ts":"2024-12-04T21:36:38.599845Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1386,"took":"3.53473ms","hash":4140644368,"current-db-size-bytes":2744320,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1536000,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-12-04T21:36:38.599897Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4140644368,"revision":1386,"compact-revision":1142}
	{"level":"warn","ts":"2024-12-04T21:36:53.297578Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"254.740635ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-04T21:36:53.297812Z","caller":"traceutil/trace.go:171","msg":"trace[442333765] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1641; }","duration":"255.075887ms","start":"2024-12-04T21:36:53.042706Z","end":"2024-12-04T21:36:53.297782Z","steps":["trace[442333765] 'range keys from in-memory index tree'  (duration: 254.642143ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T21:36:53.297577Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"208.410684ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-04T21:36:53.298021Z","caller":"traceutil/trace.go:171","msg":"trace[3729103] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1641; }","duration":"208.905493ms","start":"2024-12-04T21:36:53.089109Z","end":"2024-12-04T21:36:53.298015Z","steps":["trace[3729103] 'range keys from in-memory index tree'  (duration: 208.400857ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-04T21:36:55.083941Z","caller":"traceutil/trace.go:171","msg":"trace[2067794242] transaction","detail":"{read_only:false; response_revision:1643; number_of_response:1; }","duration":"152.002183ms","start":"2024-12-04T21:36:54.931915Z","end":"2024-12-04T21:36:55.083918Z","steps":["trace[2067794242] 'process raft request'  (duration: 151.659094ms)"],"step_count":1}
	
	
	==> kernel <==
	 21:37:33 up 21 min,  0 users,  load average: 0.43, 0.18, 0.11
	Linux embed-certs-566991 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78] <==
	I1204 21:32:40.773818       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1204 21:32:40.773857       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1204 21:34:40.775049       1 handler_proxy.go:99] no RequestInfo found in the context
	E1204 21:34:40.775203       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1204 21:34:40.775291       1 handler_proxy.go:99] no RequestInfo found in the context
	E1204 21:34:40.775371       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1204 21:34:40.776350       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1204 21:34:40.776435       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1204 21:36:39.774780       1 handler_proxy.go:99] no RequestInfo found in the context
	E1204 21:36:39.774962       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1204 21:36:40.777391       1 handler_proxy.go:99] no RequestInfo found in the context
	E1204 21:36:40.777536       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1204 21:36:40.777708       1 handler_proxy.go:99] no RequestInfo found in the context
	E1204 21:36:40.777865       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1204 21:36:40.778675       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1204 21:36:40.779828       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9] <==
	I1204 21:32:14.107438       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1204 21:32:28.057370       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-566991"
	E1204 21:32:43.605827       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:32:44.115238       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1204 21:32:55.521408       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="462.248µs"
	I1204 21:33:07.516378       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="227.288µs"
	E1204 21:33:13.613880       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:33:14.121714       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:33:43.620402       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:33:44.128483       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:34:13.626142       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:34:14.136480       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:34:43.634159       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:34:44.144819       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:35:13.641612       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:35:14.152539       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:35:43.648642       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:35:44.161047       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:36:13.655699       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:36:14.169709       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:36:43.667283       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:36:44.177381       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:37:13.674119       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:37:14.185007       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1204 21:37:33.509145       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-566991"
	
	
	==> kube-proxy [a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1204 21:16:41.211860       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1204 21:16:41.224770       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.82"]
	E1204 21:16:41.224852       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1204 21:16:41.277478       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1204 21:16:41.277534       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1204 21:16:41.277569       1 server_linux.go:169] "Using iptables Proxier"
	I1204 21:16:41.279819       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1204 21:16:41.280073       1 server.go:483] "Version info" version="v1.31.2"
	I1204 21:16:41.280102       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 21:16:41.281852       1 config.go:199] "Starting service config controller"
	I1204 21:16:41.281905       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1204 21:16:41.281949       1 config.go:105] "Starting endpoint slice config controller"
	I1204 21:16:41.281988       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1204 21:16:41.282363       1 config.go:328] "Starting node config controller"
	I1204 21:16:41.282393       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1204 21:16:41.382102       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1204 21:16:41.382158       1 shared_informer.go:320] Caches are synced for service config
	I1204 21:16:41.382613       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df] <==
	I1204 21:16:37.937128       1 serving.go:386] Generated self-signed cert in-memory
	W1204 21:16:39.682309       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1204 21:16:39.682450       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1204 21:16:39.682462       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1204 21:16:39.682517       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1204 21:16:39.771034       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1204 21:16:39.771076       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 21:16:39.783396       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1204 21:16:39.783529       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1204 21:16:39.783571       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1204 21:16:39.783585       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1204 21:16:39.884317       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 04 21:36:24 embed-certs-566991 kubelet[923]: E1204 21:36:24.501697     923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9vlcd" podUID="1acb08f3-e403-458d-b3e2-e32c07da6afb"
	Dec 04 21:36:25 embed-certs-566991 kubelet[923]: E1204 21:36:25.770689     923 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348185770298636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:36:25 embed-certs-566991 kubelet[923]: E1204 21:36:25.771116     923 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348185770298636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:36:35 embed-certs-566991 kubelet[923]: E1204 21:36:35.503667     923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9vlcd" podUID="1acb08f3-e403-458d-b3e2-e32c07da6afb"
	Dec 04 21:36:35 embed-certs-566991 kubelet[923]: E1204 21:36:35.516377     923 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 04 21:36:35 embed-certs-566991 kubelet[923]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 04 21:36:35 embed-certs-566991 kubelet[923]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 04 21:36:35 embed-certs-566991 kubelet[923]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 04 21:36:35 embed-certs-566991 kubelet[923]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 04 21:36:35 embed-certs-566991 kubelet[923]: E1204 21:36:35.774576     923 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348195773921898,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:36:35 embed-certs-566991 kubelet[923]: E1204 21:36:35.774606     923 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348195773921898,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:36:45 embed-certs-566991 kubelet[923]: E1204 21:36:45.776440     923 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348205775630330,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:36:45 embed-certs-566991 kubelet[923]: E1204 21:36:45.776873     923 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348205775630330,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:36:48 embed-certs-566991 kubelet[923]: E1204 21:36:48.501967     923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9vlcd" podUID="1acb08f3-e403-458d-b3e2-e32c07da6afb"
	Dec 04 21:36:55 embed-certs-566991 kubelet[923]: E1204 21:36:55.778777     923 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348215778318163,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:36:55 embed-certs-566991 kubelet[923]: E1204 21:36:55.778833     923 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348215778318163,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:37:02 embed-certs-566991 kubelet[923]: E1204 21:37:02.501128     923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9vlcd" podUID="1acb08f3-e403-458d-b3e2-e32c07da6afb"
	Dec 04 21:37:05 embed-certs-566991 kubelet[923]: E1204 21:37:05.781183     923 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348225780801396,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:37:05 embed-certs-566991 kubelet[923]: E1204 21:37:05.781788     923 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348225780801396,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:37:15 embed-certs-566991 kubelet[923]: E1204 21:37:15.784278     923 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348235783788166,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:37:15 embed-certs-566991 kubelet[923]: E1204 21:37:15.784323     923 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348235783788166,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:37:17 embed-certs-566991 kubelet[923]: E1204 21:37:17.501208     923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9vlcd" podUID="1acb08f3-e403-458d-b3e2-e32c07da6afb"
	Dec 04 21:37:25 embed-certs-566991 kubelet[923]: E1204 21:37:25.786540     923 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348245785919273,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:37:25 embed-certs-566991 kubelet[923]: E1204 21:37:25.787035     923 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348245785919273,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:37:31 embed-certs-566991 kubelet[923]: E1204 21:37:31.501542     923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9vlcd" podUID="1acb08f3-e403-458d-b3e2-e32c07da6afb"
	
	
	==> storage-provisioner [05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4] <==
	I1204 21:16:41.007625       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1204 21:17:11.011487       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317] <==
	I1204 21:17:11.800491       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1204 21:17:11.811635       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1204 21:17:11.811784       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1204 21:17:29.216416       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1204 21:17:29.216801       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-566991_ab6a3e60-e8fb-47a5-a1a6-40b10be7c98d!
	I1204 21:17:29.217240       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9bda624b-bdab-4775-8dcf-34ac86d286a1", APIVersion:"v1", ResourceVersion:"683", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-566991_ab6a3e60-e8fb-47a5-a1a6-40b10be7c98d became leader
	I1204 21:17:29.320281       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-566991_ab6a3e60-e8fb-47a5-a1a6-40b10be7c98d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-566991 -n embed-certs-566991
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-566991 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-9vlcd
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-566991 describe pod metrics-server-6867b74b74-9vlcd
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-566991 describe pod metrics-server-6867b74b74-9vlcd: exit status 1 (60.655061ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-9vlcd" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-566991 describe pod metrics-server-6867b74b74-9vlcd: exit status 1
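Note (aside, not harness output): the describe above ran without a namespace flag, so it looked in the default namespace, while the non-running pod is in kube-system (the kubelet log reports pod="kube-system/metrics-server-6867b74b74-9vlcd"), which would explain the NotFound. A rough manual re-check against a live profile, using the context name from this run, could look like:
	kubectl --context embed-certs-566991 get po -A --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'
	kubectl --context embed-certs-566991 -n kube-system describe pod metrics-server-6867b74b74-9vlcd
The pod name may no longer exist if the ReplicaSet has since recreated it under a different suffix.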
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (439.17s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (443.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-439360 -n default-k8s-diff-port-439360
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-12-04 21:38:54.367546249 +0000 UTC m=+6383.467274662
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-439360 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-439360 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.448µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-439360 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
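For context, this assertion verifies that the dashboard-metrics-scraper deployment picked up the image override passed to `addons enable dashboard --images=MetricsScraper=registry.k8s.io/echoserver:1.4` (see the Audit table below). A minimal manual equivalent, assuming the default-k8s-diff-port-439360 profile were still reachable, would be something like:
	kubectl --context default-k8s-diff-port-439360 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o=jsonpath='{.spec.template.spec.containers[*].image}'
Here the check failed earlier, at the describe step, because the overall test context had already expired.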
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-439360 -n default-k8s-diff-port-439360
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-439360 logs -n 25
E1204 21:38:55.445604   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-439360 logs -n 25: (1.130489119s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p default-k8s-diff-port-439360  | default-k8s-diff-port-439360 | jenkins | v1.34.0 | 04 Dec 24 21:10 UTC | 04 Dec 24 21:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-439360 | jenkins | v1.34.0 | 04 Dec 24 21:10 UTC |                     |
	|         | default-k8s-diff-port-439360                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-082859        | old-k8s-version-082859       | jenkins | v1.34.0 | 04 Dec 24 21:10 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-534766                  | no-preload-534766            | jenkins | v1.34.0 | 04 Dec 24 21:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-534766                                   | no-preload-534766            | jenkins | v1.34.0 | 04 Dec 24 21:11 UTC | 04 Dec 24 21:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-566991                 | embed-certs-566991           | jenkins | v1.34.0 | 04 Dec 24 21:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-566991                                  | embed-certs-566991           | jenkins | v1.34.0 | 04 Dec 24 21:11 UTC | 04 Dec 24 21:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-082859                              | old-k8s-version-082859       | jenkins | v1.34.0 | 04 Dec 24 21:12 UTC | 04 Dec 24 21:12 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-082859             | old-k8s-version-082859       | jenkins | v1.34.0 | 04 Dec 24 21:12 UTC | 04 Dec 24 21:12 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-082859                              | old-k8s-version-082859       | jenkins | v1.34.0 | 04 Dec 24 21:12 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-439360       | default-k8s-diff-port-439360 | jenkins | v1.34.0 | 04 Dec 24 21:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-439360 | jenkins | v1.34.0 | 04 Dec 24 21:13 UTC | 04 Dec 24 21:22 UTC |
	|         | default-k8s-diff-port-439360                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-082859                              | old-k8s-version-082859       | jenkins | v1.34.0 | 04 Dec 24 21:36 UTC | 04 Dec 24 21:36 UTC |
	| start   | -p newest-cni-594114 --memory=2200 --alsologtostderr   | newest-cni-594114            | jenkins | v1.34.0 | 04 Dec 24 21:36 UTC | 04 Dec 24 21:37 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-534766                                   | no-preload-534766            | jenkins | v1.34.0 | 04 Dec 24 21:37 UTC | 04 Dec 24 21:37 UTC |
	| addons  | enable metrics-server -p newest-cni-594114             | newest-cni-594114            | jenkins | v1.34.0 | 04 Dec 24 21:37 UTC | 04 Dec 24 21:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-594114                                   | newest-cni-594114            | jenkins | v1.34.0 | 04 Dec 24 21:37 UTC | 04 Dec 24 21:37 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-594114                  | newest-cni-594114            | jenkins | v1.34.0 | 04 Dec 24 21:37 UTC | 04 Dec 24 21:37 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-594114 --memory=2200 --alsologtostderr   | newest-cni-594114            | jenkins | v1.34.0 | 04 Dec 24 21:37 UTC | 04 Dec 24 21:38 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p embed-certs-566991                                  | embed-certs-566991           | jenkins | v1.34.0 | 04 Dec 24 21:37 UTC | 04 Dec 24 21:37 UTC |
	| image   | newest-cni-594114 image list                           | newest-cni-594114            | jenkins | v1.34.0 | 04 Dec 24 21:38 UTC | 04 Dec 24 21:38 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-594114                                   | newest-cni-594114            | jenkins | v1.34.0 | 04 Dec 24 21:38 UTC | 04 Dec 24 21:38 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-594114                                   | newest-cni-594114            | jenkins | v1.34.0 | 04 Dec 24 21:38 UTC | 04 Dec 24 21:38 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-594114                                   | newest-cni-594114            | jenkins | v1.34.0 | 04 Dec 24 21:38 UTC | 04 Dec 24 21:38 UTC |
	| delete  | -p newest-cni-594114                                   | newest-cni-594114            | jenkins | v1.34.0 | 04 Dec 24 21:38 UTC | 04 Dec 24 21:38 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 21:37:21
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 21:37:21.063723   83039 out.go:345] Setting OutFile to fd 1 ...
	I1204 21:37:21.063944   83039 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 21:37:21.063952   83039 out.go:358] Setting ErrFile to fd 2...
	I1204 21:37:21.063956   83039 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 21:37:21.064119   83039 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19985-10581/.minikube/bin
	I1204 21:37:21.064642   83039 out.go:352] Setting JSON to false
	I1204 21:37:21.065537   83039 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":8391,"bootTime":1733339850,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 21:37:21.065638   83039 start.go:139] virtualization: kvm guest
	I1204 21:37:21.067820   83039 out.go:177] * [newest-cni-594114] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1204 21:37:21.069113   83039 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 21:37:21.069160   83039 notify.go:220] Checking for updates...
	I1204 21:37:21.071520   83039 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 21:37:21.072780   83039 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:37:21.073951   83039 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 21:37:21.075167   83039 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1204 21:37:21.076361   83039 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 21:37:21.077872   83039 config.go:182] Loaded profile config "newest-cni-594114": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:37:21.078335   83039 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:37:21.078404   83039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:37:21.093825   83039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36123
	I1204 21:37:21.094199   83039 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:37:21.094728   83039 main.go:141] libmachine: Using API Version  1
	I1204 21:37:21.094748   83039 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:37:21.095058   83039 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:37:21.095282   83039 main.go:141] libmachine: (newest-cni-594114) Calling .DriverName
	I1204 21:37:21.095532   83039 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 21:37:21.095817   83039 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:37:21.095853   83039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:37:21.110864   83039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36077
	I1204 21:37:21.111288   83039 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:37:21.111737   83039 main.go:141] libmachine: Using API Version  1
	I1204 21:37:21.111767   83039 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:37:21.112111   83039 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:37:21.112285   83039 main.go:141] libmachine: (newest-cni-594114) Calling .DriverName
	I1204 21:37:21.149168   83039 out.go:177] * Using the kvm2 driver based on existing profile
	I1204 21:37:21.150302   83039 start.go:297] selected driver: kvm2
	I1204 21:37:21.150317   83039 start.go:901] validating driver "kvm2" against &{Name:newest-cni-594114 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:newest-cni-594114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.161 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] St
artHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:37:21.150476   83039 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 21:37:21.151276   83039 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 21:37:21.151351   83039 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19985-10581/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1204 21:37:21.168036   83039 install.go:137] /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1204 21:37:21.168566   83039 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1204 21:37:21.168609   83039 cni.go:84] Creating CNI manager for ""
	I1204 21:37:21.168673   83039 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:37:21.168749   83039 start.go:340] cluster config:
	{Name:newest-cni-594114 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-594114 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.161 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network
: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:37:21.168924   83039 iso.go:125] acquiring lock: {Name:mk5fb0f3f6da76e6cd812291a551e1592ef2c232 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 21:37:21.170671   83039 out.go:177] * Starting "newest-cni-594114" primary control-plane node in "newest-cni-594114" cluster
	I1204 21:37:21.171910   83039 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 21:37:21.171942   83039 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1204 21:37:21.171949   83039 cache.go:56] Caching tarball of preloaded images
	I1204 21:37:21.172028   83039 preload.go:172] Found /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 21:37:21.172038   83039 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 21:37:21.172135   83039 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/newest-cni-594114/config.json ...
	I1204 21:37:21.172310   83039 start.go:360] acquireMachinesLock for newest-cni-594114: {Name:mkf124e8b45170ae95981b24944344de6899c5b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 21:37:21.172349   83039 start.go:364] duration metric: took 21.646µs to acquireMachinesLock for "newest-cni-594114"
	I1204 21:37:21.172362   83039 start.go:96] Skipping create...Using existing machine configuration
	I1204 21:37:21.172369   83039 fix.go:54] fixHost starting: 
	I1204 21:37:21.172653   83039 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:37:21.172686   83039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:37:21.187443   83039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37193
	I1204 21:37:21.187930   83039 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:37:21.188358   83039 main.go:141] libmachine: Using API Version  1
	I1204 21:37:21.188381   83039 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:37:21.188711   83039 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:37:21.188868   83039 main.go:141] libmachine: (newest-cni-594114) Calling .DriverName
	I1204 21:37:21.189014   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetState
	I1204 21:37:21.190399   83039 fix.go:112] recreateIfNeeded on newest-cni-594114: state=Stopped err=<nil>
	I1204 21:37:21.190422   83039 main.go:141] libmachine: (newest-cni-594114) Calling .DriverName
	W1204 21:37:21.190576   83039 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 21:37:21.192200   83039 out.go:177] * Restarting existing kvm2 VM for "newest-cni-594114" ...
	I1204 21:37:21.193397   83039 main.go:141] libmachine: (newest-cni-594114) Calling .Start
	I1204 21:37:21.193540   83039 main.go:141] libmachine: (newest-cni-594114) Ensuring networks are active...
	I1204 21:37:21.194310   83039 main.go:141] libmachine: (newest-cni-594114) Ensuring network default is active
	I1204 21:37:21.194636   83039 main.go:141] libmachine: (newest-cni-594114) Ensuring network mk-newest-cni-594114 is active
	I1204 21:37:21.195001   83039 main.go:141] libmachine: (newest-cni-594114) Getting domain xml...
	I1204 21:37:21.195718   83039 main.go:141] libmachine: (newest-cni-594114) Creating domain...
	I1204 21:37:22.429824   83039 main.go:141] libmachine: (newest-cni-594114) Waiting to get IP...
	I1204 21:37:22.430590   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:22.430998   83039 main.go:141] libmachine: (newest-cni-594114) DBG | unable to find current IP address of domain newest-cni-594114 in network mk-newest-cni-594114
	I1204 21:37:22.431068   83039 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:37:22.430981   83074 retry.go:31] will retry after 229.283383ms: waiting for machine to come up
	I1204 21:37:22.661494   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:22.661874   83039 main.go:141] libmachine: (newest-cni-594114) DBG | unable to find current IP address of domain newest-cni-594114 in network mk-newest-cni-594114
	I1204 21:37:22.661894   83039 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:37:22.661837   83074 retry.go:31] will retry after 370.269838ms: waiting for machine to come up
	I1204 21:37:23.033408   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:23.033795   83039 main.go:141] libmachine: (newest-cni-594114) DBG | unable to find current IP address of domain newest-cni-594114 in network mk-newest-cni-594114
	I1204 21:37:23.033823   83039 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:37:23.033766   83074 retry.go:31] will retry after 414.770193ms: waiting for machine to come up
	I1204 21:37:23.450306   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:23.450784   83039 main.go:141] libmachine: (newest-cni-594114) DBG | unable to find current IP address of domain newest-cni-594114 in network mk-newest-cni-594114
	I1204 21:37:23.450814   83039 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:37:23.450734   83074 retry.go:31] will retry after 588.127921ms: waiting for machine to come up
	I1204 21:37:24.040389   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:24.040944   83039 main.go:141] libmachine: (newest-cni-594114) DBG | unable to find current IP address of domain newest-cni-594114 in network mk-newest-cni-594114
	I1204 21:37:24.040969   83039 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:37:24.040892   83074 retry.go:31] will retry after 646.42402ms: waiting for machine to come up
	I1204 21:37:24.688457   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:24.689037   83039 main.go:141] libmachine: (newest-cni-594114) DBG | unable to find current IP address of domain newest-cni-594114 in network mk-newest-cni-594114
	I1204 21:37:24.689065   83039 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:37:24.688969   83074 retry.go:31] will retry after 683.032614ms: waiting for machine to come up
	I1204 21:37:25.373688   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:25.374074   83039 main.go:141] libmachine: (newest-cni-594114) DBG | unable to find current IP address of domain newest-cni-594114 in network mk-newest-cni-594114
	I1204 21:37:25.374096   83039 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:37:25.374038   83074 retry.go:31] will retry after 883.64786ms: waiting for machine to come up
	I1204 21:37:26.259307   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:26.259867   83039 main.go:141] libmachine: (newest-cni-594114) DBG | unable to find current IP address of domain newest-cni-594114 in network mk-newest-cni-594114
	I1204 21:37:26.259904   83039 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:37:26.259824   83074 retry.go:31] will retry after 929.533809ms: waiting for machine to come up
	I1204 21:37:27.190699   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:27.191170   83039 main.go:141] libmachine: (newest-cni-594114) DBG | unable to find current IP address of domain newest-cni-594114 in network mk-newest-cni-594114
	I1204 21:37:27.191217   83039 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:37:27.191128   83074 retry.go:31] will retry after 1.284074253s: waiting for machine to come up
	I1204 21:37:28.477854   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:28.478316   83039 main.go:141] libmachine: (newest-cni-594114) DBG | unable to find current IP address of domain newest-cni-594114 in network mk-newest-cni-594114
	I1204 21:37:28.478345   83039 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:37:28.478262   83074 retry.go:31] will retry after 1.486229177s: waiting for machine to come up
	I1204 21:37:29.967041   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:29.967572   83039 main.go:141] libmachine: (newest-cni-594114) DBG | unable to find current IP address of domain newest-cni-594114 in network mk-newest-cni-594114
	I1204 21:37:29.967601   83039 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:37:29.967535   83074 retry.go:31] will retry after 1.93353435s: waiting for machine to come up
	I1204 21:37:31.902901   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:31.903440   83039 main.go:141] libmachine: (newest-cni-594114) DBG | unable to find current IP address of domain newest-cni-594114 in network mk-newest-cni-594114
	I1204 21:37:31.903485   83039 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:37:31.903430   83074 retry.go:31] will retry after 3.184247864s: waiting for machine to come up
	I1204 21:37:35.091260   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:35.091739   83039 main.go:141] libmachine: (newest-cni-594114) DBG | unable to find current IP address of domain newest-cni-594114 in network mk-newest-cni-594114
	I1204 21:37:35.091771   83039 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:37:35.091704   83074 retry.go:31] will retry after 4.39259693s: waiting for machine to come up
	I1204 21:37:39.488418   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:39.488878   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has current primary IP address 192.168.72.161 and MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:39.488911   83039 main.go:141] libmachine: (newest-cni-594114) Found IP for machine: 192.168.72.161
	I1204 21:37:39.488923   83039 main.go:141] libmachine: (newest-cni-594114) Reserving static IP address...
	I1204 21:37:39.489454   83039 main.go:141] libmachine: (newest-cni-594114) DBG | found host DHCP lease matching {name: "newest-cni-594114", mac: "52:54:00:b8:cc:25", ip: "192.168.72.161"} in network mk-newest-cni-594114: {Iface:virbr3 ExpiryTime:2024-12-04 22:37:32 +0000 UTC Type:0 Mac:52:54:00:b8:cc:25 Iaid: IPaddr:192.168.72.161 Prefix:24 Hostname:newest-cni-594114 Clientid:01:52:54:00:b8:cc:25}
	I1204 21:37:39.489481   83039 main.go:141] libmachine: (newest-cni-594114) DBG | skip adding static IP to network mk-newest-cni-594114 - found existing host DHCP lease matching {name: "newest-cni-594114", mac: "52:54:00:b8:cc:25", ip: "192.168.72.161"}
	I1204 21:37:39.489490   83039 main.go:141] libmachine: (newest-cni-594114) Reserved static IP address: 192.168.72.161
	I1204 21:37:39.489505   83039 main.go:141] libmachine: (newest-cni-594114) Waiting for SSH to be available...
	I1204 21:37:39.489518   83039 main.go:141] libmachine: (newest-cni-594114) DBG | Getting to WaitForSSH function...
	I1204 21:37:39.491751   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:39.492073   83039 main.go:141] libmachine: (newest-cni-594114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:cc:25", ip: ""} in network mk-newest-cni-594114: {Iface:virbr3 ExpiryTime:2024-12-04 22:37:32 +0000 UTC Type:0 Mac:52:54:00:b8:cc:25 Iaid: IPaddr:192.168.72.161 Prefix:24 Hostname:newest-cni-594114 Clientid:01:52:54:00:b8:cc:25}
	I1204 21:37:39.492103   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined IP address 192.168.72.161 and MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:39.492181   83039 main.go:141] libmachine: (newest-cni-594114) DBG | Using SSH client type: external
	I1204 21:37:39.492236   83039 main.go:141] libmachine: (newest-cni-594114) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/newest-cni-594114/id_rsa (-rw-------)
	I1204 21:37:39.492275   83039 main.go:141] libmachine: (newest-cni-594114) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.161 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/newest-cni-594114/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 21:37:39.492293   83039 main.go:141] libmachine: (newest-cni-594114) DBG | About to run SSH command:
	I1204 21:37:39.492309   83039 main.go:141] libmachine: (newest-cni-594114) DBG | exit 0
	I1204 21:37:39.619533   83039 main.go:141] libmachine: (newest-cni-594114) DBG | SSH cmd err, output: <nil>: 
	I1204 21:37:39.619927   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetConfigRaw
	I1204 21:37:39.620645   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetIP
	I1204 21:37:39.623264   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:39.623710   83039 main.go:141] libmachine: (newest-cni-594114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:cc:25", ip: ""} in network mk-newest-cni-594114: {Iface:virbr3 ExpiryTime:2024-12-04 22:37:32 +0000 UTC Type:0 Mac:52:54:00:b8:cc:25 Iaid: IPaddr:192.168.72.161 Prefix:24 Hostname:newest-cni-594114 Clientid:01:52:54:00:b8:cc:25}
	I1204 21:37:39.623752   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined IP address 192.168.72.161 and MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:39.623939   83039 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/newest-cni-594114/config.json ...
	I1204 21:37:39.624134   83039 machine.go:93] provisionDockerMachine start ...
	I1204 21:37:39.624154   83039 main.go:141] libmachine: (newest-cni-594114) Calling .DriverName
	I1204 21:37:39.624355   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHHostname
	I1204 21:37:39.626846   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:39.627215   83039 main.go:141] libmachine: (newest-cni-594114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:cc:25", ip: ""} in network mk-newest-cni-594114: {Iface:virbr3 ExpiryTime:2024-12-04 22:37:32 +0000 UTC Type:0 Mac:52:54:00:b8:cc:25 Iaid: IPaddr:192.168.72.161 Prefix:24 Hostname:newest-cni-594114 Clientid:01:52:54:00:b8:cc:25}
	I1204 21:37:39.627242   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined IP address 192.168.72.161 and MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:39.627356   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHPort
	I1204 21:37:39.627560   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHKeyPath
	I1204 21:37:39.627721   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHKeyPath
	I1204 21:37:39.627846   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHUsername
	I1204 21:37:39.628023   83039 main.go:141] libmachine: Using SSH client type: native
	I1204 21:37:39.628210   83039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.161 22 <nil> <nil>}
	I1204 21:37:39.628220   83039 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 21:37:39.735454   83039 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 21:37:39.735488   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetMachineName
	I1204 21:37:39.735758   83039 buildroot.go:166] provisioning hostname "newest-cni-594114"
	I1204 21:37:39.735784   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetMachineName
	I1204 21:37:39.735999   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHHostname
	I1204 21:37:39.738266   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:39.738604   83039 main.go:141] libmachine: (newest-cni-594114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:cc:25", ip: ""} in network mk-newest-cni-594114: {Iface:virbr3 ExpiryTime:2024-12-04 22:37:32 +0000 UTC Type:0 Mac:52:54:00:b8:cc:25 Iaid: IPaddr:192.168.72.161 Prefix:24 Hostname:newest-cni-594114 Clientid:01:52:54:00:b8:cc:25}
	I1204 21:37:39.738632   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined IP address 192.168.72.161 and MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:39.738839   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHPort
	I1204 21:37:39.739040   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHKeyPath
	I1204 21:37:39.739239   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHKeyPath
	I1204 21:37:39.739396   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHUsername
	I1204 21:37:39.739563   83039 main.go:141] libmachine: Using SSH client type: native
	I1204 21:37:39.739732   83039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.161 22 <nil> <nil>}
	I1204 21:37:39.739746   83039 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-594114 && echo "newest-cni-594114" | sudo tee /etc/hostname
	I1204 21:37:39.861973   83039 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-594114
	
	I1204 21:37:39.862019   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHHostname
	I1204 21:37:39.865032   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:39.865361   83039 main.go:141] libmachine: (newest-cni-594114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:cc:25", ip: ""} in network mk-newest-cni-594114: {Iface:virbr3 ExpiryTime:2024-12-04 22:37:32 +0000 UTC Type:0 Mac:52:54:00:b8:cc:25 Iaid: IPaddr:192.168.72.161 Prefix:24 Hostname:newest-cni-594114 Clientid:01:52:54:00:b8:cc:25}
	I1204 21:37:39.865403   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined IP address 192.168.72.161 and MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:39.865592   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHPort
	I1204 21:37:39.865803   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHKeyPath
	I1204 21:37:39.865958   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHKeyPath
	I1204 21:37:39.866118   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHUsername
	I1204 21:37:39.866292   83039 main.go:141] libmachine: Using SSH client type: native
	I1204 21:37:39.866535   83039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.161 22 <nil> <nil>}
	I1204 21:37:39.866560   83039 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-594114' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-594114/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-594114' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 21:37:39.983812   83039 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 21:37:39.983846   83039 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 21:37:39.983869   83039 buildroot.go:174] setting up certificates
	I1204 21:37:39.983882   83039 provision.go:84] configureAuth start
	I1204 21:37:39.983895   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetMachineName
	I1204 21:37:39.984199   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetIP
	I1204 21:37:39.986800   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:39.987154   83039 main.go:141] libmachine: (newest-cni-594114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:cc:25", ip: ""} in network mk-newest-cni-594114: {Iface:virbr3 ExpiryTime:2024-12-04 22:37:32 +0000 UTC Type:0 Mac:52:54:00:b8:cc:25 Iaid: IPaddr:192.168.72.161 Prefix:24 Hostname:newest-cni-594114 Clientid:01:52:54:00:b8:cc:25}
	I1204 21:37:39.987177   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined IP address 192.168.72.161 and MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:39.987325   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHHostname
	I1204 21:37:39.989413   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:39.989758   83039 main.go:141] libmachine: (newest-cni-594114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:cc:25", ip: ""} in network mk-newest-cni-594114: {Iface:virbr3 ExpiryTime:2024-12-04 22:37:32 +0000 UTC Type:0 Mac:52:54:00:b8:cc:25 Iaid: IPaddr:192.168.72.161 Prefix:24 Hostname:newest-cni-594114 Clientid:01:52:54:00:b8:cc:25}
	I1204 21:37:39.989782   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined IP address 192.168.72.161 and MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:39.989923   83039 provision.go:143] copyHostCerts
	I1204 21:37:39.989995   83039 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 21:37:39.990005   83039 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 21:37:39.990071   83039 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 21:37:39.990152   83039 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 21:37:39.990160   83039 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 21:37:39.990184   83039 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 21:37:39.990234   83039 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 21:37:39.990241   83039 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 21:37:39.990261   83039 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 21:37:39.990349   83039 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.newest-cni-594114 san=[127.0.0.1 192.168.72.161 localhost minikube newest-cni-594114]
	I1204 21:37:40.150765   83039 provision.go:177] copyRemoteCerts
	I1204 21:37:40.150834   83039 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 21:37:40.150858   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHHostname
	I1204 21:37:40.153491   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:40.153835   83039 main.go:141] libmachine: (newest-cni-594114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:cc:25", ip: ""} in network mk-newest-cni-594114: {Iface:virbr3 ExpiryTime:2024-12-04 22:37:32 +0000 UTC Type:0 Mac:52:54:00:b8:cc:25 Iaid: IPaddr:192.168.72.161 Prefix:24 Hostname:newest-cni-594114 Clientid:01:52:54:00:b8:cc:25}
	I1204 21:37:40.153859   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined IP address 192.168.72.161 and MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:40.154038   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHPort
	I1204 21:37:40.154243   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHKeyPath
	I1204 21:37:40.154424   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHUsername
	I1204 21:37:40.154540   83039 sshutil.go:53] new ssh client: &{IP:192.168.72.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/newest-cni-594114/id_rsa Username:docker}
	I1204 21:37:40.242002   83039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 21:37:40.265104   83039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1204 21:37:40.287238   83039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 21:37:40.308457   83039 provision.go:87] duration metric: took 324.563403ms to configureAuth
	I1204 21:37:40.308480   83039 buildroot.go:189] setting minikube options for container-runtime
	I1204 21:37:40.308650   83039 config.go:182] Loaded profile config "newest-cni-594114": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:37:40.308710   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHHostname
	I1204 21:37:40.311257   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:40.311656   83039 main.go:141] libmachine: (newest-cni-594114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:cc:25", ip: ""} in network mk-newest-cni-594114: {Iface:virbr3 ExpiryTime:2024-12-04 22:37:32 +0000 UTC Type:0 Mac:52:54:00:b8:cc:25 Iaid: IPaddr:192.168.72.161 Prefix:24 Hostname:newest-cni-594114 Clientid:01:52:54:00:b8:cc:25}
	I1204 21:37:40.311685   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined IP address 192.168.72.161 and MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:40.311841   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHPort
	I1204 21:37:40.312038   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHKeyPath
	I1204 21:37:40.312237   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHKeyPath
	I1204 21:37:40.312371   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHUsername
	I1204 21:37:40.312526   83039 main.go:141] libmachine: Using SSH client type: native
	I1204 21:37:40.312735   83039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.161 22 <nil> <nil>}
	I1204 21:37:40.312754   83039 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 21:37:40.537770   83039 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 21:37:40.537796   83039 machine.go:96] duration metric: took 913.649725ms to provisionDockerMachine
	I1204 21:37:40.537810   83039 start.go:293] postStartSetup for "newest-cni-594114" (driver="kvm2")
	I1204 21:37:40.537825   83039 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 21:37:40.537846   83039 main.go:141] libmachine: (newest-cni-594114) Calling .DriverName
	I1204 21:37:40.538194   83039 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 21:37:40.538231   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHHostname
	I1204 21:37:40.541194   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:40.541616   83039 main.go:141] libmachine: (newest-cni-594114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:cc:25", ip: ""} in network mk-newest-cni-594114: {Iface:virbr3 ExpiryTime:2024-12-04 22:37:32 +0000 UTC Type:0 Mac:52:54:00:b8:cc:25 Iaid: IPaddr:192.168.72.161 Prefix:24 Hostname:newest-cni-594114 Clientid:01:52:54:00:b8:cc:25}
	I1204 21:37:40.541647   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined IP address 192.168.72.161 and MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:40.541861   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHPort
	I1204 21:37:40.542084   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHKeyPath
	I1204 21:37:40.542272   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHUsername
	I1204 21:37:40.542441   83039 sshutil.go:53] new ssh client: &{IP:192.168.72.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/newest-cni-594114/id_rsa Username:docker}
	I1204 21:37:40.625739   83039 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 21:37:40.629657   83039 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 21:37:40.629687   83039 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 21:37:40.629753   83039 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 21:37:40.629850   83039 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 21:37:40.629964   83039 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 21:37:40.639220   83039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
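[editor's note] The filesync step scans .minikube/files and mirrors every file into the guest at the path relative to that root, which is why files/etc/ssl/certs/177432.pem lands in /etc/ssl/certs. A local-only sketch of that mapping (the actual copy happens over the SSH runner; the root path here is a placeholder):

    package main

    import (
    	"fmt"
    	"io/fs"
    	"os"
    	"path/filepath"
    )

    func main() {
    	// Root of the local assets directory; placeholder path for the sketch.
    	root := os.ExpandEnv("$HOME/.minikube/files")

    	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
    		if err != nil || d.IsDir() {
    			return err
    		}
    		rel, rerr := filepath.Rel(root, path)
    		if rerr != nil {
    			return rerr
    		}
    		// e.g. files/etc/ssl/certs/177432.pem -> /etc/ssl/certs/177432.pem
    		fmt.Printf("local asset: %s -> /%s\n", path, rel)
    		return nil
    	})
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
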
	I1204 21:37:40.661765   83039 start.go:296] duration metric: took 123.941418ms for postStartSetup
	I1204 21:37:40.661807   83039 fix.go:56] duration metric: took 19.489436625s for fixHost
	I1204 21:37:40.661831   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHHostname
	I1204 21:37:40.664337   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:40.664642   83039 main.go:141] libmachine: (newest-cni-594114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:cc:25", ip: ""} in network mk-newest-cni-594114: {Iface:virbr3 ExpiryTime:2024-12-04 22:37:32 +0000 UTC Type:0 Mac:52:54:00:b8:cc:25 Iaid: IPaddr:192.168.72.161 Prefix:24 Hostname:newest-cni-594114 Clientid:01:52:54:00:b8:cc:25}
	I1204 21:37:40.664670   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined IP address 192.168.72.161 and MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:40.664803   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHPort
	I1204 21:37:40.664995   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHKeyPath
	I1204 21:37:40.665255   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHKeyPath
	I1204 21:37:40.665374   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHUsername
	I1204 21:37:40.665562   83039 main.go:141] libmachine: Using SSH client type: native
	I1204 21:37:40.665740   83039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.161 22 <nil> <nil>}
	I1204 21:37:40.665753   83039 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 21:37:40.771883   83039 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733348260.746416551
	
	I1204 21:37:40.771911   83039 fix.go:216] guest clock: 1733348260.746416551
	I1204 21:37:40.771920   83039 fix.go:229] Guest: 2024-12-04 21:37:40.746416551 +0000 UTC Remote: 2024-12-04 21:37:40.661813691 +0000 UTC m=+19.634893594 (delta=84.60286ms)
	I1204 21:37:40.771946   83039 fix.go:200] guest clock delta is within tolerance: 84.60286ms
	I1204 21:37:40.771953   83039 start.go:83] releasing machines lock for "newest-cni-594114", held for 19.599595021s
	I1204 21:37:40.771977   83039 main.go:141] libmachine: (newest-cni-594114) Calling .DriverName
	I1204 21:37:40.772242   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetIP
	I1204 21:37:40.774601   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:40.774944   83039 main.go:141] libmachine: (newest-cni-594114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:cc:25", ip: ""} in network mk-newest-cni-594114: {Iface:virbr3 ExpiryTime:2024-12-04 22:37:32 +0000 UTC Type:0 Mac:52:54:00:b8:cc:25 Iaid: IPaddr:192.168.72.161 Prefix:24 Hostname:newest-cni-594114 Clientid:01:52:54:00:b8:cc:25}
	I1204 21:37:40.774968   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined IP address 192.168.72.161 and MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:40.775140   83039 main.go:141] libmachine: (newest-cni-594114) Calling .DriverName
	I1204 21:37:40.775596   83039 main.go:141] libmachine: (newest-cni-594114) Calling .DriverName
	I1204 21:37:40.775749   83039 main.go:141] libmachine: (newest-cni-594114) Calling .DriverName
	I1204 21:37:40.775849   83039 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 21:37:40.775887   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHHostname
	I1204 21:37:40.775914   83039 ssh_runner.go:195] Run: cat /version.json
	I1204 21:37:40.775934   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHHostname
	I1204 21:37:40.778531   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:40.778790   83039 main.go:141] libmachine: (newest-cni-594114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:cc:25", ip: ""} in network mk-newest-cni-594114: {Iface:virbr3 ExpiryTime:2024-12-04 22:37:32 +0000 UTC Type:0 Mac:52:54:00:b8:cc:25 Iaid: IPaddr:192.168.72.161 Prefix:24 Hostname:newest-cni-594114 Clientid:01:52:54:00:b8:cc:25}
	I1204 21:37:40.778818   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined IP address 192.168.72.161 and MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:40.778933   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:40.778983   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHPort
	I1204 21:37:40.779151   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHKeyPath
	I1204 21:37:40.779353   83039 main.go:141] libmachine: (newest-cni-594114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:cc:25", ip: ""} in network mk-newest-cni-594114: {Iface:virbr3 ExpiryTime:2024-12-04 22:37:32 +0000 UTC Type:0 Mac:52:54:00:b8:cc:25 Iaid: IPaddr:192.168.72.161 Prefix:24 Hostname:newest-cni-594114 Clientid:01:52:54:00:b8:cc:25}
	I1204 21:37:40.779392   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined IP address 192.168.72.161 and MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:40.779358   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHUsername
	I1204 21:37:40.779532   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHPort
	I1204 21:37:40.779617   83039 sshutil.go:53] new ssh client: &{IP:192.168.72.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/newest-cni-594114/id_rsa Username:docker}
	I1204 21:37:40.779688   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHKeyPath
	I1204 21:37:40.779850   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHUsername
	I1204 21:37:40.779996   83039 sshutil.go:53] new ssh client: &{IP:192.168.72.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/newest-cni-594114/id_rsa Username:docker}
	I1204 21:37:40.855500   83039 ssh_runner.go:195] Run: systemctl --version
	I1204 21:37:40.882628   83039 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 21:37:41.020264   83039 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 21:37:41.025754   83039 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 21:37:41.025811   83039 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 21:37:41.040018   83039 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 21:37:41.040040   83039 start.go:495] detecting cgroup driver to use...
	I1204 21:37:41.040084   83039 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 21:37:41.054474   83039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 21:37:41.067264   83039 docker.go:217] disabling cri-docker service (if available) ...
	I1204 21:37:41.067318   83039 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 21:37:41.080025   83039 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 21:37:41.093067   83039 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 21:37:41.200434   83039 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 21:37:41.337724   83039 docker.go:233] disabling docker service ...
	I1204 21:37:41.337799   83039 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 21:37:41.352026   83039 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 21:37:41.365008   83039 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 21:37:41.496467   83039 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 21:37:41.606291   83039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 21:37:41.619825   83039 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 21:37:41.637316   83039 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 21:37:41.637374   83039 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:37:41.646808   83039 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 21:37:41.646871   83039 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:37:41.656755   83039 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:37:41.666609   83039 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:37:41.676430   83039 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 21:37:41.686256   83039 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:37:41.695711   83039 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:37:41.711837   83039 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:37:41.722770   83039 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 21:37:41.731512   83039 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 21:37:41.731578   83039 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 21:37:41.746284   83039 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 21:37:41.756459   83039 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:37:41.872403   83039 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 21:37:41.960683   83039 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 21:37:41.960746   83039 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
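[editor's note] After restarting CRI-O the start-up code waits up to 60s for the runtime socket before probing crictl. In the real run the stat is executed inside the guest via the SSH runner; the sketch below just polls a path locally with the same deadline idea (interval is an assumption):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForPath polls until the file exists or the deadline expires.
    func waitForPath(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    }

    func main() {
    	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("crio socket is ready")
    }
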
	I1204 21:37:41.965596   83039 start.go:563] Will wait 60s for crictl version
	I1204 21:37:41.965656   83039 ssh_runner.go:195] Run: which crictl
	I1204 21:37:41.969128   83039 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 21:37:42.005583   83039 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 21:37:42.005659   83039 ssh_runner.go:195] Run: crio --version
	I1204 21:37:42.032405   83039 ssh_runner.go:195] Run: crio --version
	I1204 21:37:42.059152   83039 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 21:37:42.060319   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetIP
	I1204 21:37:42.063192   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:42.063630   83039 main.go:141] libmachine: (newest-cni-594114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:cc:25", ip: ""} in network mk-newest-cni-594114: {Iface:virbr3 ExpiryTime:2024-12-04 22:37:32 +0000 UTC Type:0 Mac:52:54:00:b8:cc:25 Iaid: IPaddr:192.168.72.161 Prefix:24 Hostname:newest-cni-594114 Clientid:01:52:54:00:b8:cc:25}
	I1204 21:37:42.063662   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined IP address 192.168.72.161 and MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:37:42.063840   83039 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1204 21:37:42.067709   83039 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:37:42.081111   83039 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1204 21:37:42.082288   83039 kubeadm.go:883] updating cluster {Name:newest-cni-594114 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:newest-cni-594114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.161 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:
6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 21:37:42.082419   83039 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 21:37:42.082477   83039 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:37:42.118527   83039 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1204 21:37:42.118616   83039 ssh_runner.go:195] Run: which lz4
	I1204 21:37:42.122287   83039 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1204 21:37:42.126036   83039 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1204 21:37:42.126059   83039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1204 21:37:43.371802   83039 crio.go:462] duration metric: took 1.249550511s to copy over tarball
	I1204 21:37:43.371874   83039 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1204 21:37:45.409410   83039 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.037508268s)
	I1204 21:37:45.409442   83039 crio.go:469] duration metric: took 2.037611548s to extract the tarball
	I1204 21:37:45.409451   83039 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1204 21:37:45.446083   83039 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:37:45.490199   83039 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 21:37:45.490225   83039 cache_images.go:84] Images are preloaded, skipping loading
	I1204 21:37:45.490233   83039 kubeadm.go:934] updating node { 192.168.72.161 8443 v1.31.2 crio true true} ...
	I1204 21:37:45.490356   83039 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-594114 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.161
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:newest-cni-594114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 21:37:45.490499   83039 ssh_runner.go:195] Run: crio config
	I1204 21:37:45.532801   83039 cni.go:84] Creating CNI manager for ""
	I1204 21:37:45.532823   83039 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:37:45.532832   83039 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1204 21:37:45.532852   83039 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.161 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-594114 NodeName:newest-cni-594114 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.161"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:ma
p[] NodeIP:192.168.72.161 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 21:37:45.532970   83039 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.161
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-594114"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.161"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.161"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1204 21:37:45.533027   83039 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 21:37:45.543938   83039 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 21:37:45.544022   83039 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 21:37:45.553616   83039 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I1204 21:37:45.569824   83039 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 21:37:45.585516   83039 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2487 bytes)
	I1204 21:37:45.602083   83039 ssh_runner.go:195] Run: grep 192.168.72.161	control-plane.minikube.internal$ /etc/hosts
	I1204 21:37:45.605908   83039 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.161	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:37:45.617650   83039 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:37:45.730568   83039 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:37:45.751587   83039 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/newest-cni-594114 for IP: 192.168.72.161
	I1204 21:37:45.751618   83039 certs.go:194] generating shared ca certs ...
	I1204 21:37:45.751640   83039 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:37:45.751827   83039 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 21:37:45.751884   83039 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 21:37:45.751899   83039 certs.go:256] generating profile certs ...
	I1204 21:37:45.752004   83039 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/newest-cni-594114/client.key
	I1204 21:37:45.752085   83039 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/newest-cni-594114/apiserver.key.19fd90cf
	I1204 21:37:45.752149   83039 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/newest-cni-594114/proxy-client.key
	I1204 21:37:45.752278   83039 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 21:37:45.752315   83039 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 21:37:45.752325   83039 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 21:37:45.752347   83039 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 21:37:45.752368   83039 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 21:37:45.752404   83039 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 21:37:45.752467   83039 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:37:45.753245   83039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 21:37:45.812465   83039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 21:37:45.849015   83039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 21:37:45.886154   83039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 21:37:45.913639   83039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/newest-cni-594114/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1204 21:37:45.949709   83039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/newest-cni-594114/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 21:37:45.972404   83039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/newest-cni-594114/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 21:37:45.994293   83039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/newest-cni-594114/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 21:37:46.015944   83039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 21:37:46.037550   83039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 21:37:46.058921   83039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 21:37:46.081356   83039 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 21:37:46.097607   83039 ssh_runner.go:195] Run: openssl version
	I1204 21:37:46.103404   83039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 21:37:46.114065   83039 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:37:46.118507   83039 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:37:46.118570   83039 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:37:46.124386   83039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 21:37:46.134944   83039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 21:37:46.145440   83039 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 21:37:46.149625   83039 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 21:37:46.149682   83039 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 21:37:46.154971   83039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 21:37:46.164623   83039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 21:37:46.174258   83039 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 21:37:46.178168   83039 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 21:37:46.178203   83039 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 21:37:46.183519   83039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 21:37:46.193493   83039 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 21:37:46.197291   83039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 21:37:46.202403   83039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 21:37:46.207421   83039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 21:37:46.212623   83039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 21:37:46.217741   83039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 21:37:46.222861   83039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
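[editor's note] Each control-plane certificate above is checked with `openssl x509 -checkend 86400`, i.e. "does it expire within the next 24 hours?", and would only be regenerated if that check failed. An equivalent check in Go for one PEM file (the path is just an example from the list above):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// Same question as `openssl x509 -checkend 86400`.
    	if time.Until(cert.NotAfter) < 24*time.Hour {
    		fmt.Println("certificate expires within 24h, would regenerate")
    		os.Exit(1)
    	}
    	fmt.Println("certificate is valid for at least another 24h")
    }
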
	I1204 21:37:46.228028   83039 kubeadm.go:392] StartCluster: {Name:newest-cni-594114 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.2 ClusterName:newest-cni-594114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.161 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0
s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:37:46.228125   83039 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 21:37:46.228160   83039 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:37:46.261870   83039 cri.go:89] found id: ""
	I1204 21:37:46.261931   83039 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 21:37:46.271695   83039 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1204 21:37:46.271715   83039 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1204 21:37:46.271761   83039 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1204 21:37:46.281122   83039 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1204 21:37:46.281708   83039 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-594114" does not appear in /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:37:46.281953   83039 kubeconfig.go:62] /home/jenkins/minikube-integration/19985-10581/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-594114" cluster setting kubeconfig missing "newest-cni-594114" context setting]
	I1204 21:37:46.282341   83039 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/kubeconfig: {Name:mk338cb7deb77a607d0c199d94a556bdfd19bef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:37:46.283603   83039 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1204 21:37:46.292298   83039 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.161
	I1204 21:37:46.292327   83039 kubeadm.go:1160] stopping kube-system containers ...
	I1204 21:37:46.292339   83039 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1204 21:37:46.292390   83039 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:37:46.324310   83039 cri.go:89] found id: ""
	I1204 21:37:46.324386   83039 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1204 21:37:46.340233   83039 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:37:46.348911   83039 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:37:46.348930   83039 kubeadm.go:157] found existing configuration files:
	
	I1204 21:37:46.348975   83039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:37:46.356880   83039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:37:46.356914   83039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:37:46.365146   83039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:37:46.373225   83039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:37:46.373277   83039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:37:46.381864   83039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:37:46.390255   83039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:37:46.390305   83039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:37:46.399223   83039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:37:46.407701   83039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:37:46.407746   83039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:37:46.416451   83039 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
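The probe above is minikube's stale-config check: each kubeconfig expected under /etc/kubernetes is listed and grepped for the expected control-plane endpoint, and anything missing or pointing elsewhere is removed before the files are regenerated. Condensed from the Run: lines, the equivalent manual sequence would be roughly the following (a sketch, not part of the captured run):

	sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
	            /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	# for each file: keep it only if it already targets the expected endpoint
	sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf \
	  || sudo rm -f /etc/kubernetes/admin.conf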
	I1204 21:37:46.425417   83039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:37:46.536588   83039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:37:48.125066   83039 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.588442257s)
	I1204 21:37:48.125101   83039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:37:48.355603   83039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:37:48.424032   83039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
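The five kubeadm init phase runs above rebuild the certificates, the kubeconfigs, the kubelet bootstrap config, the control-plane static-pod manifests and the local etcd member without performing a full kubeadm init. A quick manual check that the phases produced the expected artifacts might look like this (a sketch; the manifest file names are the standard kubeadm ones, not taken from this log):

	sudo ls /etc/kubernetes/manifests
	# expected: etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
	sudo crictl ps -a --name kube-apiserver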
	I1204 21:37:48.524826   83039 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:37:48.524914   83039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:37:49.025850   83039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:37:49.525418   83039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:37:50.025084   83039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:37:50.525161   83039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:37:50.538532   83039 api_server.go:72] duration metric: took 2.013704253s to wait for apiserver process to appear ...
	I1204 21:37:50.538562   83039 api_server.go:88] waiting for apiserver healthz status ...
	I1204 21:37:50.538580   83039 api_server.go:253] Checking apiserver healthz at https://192.168.72.161:8443/healthz ...
	I1204 21:37:55.539457   83039 api_server.go:269] stopped: https://192.168.72.161:8443/healthz: Get "https://192.168.72.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 21:37:55.539533   83039 api_server.go:253] Checking apiserver healthz at https://192.168.72.161:8443/healthz ...
	I1204 21:38:00.539782   83039 api_server.go:269] stopped: https://192.168.72.161:8443/healthz: Get "https://192.168.72.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 21:38:00.539822   83039 api_server.go:253] Checking apiserver healthz at https://192.168.72.161:8443/healthz ...
	I1204 21:38:05.540335   83039 api_server.go:269] stopped: https://192.168.72.161:8443/healthz: Get "https://192.168.72.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 21:38:05.540375   83039 api_server.go:253] Checking apiserver healthz at https://192.168.72.161:8443/healthz ...
	I1204 21:38:10.541446   83039 api_server.go:269] stopped: https://192.168.72.161:8443/healthz: Get "https://192.168.72.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 21:38:10.541490   83039 api_server.go:253] Checking apiserver healthz at https://192.168.72.161:8443/healthz ...
	I1204 21:38:10.924838   83039 api_server.go:269] stopped: https://192.168.72.161:8443/healthz: Get "https://192.168.72.161:8443/healthz": read tcp 192.168.72.1:43328->192.168.72.161:8443: read: connection reset by peer
	I1204 21:38:11.039062   83039 api_server.go:253] Checking apiserver healthz at https://192.168.72.161:8443/healthz ...
	I1204 21:38:11.039668   83039 api_server.go:269] stopped: https://192.168.72.161:8443/healthz: Get "https://192.168.72.161:8443/healthz": dial tcp 192.168.72.161:8443: connect: connection refused
	I1204 21:38:11.538669   83039 api_server.go:253] Checking apiserver healthz at https://192.168.72.161:8443/healthz ...
	I1204 21:38:11.539280   83039 api_server.go:269] stopped: https://192.168.72.161:8443/healthz: Get "https://192.168.72.161:8443/healthz": dial tcp 192.168.72.161:8443: connect: connection refused
	I1204 21:38:12.038894   83039 api_server.go:253] Checking apiserver healthz at https://192.168.72.161:8443/healthz ...
	I1204 21:38:17.039780   83039 api_server.go:269] stopped: https://192.168.72.161:8443/healthz: Get "https://192.168.72.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 21:38:17.039821   83039 api_server.go:253] Checking apiserver healthz at https://192.168.72.161:8443/healthz ...
	I1204 21:38:22.040158   83039 api_server.go:269] stopped: https://192.168.72.161:8443/healthz: Get "https://192.168.72.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 21:38:22.040202   83039 api_server.go:253] Checking apiserver healthz at https://192.168.72.161:8443/healthz ...
	I1204 21:38:27.040711   83039 api_server.go:269] stopped: https://192.168.72.161:8443/healthz: Get "https://192.168.72.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 21:38:27.040753   83039 api_server.go:253] Checking apiserver healthz at https://192.168.72.161:8443/healthz ...
	I1204 21:38:27.817320   83039 api_server.go:279] https://192.168.72.161:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1204 21:38:27.817349   83039 api_server.go:103] status: https://192.168.72.161:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1204 21:38:27.817365   83039 api_server.go:253] Checking apiserver healthz at https://192.168.72.161:8443/healthz ...
	I1204 21:38:27.945319   83039 api_server.go:279] https://192.168.72.161:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1204 21:38:27.945355   83039 api_server.go:103] status: https://192.168.72.161:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1204 21:38:28.039638   83039 api_server.go:253] Checking apiserver healthz at https://192.168.72.161:8443/healthz ...
	I1204 21:38:28.043761   83039 api_server.go:279] https://192.168.72.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:38:28.043790   83039 api_server.go:103] status: https://192.168.72.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:38:28.539396   83039 api_server.go:253] Checking apiserver healthz at https://192.168.72.161:8443/healthz ...
	I1204 21:38:28.544103   83039 api_server.go:279] https://192.168.72.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:38:28.544132   83039 api_server.go:103] status: https://192.168.72.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:38:29.038672   83039 api_server.go:253] Checking apiserver healthz at https://192.168.72.161:8443/healthz ...
	I1204 21:38:29.044458   83039 api_server.go:279] https://192.168.72.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:38:29.044483   83039 api_server.go:103] status: https://192.168.72.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:38:29.538752   83039 api_server.go:253] Checking apiserver healthz at https://192.168.72.161:8443/healthz ...
	I1204 21:38:29.542612   83039 api_server.go:279] https://192.168.72.161:8443/healthz returned 200:
	ok
	I1204 21:38:29.548534   83039 api_server.go:141] control plane version: v1.31.2
	I1204 21:38:29.548562   83039 api_server.go:131] duration metric: took 39.009992508s to wait for apiserver health ...
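The polling pattern above is worth reading in order: the connection-refused and timeout errors mean the apiserver is not yet listening, the 403 means it is serving but the anonymous /healthz request is rejected until the RBAC bootstrap roles that permit unauthenticated health checks are installed, and the 500 bodies enumerate exactly which post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) are still pending; minikube simply keeps retrying until it gets a plain 200. Reproducing the probe by hand would look roughly like this (a sketch, using the endpoint from this run):

	curl -k https://192.168.72.161:8443/healthz            # -k: the probe is anonymous, cert not checked
	curl -k "https://192.168.72.161:8443/healthz?verbose"  # per-check breakdown, like the 500 bodies above
	until curl -ksf https://192.168.72.161:8443/healthz >/dev/null; do sleep 1; done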
	I1204 21:38:29.548576   83039 cni.go:84] Creating CNI manager for ""
	I1204 21:38:29.548583   83039 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:38:29.550476   83039 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 21:38:29.551865   83039 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 21:38:29.562430   83039 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
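Because the kvm2 driver is paired with the crio runtime, minikube recommends its built-in bridge CNI and writes a single conflist onto the node rather than deploying a CNI DaemonSet. To inspect what actually landed, one could run (a sketch):

	ls /etc/cni/net.d/
	sudo cat /etc/cni/net.d/1-k8s.conflist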
	I1204 21:38:29.581751   83039 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 21:38:29.591742   83039 system_pods.go:59] 9 kube-system pods found
	I1204 21:38:29.591776   83039 system_pods.go:61] "coredns-7c65d6cfc9-gmgrx" [de6211ff-b549-42d3-8d02-cdf56203fbab] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1204 21:38:29.591786   83039 system_pods.go:61] "coredns-7c65d6cfc9-rxh2v" [ba642ed0-da14-4110-adf3-533d36ee54aa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1204 21:38:29.591795   83039 system_pods.go:61] "etcd-newest-cni-594114" [ee7216f2-d489-4697-b52d-e66ae014529c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1204 21:38:29.591805   83039 system_pods.go:61] "kube-apiserver-newest-cni-594114" [08ebfc45-21fa-4d64-b1d7-744fb0c914ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1204 21:38:29.591812   83039 system_pods.go:61] "kube-controller-manager-newest-cni-594114" [433e5f8a-f386-4143-887a-95adf901860f] Running
	I1204 21:38:29.591819   83039 system_pods.go:61] "kube-proxy-qtb8n" [a60e06c4-953b-4eaa-8cf4-49ee97e8e69c] Running
	I1204 21:38:29.591831   83039 system_pods.go:61] "kube-scheduler-newest-cni-594114" [6ca5d764-5a4b-466c-8742-fc7208726982] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1204 21:38:29.591840   83039 system_pods.go:61] "metrics-server-6867b74b74-vwv82" [78df55bb-c2d1-4e2b-83df-23a60b63ac9f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:38:29.591846   83039 system_pods.go:61] "storage-provisioner" [662af7fc-050a-401b-9149-57ce615043ef] Running
	I1204 21:38:29.591857   83039 system_pods.go:74] duration metric: took 10.089ms to wait for pod list to return data ...
	I1204 21:38:29.591867   83039 node_conditions.go:102] verifying NodePressure condition ...
	I1204 21:38:29.595843   83039 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 21:38:29.595869   83039 node_conditions.go:123] node cpu capacity is 2
	I1204 21:38:29.595884   83039 node_conditions.go:105] duration metric: took 4.010223ms to run NodePressure ...
	I1204 21:38:29.595903   83039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:38:29.864367   83039 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 21:38:29.875590   83039 ops.go:34] apiserver oom_adj: -16
	I1204 21:38:29.875611   83039 kubeadm.go:597] duration metric: took 43.603888898s to restartPrimaryControlPlane
	I1204 21:38:29.875622   83039 kubeadm.go:394] duration metric: took 43.647598938s to StartCluster
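The oom_adj read above doubles as a sanity check that the restarted apiserver is running as a protected static pod: -16 on the legacy /proc/<pid>/oom_adj scale corresponds to the strongly negative oom_score_adj the kubelet assigns to critical static pods (typically -997), so the apiserver is among the last processes the OOM killer would touch. The same check by hand (a sketch):

	cat /proc/$(pgrep kube-apiserver)/oom_adj        # legacy scale (-17..15); -16 in this run
	cat /proc/$(pgrep kube-apiserver)/oom_score_adj  # modern scale; typically -997 for critical static pods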
	I1204 21:38:29.875643   83039 settings.go:142] acquiring lock: {Name:mk51df5708ef0b8fe125ead566b8d3e857234e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:38:29.875724   83039 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:38:29.876634   83039 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/kubeconfig: {Name:mk338cb7deb77a607d0c199d94a556bdfd19bef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:38:29.876831   83039 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.161 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 21:38:29.876928   83039 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 21:38:29.877029   83039 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-594114"
	I1204 21:38:29.877042   83039 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-594114"
	W1204 21:38:29.877050   83039 addons.go:243] addon storage-provisioner should already be in state true
	I1204 21:38:29.877052   83039 addons.go:69] Setting default-storageclass=true in profile "newest-cni-594114"
	I1204 21:38:29.877071   83039 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-594114"
	I1204 21:38:29.877083   83039 host.go:66] Checking if "newest-cni-594114" exists ...
	I1204 21:38:29.877108   83039 config.go:182] Loaded profile config "newest-cni-594114": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:38:29.877109   83039 addons.go:69] Setting metrics-server=true in profile "newest-cni-594114"
	I1204 21:38:29.877139   83039 addons.go:234] Setting addon metrics-server=true in "newest-cni-594114"
	I1204 21:38:29.877128   83039 addons.go:69] Setting dashboard=true in profile "newest-cni-594114"
	W1204 21:38:29.877148   83039 addons.go:243] addon metrics-server should already be in state true
	I1204 21:38:29.877159   83039 addons.go:234] Setting addon dashboard=true in "newest-cni-594114"
	W1204 21:38:29.877173   83039 addons.go:243] addon dashboard should already be in state true
	I1204 21:38:29.877193   83039 host.go:66] Checking if "newest-cni-594114" exists ...
	I1204 21:38:29.877206   83039 host.go:66] Checking if "newest-cni-594114" exists ...
	I1204 21:38:29.877464   83039 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:38:29.877493   83039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:38:29.877466   83039 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:38:29.877573   83039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:38:29.877632   83039 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:38:29.877637   83039 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:38:29.877668   83039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:38:29.877676   83039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:38:29.878442   83039 out.go:177] * Verifying Kubernetes components...
	I1204 21:38:29.879635   83039 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:38:29.892977   83039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46835
	I1204 21:38:29.893569   83039 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:38:29.894065   83039 main.go:141] libmachine: Using API Version  1
	I1204 21:38:29.894090   83039 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:38:29.894501   83039 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:38:29.894769   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetState
	I1204 21:38:29.897372   83039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45073
	I1204 21:38:29.897642   83039 addons.go:234] Setting addon default-storageclass=true in "newest-cni-594114"
	W1204 21:38:29.897663   83039 addons.go:243] addon default-storageclass should already be in state true
	I1204 21:38:29.897688   83039 host.go:66] Checking if "newest-cni-594114" exists ...
	I1204 21:38:29.897726   83039 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:38:29.898031   83039 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:38:29.898064   83039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:38:29.898179   83039 main.go:141] libmachine: Using API Version  1
	I1204 21:38:29.898194   83039 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:38:29.898547   83039 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:38:29.898995   83039 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:38:29.899028   83039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:38:29.909400   83039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38253
	I1204 21:38:29.909803   83039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44625
	I1204 21:38:29.909950   83039 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:38:29.910259   83039 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:38:29.910491   83039 main.go:141] libmachine: Using API Version  1
	I1204 21:38:29.910512   83039 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:38:29.910731   83039 main.go:141] libmachine: Using API Version  1
	I1204 21:38:29.910752   83039 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:38:29.910860   83039 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:38:29.911390   83039 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:38:29.911426   83039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:38:29.911441   83039 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:38:29.911998   83039 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:38:29.912034   83039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:38:29.915265   83039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33071
	I1204 21:38:29.915965   83039 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:38:29.916531   83039 main.go:141] libmachine: Using API Version  1
	I1204 21:38:29.916548   83039 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:38:29.916794   83039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33709
	I1204 21:38:29.916953   83039 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:38:29.917147   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetState
	I1204 21:38:29.917217   83039 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:38:29.917677   83039 main.go:141] libmachine: Using API Version  1
	I1204 21:38:29.917692   83039 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:38:29.918099   83039 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:38:29.918801   83039 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:38:29.918834   83039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:38:29.919011   83039 main.go:141] libmachine: (newest-cni-594114) Calling .DriverName
	I1204 21:38:29.921044   83039 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:38:29.922537   83039 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:38:29.922556   83039 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 21:38:29.922573   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHHostname
	I1204 21:38:29.926225   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:38:29.926822   83039 main.go:141] libmachine: (newest-cni-594114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:cc:25", ip: ""} in network mk-newest-cni-594114: {Iface:virbr3 ExpiryTime:2024-12-04 22:37:32 +0000 UTC Type:0 Mac:52:54:00:b8:cc:25 Iaid: IPaddr:192.168.72.161 Prefix:24 Hostname:newest-cni-594114 Clientid:01:52:54:00:b8:cc:25}
	I1204 21:38:29.926843   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined IP address 192.168.72.161 and MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:38:29.927036   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHPort
	I1204 21:38:29.927231   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHKeyPath
	I1204 21:38:29.927389   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHUsername
	I1204 21:38:29.927598   83039 sshutil.go:53] new ssh client: &{IP:192.168.72.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/newest-cni-594114/id_rsa Username:docker}
	I1204 21:38:29.929569   83039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35019
	I1204 21:38:29.929971   83039 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:38:29.930313   83039 main.go:141] libmachine: Using API Version  1
	I1204 21:38:29.930329   83039 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:38:29.930635   83039 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:38:29.930799   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetState
	I1204 21:38:29.930906   83039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46607
	I1204 21:38:29.931452   83039 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:38:29.931919   83039 main.go:141] libmachine: Using API Version  1
	I1204 21:38:29.931941   83039 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:38:29.932188   83039 main.go:141] libmachine: (newest-cni-594114) Calling .DriverName
	I1204 21:38:29.932251   83039 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:38:29.932580   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetState
	I1204 21:38:29.934077   83039 main.go:141] libmachine: (newest-cni-594114) Calling .DriverName
	I1204 21:38:29.935547   83039 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1204 21:38:29.935646   83039 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1204 21:38:29.936592   83039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35917
	I1204 21:38:29.936878   83039 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1204 21:38:29.936921   83039 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1204 21:38:29.936939   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHHostname
	I1204 21:38:29.936966   83039 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:38:29.937456   83039 main.go:141] libmachine: Using API Version  1
	I1204 21:38:29.937483   83039 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:38:29.937844   83039 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:38:29.937904   83039 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1204 21:38:29.938070   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetState
	I1204 21:38:29.938883   83039 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1204 21:38:29.938901   83039 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1204 21:38:29.938918   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHHostname
	I1204 21:38:29.939816   83039 main.go:141] libmachine: (newest-cni-594114) Calling .DriverName
	I1204 21:38:29.940023   83039 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 21:38:29.940045   83039 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 21:38:29.940061   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHHostname
	I1204 21:38:29.940449   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:38:29.940948   83039 main.go:141] libmachine: (newest-cni-594114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:cc:25", ip: ""} in network mk-newest-cni-594114: {Iface:virbr3 ExpiryTime:2024-12-04 22:37:32 +0000 UTC Type:0 Mac:52:54:00:b8:cc:25 Iaid: IPaddr:192.168.72.161 Prefix:24 Hostname:newest-cni-594114 Clientid:01:52:54:00:b8:cc:25}
	I1204 21:38:29.940968   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined IP address 192.168.72.161 and MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:38:29.941110   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHPort
	I1204 21:38:29.941256   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHKeyPath
	I1204 21:38:29.941392   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHUsername
	I1204 21:38:29.941531   83039 sshutil.go:53] new ssh client: &{IP:192.168.72.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/newest-cni-594114/id_rsa Username:docker}
	I1204 21:38:29.942361   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:38:29.942688   83039 main.go:141] libmachine: (newest-cni-594114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:cc:25", ip: ""} in network mk-newest-cni-594114: {Iface:virbr3 ExpiryTime:2024-12-04 22:37:32 +0000 UTC Type:0 Mac:52:54:00:b8:cc:25 Iaid: IPaddr:192.168.72.161 Prefix:24 Hostname:newest-cni-594114 Clientid:01:52:54:00:b8:cc:25}
	I1204 21:38:29.942709   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined IP address 192.168.72.161 and MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:38:29.942882   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHPort
	I1204 21:38:29.943053   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHKeyPath
	I1204 21:38:29.943170   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHUsername
	I1204 21:38:29.943281   83039 sshutil.go:53] new ssh client: &{IP:192.168.72.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/newest-cni-594114/id_rsa Username:docker}
	I1204 21:38:29.943576   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:38:29.943739   83039 main.go:141] libmachine: (newest-cni-594114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:cc:25", ip: ""} in network mk-newest-cni-594114: {Iface:virbr3 ExpiryTime:2024-12-04 22:37:32 +0000 UTC Type:0 Mac:52:54:00:b8:cc:25 Iaid: IPaddr:192.168.72.161 Prefix:24 Hostname:newest-cni-594114 Clientid:01:52:54:00:b8:cc:25}
	I1204 21:38:29.943760   83039 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined IP address 192.168.72.161 and MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:38:29.943839   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHPort
	I1204 21:38:29.943981   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHKeyPath
	I1204 21:38:29.944107   83039 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHUsername
	I1204 21:38:29.944241   83039 sshutil.go:53] new ssh client: &{IP:192.168.72.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/newest-cni-594114/id_rsa Username:docker}
	I1204 21:38:30.042001   83039 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:38:30.060812   83039 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:38:30.060878   83039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:38:30.076574   83039 api_server.go:72] duration metric: took 199.71334ms to wait for apiserver process to appear ...
	I1204 21:38:30.076601   83039 api_server.go:88] waiting for apiserver healthz status ...
	I1204 21:38:30.076618   83039 api_server.go:253] Checking apiserver healthz at https://192.168.72.161:8443/healthz ...
	I1204 21:38:30.081766   83039 api_server.go:279] https://192.168.72.161:8443/healthz returned 200:
	ok
	I1204 21:38:30.082619   83039 api_server.go:141] control plane version: v1.31.2
	I1204 21:38:30.082638   83039 api_server.go:131] duration metric: took 6.031679ms to wait for apiserver health ...
	I1204 21:38:30.082645   83039 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 21:38:30.088281   83039 system_pods.go:59] 9 kube-system pods found
	I1204 21:38:30.088309   83039 system_pods.go:61] "coredns-7c65d6cfc9-gmgrx" [de6211ff-b549-42d3-8d02-cdf56203fbab] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1204 21:38:30.088316   83039 system_pods.go:61] "coredns-7c65d6cfc9-rxh2v" [ba642ed0-da14-4110-adf3-533d36ee54aa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1204 21:38:30.088325   83039 system_pods.go:61] "etcd-newest-cni-594114" [ee7216f2-d489-4697-b52d-e66ae014529c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1204 21:38:30.088335   83039 system_pods.go:61] "kube-apiserver-newest-cni-594114" [08ebfc45-21fa-4d64-b1d7-744fb0c914ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1204 21:38:30.088347   83039 system_pods.go:61] "kube-controller-manager-newest-cni-594114" [433e5f8a-f386-4143-887a-95adf901860f] Running
	I1204 21:38:30.088358   83039 system_pods.go:61] "kube-proxy-qtb8n" [a60e06c4-953b-4eaa-8cf4-49ee97e8e69c] Running
	I1204 21:38:30.088366   83039 system_pods.go:61] "kube-scheduler-newest-cni-594114" [6ca5d764-5a4b-466c-8742-fc7208726982] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1204 21:38:30.088374   83039 system_pods.go:61] "metrics-server-6867b74b74-vwv82" [78df55bb-c2d1-4e2b-83df-23a60b63ac9f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:38:30.088381   83039 system_pods.go:61] "storage-provisioner" [662af7fc-050a-401b-9149-57ce615043ef] Running
	I1204 21:38:30.088388   83039 system_pods.go:74] duration metric: took 5.737328ms to wait for pod list to return data ...
	I1204 21:38:30.088397   83039 default_sa.go:34] waiting for default service account to be created ...
	I1204 21:38:30.090909   83039 default_sa.go:45] found service account: "default"
	I1204 21:38:30.090927   83039 default_sa.go:55] duration metric: took 2.524546ms for default service account to be created ...
	I1204 21:38:30.090937   83039 kubeadm.go:582] duration metric: took 214.083299ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1204 21:38:30.090959   83039 node_conditions.go:102] verifying NodePressure condition ...
	I1204 21:38:30.093237   83039 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 21:38:30.093257   83039 node_conditions.go:123] node cpu capacity is 2
	I1204 21:38:30.093266   83039 node_conditions.go:105] duration metric: took 2.302831ms to run NodePressure ...
	I1204 21:38:30.093276   83039 start.go:241] waiting for startup goroutines ...
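At this point every component in the wait map logged above (apiserver, default_sa and system_pods set to true) has been satisfied, so addon deployment proceeds in parallel with the remaining startup goroutines. Checking the same conditions by hand would be roughly (a sketch):

	kubectl --context newest-cni-594114 -n kube-system get pods
	kubectl --context newest-cni-594114 -n default get serviceaccount default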
	I1204 21:38:30.128009   83039 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 21:38:30.137421   83039 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1204 21:38:30.137441   83039 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1204 21:38:30.174825   83039 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1204 21:38:30.174846   83039 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1204 21:38:30.195674   83039 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1204 21:38:30.195692   83039 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1204 21:38:30.210517   83039 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:38:30.238414   83039 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1204 21:38:30.238442   83039 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1204 21:38:30.256295   83039 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1204 21:38:30.256324   83039 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1204 21:38:30.282176   83039 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 21:38:30.282208   83039 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1204 21:38:30.301913   83039 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 21:38:30.310680   83039 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1204 21:38:30.310703   83039 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1204 21:38:30.399292   83039 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1204 21:38:30.399317   83039 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1204 21:38:30.485292   83039 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1204 21:38:30.485323   83039 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1204 21:38:30.519753   83039 main.go:141] libmachine: Making call to close driver server
	I1204 21:38:30.519775   83039 main.go:141] libmachine: (newest-cni-594114) Calling .Close
	I1204 21:38:30.520070   83039 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:38:30.520084   83039 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:38:30.520092   83039 main.go:141] libmachine: Making call to close driver server
	I1204 21:38:30.520099   83039 main.go:141] libmachine: (newest-cni-594114) Calling .Close
	I1204 21:38:30.520338   83039 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:38:30.520357   83039 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:38:30.520367   83039 main.go:141] libmachine: (newest-cni-594114) DBG | Closing plugin on server side
	I1204 21:38:30.532083   83039 main.go:141] libmachine: Making call to close driver server
	I1204 21:38:30.532104   83039 main.go:141] libmachine: (newest-cni-594114) Calling .Close
	I1204 21:38:30.532414   83039 main.go:141] libmachine: (newest-cni-594114) DBG | Closing plugin on server side
	I1204 21:38:30.532438   83039 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:38:30.532451   83039 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:38:30.542935   83039 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1204 21:38:30.542953   83039 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1204 21:38:30.560850   83039 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1204 21:38:30.560871   83039 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1204 21:38:30.611712   83039 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1204 21:38:30.611743   83039 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1204 21:38:30.681041   83039 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1204 21:38:31.466879   83039 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.256324344s)
	I1204 21:38:31.466935   83039 main.go:141] libmachine: Making call to close driver server
	I1204 21:38:31.466955   83039 main.go:141] libmachine: (newest-cni-594114) Calling .Close
	I1204 21:38:31.467389   83039 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:38:31.467444   83039 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:38:31.467460   83039 main.go:141] libmachine: Making call to close driver server
	I1204 21:38:31.467471   83039 main.go:141] libmachine: (newest-cni-594114) Calling .Close
	I1204 21:38:31.467413   83039 main.go:141] libmachine: (newest-cni-594114) DBG | Closing plugin on server side
	I1204 21:38:31.467699   83039 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:38:31.467715   83039 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:38:31.467714   83039 main.go:141] libmachine: (newest-cni-594114) DBG | Closing plugin on server side
	I1204 21:38:31.680224   83039 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.378263758s)
	I1204 21:38:31.680286   83039 main.go:141] libmachine: Making call to close driver server
	I1204 21:38:31.680301   83039 main.go:141] libmachine: (newest-cni-594114) Calling .Close
	I1204 21:38:31.680626   83039 main.go:141] libmachine: (newest-cni-594114) DBG | Closing plugin on server side
	I1204 21:38:31.680670   83039 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:38:31.680677   83039 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:38:31.680686   83039 main.go:141] libmachine: Making call to close driver server
	I1204 21:38:31.680694   83039 main.go:141] libmachine: (newest-cni-594114) Calling .Close
	I1204 21:38:31.681024   83039 main.go:141] libmachine: (newest-cni-594114) DBG | Closing plugin on server side
	I1204 21:38:31.682516   83039 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:38:31.682537   83039 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:38:31.682553   83039 addons.go:475] Verifying addon metrics-server=true in "newest-cni-594114"
	I1204 21:38:31.945726   83039 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.264631337s)
	I1204 21:38:31.945808   83039 main.go:141] libmachine: Making call to close driver server
	I1204 21:38:31.945828   83039 main.go:141] libmachine: (newest-cni-594114) Calling .Close
	I1204 21:38:31.946117   83039 main.go:141] libmachine: (newest-cni-594114) DBG | Closing plugin on server side
	I1204 21:38:31.946166   83039 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:38:31.946184   83039 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:38:31.946199   83039 main.go:141] libmachine: Making call to close driver server
	I1204 21:38:31.946212   83039 main.go:141] libmachine: (newest-cni-594114) Calling .Close
	I1204 21:38:31.946489   83039 main.go:141] libmachine: (newest-cni-594114) DBG | Closing plugin on server side
	I1204 21:38:31.946492   83039 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:38:31.946510   83039 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:38:31.948214   83039 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-594114 addons enable metrics-server
	
	I1204 21:38:31.949962   83039 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1204 21:38:31.951412   83039 addons.go:510] duration metric: took 2.074484377s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
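All four addons were applied with the cluster's pinned kubectl binary against /var/lib/minikube/kubeconfig, so the step does not depend on the host's kubectl or kubeconfig. Verifying the result by hand afterwards might look like this (a sketch):

	minikube -p newest-cni-594114 addons list
	kubectl --context newest-cni-594114 -n kube-system get deploy metrics-server
	kubectl --context newest-cni-594114 -n kubernetes-dashboard get pods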
	I1204 21:38:31.951446   83039 start.go:246] waiting for cluster config update ...
	I1204 21:38:31.951456   83039 start.go:255] writing updated cluster config ...
	I1204 21:38:31.951682   83039 ssh_runner.go:195] Run: rm -f paused
	I1204 21:38:32.001336   83039 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1204 21:38:32.002760   83039 out.go:177] * Done! kubectl is now configured to use "newest-cni-594114" cluster and "default" namespace by default
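With the restart finished, the host kubeconfig now targets the repaired cluster (host kubectl 1.31.3 against a v1.31.2 control plane, minor skew 0 as logged). A final sanity check from the host would be (a sketch):

	kubectl config current-context   # newest-cni-594114
	kubectl get nodes -o wide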
	
	
	==> CRI-O <==
	Dec 04 21:38:54 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:38:54.945964388Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348334945938607,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=027bdc91-2bae-4195-a5cf-89bf9e1eea69 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:38:54 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:38:54.946448615Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b7bf9141-5bec-46a1-97cb-3994f0b480f0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:38:54 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:38:54.946497971Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b7bf9141-5bec-46a1-97cb-3994f0b480f0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:38:54 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:38:54.946700672Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:af3eab35b327df56d0b9adc9cc015a61fc7208bc3a2a17daa9616744bb06dda4,PodSandboxId:e98dddcdd6df6a1723043e75f83e721b5c770087066f00ee76a708d4e7943533,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733347341089628235,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aac88490-a422-4889-bff4-b180638846cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a7a1c9e3c85a6639f5c80060b2e8bdc36cec8cd9bb901eeb6422027cee9cb9d,PodSandboxId:b4dfa190fa76c60b53052e06887b758c86afa34d5b0d314a142e729955f170c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733347340599032626,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4jmcl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8d193d2-0374-43a5-addd-96cdee963cc9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297685b8e381cf79cdfb5b72fbd7255b3be356206edab75a1d7b64b3e623876f,PodSandboxId:5d685138375a1126e356d5eff0828a35c11abe804626d57bdb7c83beab274604,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733347340425692275,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tzhgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: aafae17b-5a47-4a70-bc80-94cbbca8fe38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdc56ecdf83e343318645562bd40f06c9f9227a2f2602338236ae686ab4dded6,PodSandboxId:7735f3c0adf975bd2acd1314e1a07b2beaafb804dcd939e0e3ce492c40aba1ba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1733347339465427901,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hclwt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eef6c093-2186-437b-9a13-c8bafbcb4f78,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c7302ea43e0267d253344635d773d37674ec7f2e4ba6a8dca72d9587a2d6509,PodSandboxId:d83f071b99d12b587b4c154184e04074777a233d8364e3a2a3a469d706e661db,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733347328647484457,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-439360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b844ee0f7c72991de7f25ef1127420f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64491d8a2a16510237d719e522c5e4524c22bb1a2ecfa263012dc120a3972ddf,PodSandboxId:6b1ae26a6157ab1e9d4f07bb844ed79c5a36298e4868f9f9841d1df13ca38a4a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733347328611552822,Labels:map[string]str
ing{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-439360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6619af53a575347ee4090aa09ff02577,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00b75c8d0ab80b666442d06e93bbf812ef4957999d26b968b9f4a2d10e74d617,PodSandboxId:47f4c37838aa1e38ab470f7d42cecd139b1481b5e0245991cc33e6ab5c143b81,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733347328555272713,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-439360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e08b7ba36a6756c31ffcb3d2a3e57be,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8779528fd3a8e48736f80b3447158737acd31f3a41b1df1731450f7833f3130b,PodSandboxId:07bb53eab60e2439c9149e8d42131068a84a531573873eac7fd7d8b26962d9e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733347328544280300,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-439360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 333e66bdb021280ce494c1aae508f5e6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb96ebc2d974c72758d07c7875d98a752533ced5eb54b1c7f4ca0c53095be5aa,PodSandboxId:5d3d425a253a991f07d63e416babb3d01f15be3ca34e0e782076be57a2b7329c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733347040943549942,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-439360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 333e66bdb021280ce494c1aae508f5e6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b7bf9141-5bec-46a1-97cb-3994f0b480f0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:38:54 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:38:54.981999156Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dba6d100-7c15-4161-aa0f-cdde11ef12b5 name=/runtime.v1.RuntimeService/Version
	Dec 04 21:38:54 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:38:54.982072581Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dba6d100-7c15-4161-aa0f-cdde11ef12b5 name=/runtime.v1.RuntimeService/Version
	Dec 04 21:38:54 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:38:54.983295144Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b2645313-6e4f-4c34-917c-462487b9013f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:38:54 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:38:54.983703128Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348334983679371,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b2645313-6e4f-4c34-917c-462487b9013f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:38:54 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:38:54.984588648Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5147cc8e-7656-4f2f-9d2b-7a104031c36b name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:38:54 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:38:54.984644282Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5147cc8e-7656-4f2f-9d2b-7a104031c36b name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:38:54 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:38:54.984862783Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:af3eab35b327df56d0b9adc9cc015a61fc7208bc3a2a17daa9616744bb06dda4,PodSandboxId:e98dddcdd6df6a1723043e75f83e721b5c770087066f00ee76a708d4e7943533,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733347341089628235,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aac88490-a422-4889-bff4-b180638846cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a7a1c9e3c85a6639f5c80060b2e8bdc36cec8cd9bb901eeb6422027cee9cb9d,PodSandboxId:b4dfa190fa76c60b53052e06887b758c86afa34d5b0d314a142e729955f170c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733347340599032626,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4jmcl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8d193d2-0374-43a5-addd-96cdee963cc9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297685b8e381cf79cdfb5b72fbd7255b3be356206edab75a1d7b64b3e623876f,PodSandboxId:5d685138375a1126e356d5eff0828a35c11abe804626d57bdb7c83beab274604,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733347340425692275,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tzhgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: aafae17b-5a47-4a70-bc80-94cbbca8fe38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdc56ecdf83e343318645562bd40f06c9f9227a2f2602338236ae686ab4dded6,PodSandboxId:7735f3c0adf975bd2acd1314e1a07b2beaafb804dcd939e0e3ce492c40aba1ba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1733347339465427901,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hclwt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eef6c093-2186-437b-9a13-c8bafbcb4f78,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c7302ea43e0267d253344635d773d37674ec7f2e4ba6a8dca72d9587a2d6509,PodSandboxId:d83f071b99d12b587b4c154184e04074777a233d8364e3a2a3a469d706e661db,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733347328647484457,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-439360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b844ee0f7c72991de7f25ef1127420f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64491d8a2a16510237d719e522c5e4524c22bb1a2ecfa263012dc120a3972ddf,PodSandboxId:6b1ae26a6157ab1e9d4f07bb844ed79c5a36298e4868f9f9841d1df13ca38a4a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733347328611552822,Labels:map[string]str
ing{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-439360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6619af53a575347ee4090aa09ff02577,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00b75c8d0ab80b666442d06e93bbf812ef4957999d26b968b9f4a2d10e74d617,PodSandboxId:47f4c37838aa1e38ab470f7d42cecd139b1481b5e0245991cc33e6ab5c143b81,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733347328555272713,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-439360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e08b7ba36a6756c31ffcb3d2a3e57be,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8779528fd3a8e48736f80b3447158737acd31f3a41b1df1731450f7833f3130b,PodSandboxId:07bb53eab60e2439c9149e8d42131068a84a531573873eac7fd7d8b26962d9e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733347328544280300,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-439360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 333e66bdb021280ce494c1aae508f5e6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb96ebc2d974c72758d07c7875d98a752533ced5eb54b1c7f4ca0c53095be5aa,PodSandboxId:5d3d425a253a991f07d63e416babb3d01f15be3ca34e0e782076be57a2b7329c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733347040943549942,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-439360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 333e66bdb021280ce494c1aae508f5e6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5147cc8e-7656-4f2f-9d2b-7a104031c36b name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:38:55 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:38:55.021384467Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ca208185-d49b-4fd0-9bf3-d5c1fc24b4a2 name=/runtime.v1.RuntimeService/Version
	Dec 04 21:38:55 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:38:55.021454222Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ca208185-d49b-4fd0-9bf3-d5c1fc24b4a2 name=/runtime.v1.RuntimeService/Version
	Dec 04 21:38:55 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:38:55.022830080Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5ecaff8e-5163-4a86-a8c3-eee18421085c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:38:55 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:38:55.023358829Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348335023335359,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5ecaff8e-5163-4a86-a8c3-eee18421085c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:38:55 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:38:55.023836161Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b0d6c673-8e08-4dbf-ad44-6cbda5e6efa9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:38:55 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:38:55.023889354Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b0d6c673-8e08-4dbf-ad44-6cbda5e6efa9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:38:55 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:38:55.024118979Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:af3eab35b327df56d0b9adc9cc015a61fc7208bc3a2a17daa9616744bb06dda4,PodSandboxId:e98dddcdd6df6a1723043e75f83e721b5c770087066f00ee76a708d4e7943533,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733347341089628235,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aac88490-a422-4889-bff4-b180638846cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a7a1c9e3c85a6639f5c80060b2e8bdc36cec8cd9bb901eeb6422027cee9cb9d,PodSandboxId:b4dfa190fa76c60b53052e06887b758c86afa34d5b0d314a142e729955f170c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733347340599032626,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4jmcl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8d193d2-0374-43a5-addd-96cdee963cc9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297685b8e381cf79cdfb5b72fbd7255b3be356206edab75a1d7b64b3e623876f,PodSandboxId:5d685138375a1126e356d5eff0828a35c11abe804626d57bdb7c83beab274604,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733347340425692275,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tzhgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: aafae17b-5a47-4a70-bc80-94cbbca8fe38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdc56ecdf83e343318645562bd40f06c9f9227a2f2602338236ae686ab4dded6,PodSandboxId:7735f3c0adf975bd2acd1314e1a07b2beaafb804dcd939e0e3ce492c40aba1ba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1733347339465427901,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hclwt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eef6c093-2186-437b-9a13-c8bafbcb4f78,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c7302ea43e0267d253344635d773d37674ec7f2e4ba6a8dca72d9587a2d6509,PodSandboxId:d83f071b99d12b587b4c154184e04074777a233d8364e3a2a3a469d706e661db,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733347328647484457,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-439360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b844ee0f7c72991de7f25ef1127420f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64491d8a2a16510237d719e522c5e4524c22bb1a2ecfa263012dc120a3972ddf,PodSandboxId:6b1ae26a6157ab1e9d4f07bb844ed79c5a36298e4868f9f9841d1df13ca38a4a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733347328611552822,Labels:map[string]str
ing{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-439360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6619af53a575347ee4090aa09ff02577,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00b75c8d0ab80b666442d06e93bbf812ef4957999d26b968b9f4a2d10e74d617,PodSandboxId:47f4c37838aa1e38ab470f7d42cecd139b1481b5e0245991cc33e6ab5c143b81,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733347328555272713,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-439360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e08b7ba36a6756c31ffcb3d2a3e57be,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8779528fd3a8e48736f80b3447158737acd31f3a41b1df1731450f7833f3130b,PodSandboxId:07bb53eab60e2439c9149e8d42131068a84a531573873eac7fd7d8b26962d9e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733347328544280300,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-439360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 333e66bdb021280ce494c1aae508f5e6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb96ebc2d974c72758d07c7875d98a752533ced5eb54b1c7f4ca0c53095be5aa,PodSandboxId:5d3d425a253a991f07d63e416babb3d01f15be3ca34e0e782076be57a2b7329c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733347040943549942,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-439360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 333e66bdb021280ce494c1aae508f5e6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b0d6c673-8e08-4dbf-ad44-6cbda5e6efa9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:38:55 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:38:55.053644103Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ecef4194-1529-43a5-8de4-1da8a9fdf9cd name=/runtime.v1.RuntimeService/Version
	Dec 04 21:38:55 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:38:55.053702384Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ecef4194-1529-43a5-8de4-1da8a9fdf9cd name=/runtime.v1.RuntimeService/Version
	Dec 04 21:38:55 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:38:55.054762058Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f61b7296-3e24-46e8-8881-aba86d11defa name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:38:55 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:38:55.055121658Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348335055101982,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f61b7296-3e24-46e8-8881-aba86d11defa name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:38:55 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:38:55.055612129Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b6217086-fac7-4b6d-b427-a9fa5d9513f5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:38:55 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:38:55.055658377Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b6217086-fac7-4b6d-b427-a9fa5d9513f5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:38:55 default-k8s-diff-port-439360 crio[721]: time="2024-12-04 21:38:55.055863136Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:af3eab35b327df56d0b9adc9cc015a61fc7208bc3a2a17daa9616744bb06dda4,PodSandboxId:e98dddcdd6df6a1723043e75f83e721b5c770087066f00ee76a708d4e7943533,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733347341089628235,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aac88490-a422-4889-bff4-b180638846cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a7a1c9e3c85a6639f5c80060b2e8bdc36cec8cd9bb901eeb6422027cee9cb9d,PodSandboxId:b4dfa190fa76c60b53052e06887b758c86afa34d5b0d314a142e729955f170c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733347340599032626,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4jmcl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8d193d2-0374-43a5-addd-96cdee963cc9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297685b8e381cf79cdfb5b72fbd7255b3be356206edab75a1d7b64b3e623876f,PodSandboxId:5d685138375a1126e356d5eff0828a35c11abe804626d57bdb7c83beab274604,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733347340425692275,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tzhgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: aafae17b-5a47-4a70-bc80-94cbbca8fe38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdc56ecdf83e343318645562bd40f06c9f9227a2f2602338236ae686ab4dded6,PodSandboxId:7735f3c0adf975bd2acd1314e1a07b2beaafb804dcd939e0e3ce492c40aba1ba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1733347339465427901,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hclwt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eef6c093-2186-437b-9a13-c8bafbcb4f78,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c7302ea43e0267d253344635d773d37674ec7f2e4ba6a8dca72d9587a2d6509,PodSandboxId:d83f071b99d12b587b4c154184e04074777a233d8364e3a2a3a469d706e661db,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733347328647484457,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-439360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b844ee0f7c72991de7f25ef1127420f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64491d8a2a16510237d719e522c5e4524c22bb1a2ecfa263012dc120a3972ddf,PodSandboxId:6b1ae26a6157ab1e9d4f07bb844ed79c5a36298e4868f9f9841d1df13ca38a4a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733347328611552822,Labels:map[string]str
ing{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-439360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6619af53a575347ee4090aa09ff02577,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00b75c8d0ab80b666442d06e93bbf812ef4957999d26b968b9f4a2d10e74d617,PodSandboxId:47f4c37838aa1e38ab470f7d42cecd139b1481b5e0245991cc33e6ab5c143b81,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733347328555272713,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-439360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e08b7ba36a6756c31ffcb3d2a3e57be,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8779528fd3a8e48736f80b3447158737acd31f3a41b1df1731450f7833f3130b,PodSandboxId:07bb53eab60e2439c9149e8d42131068a84a531573873eac7fd7d8b26962d9e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733347328544280300,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-439360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 333e66bdb021280ce494c1aae508f5e6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb96ebc2d974c72758d07c7875d98a752533ced5eb54b1c7f4ca0c53095be5aa,PodSandboxId:5d3d425a253a991f07d63e416babb3d01f15be3ca34e0e782076be57a2b7329c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733347040943549942,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-439360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 333e66bdb021280ce494c1aae508f5e6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b6217086-fac7-4b6d-b427-a9fa5d9513f5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	af3eab35b327d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   e98dddcdd6df6       storage-provisioner
	2a7a1c9e3c85a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 minutes ago      Running             coredns                   0                   b4dfa190fa76c       coredns-7c65d6cfc9-4jmcl
	297685b8e381c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 minutes ago      Running             coredns                   0                   5d685138375a1       coredns-7c65d6cfc9-tzhgh
	bdc56ecdf83e3       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   16 minutes ago      Running             kube-proxy                0                   7735f3c0adf97       kube-proxy-hclwt
	4c7302ea43e02       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   16 minutes ago      Running             etcd                      2                   d83f071b99d12       etcd-default-k8s-diff-port-439360
	64491d8a2a165       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   16 minutes ago      Running             kube-controller-manager   2                   6b1ae26a6157a       kube-controller-manager-default-k8s-diff-port-439360
	00b75c8d0ab80       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   16 minutes ago      Running             kube-scheduler            2                   47f4c37838aa1       kube-scheduler-default-k8s-diff-port-439360
	8779528fd3a8e       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   16 minutes ago      Running             kube-apiserver            2                   07bb53eab60e2       kube-apiserver-default-k8s-diff-port-439360
	fb96ebc2d974c       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   21 minutes ago      Exited              kube-apiserver            1                   5d3d425a253a9       kube-apiserver-default-k8s-diff-port-439360
	
	
	==> coredns [297685b8e381cf79cdfb5b72fbd7255b3be356206edab75a1d7b64b3e623876f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [2a7a1c9e3c85a6639f5c80060b2e8bdc36cec8cd9bb901eeb6422027cee9cb9d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-439360
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-439360
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59
	                    minikube.k8s.io/name=default-k8s-diff-port-439360
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_04T21_22_14_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 21:22:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-439360
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 21:38:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 21:37:41 +0000   Wed, 04 Dec 2024 21:22:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 21:37:41 +0000   Wed, 04 Dec 2024 21:22:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 21:37:41 +0000   Wed, 04 Dec 2024 21:22:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 21:37:41 +0000   Wed, 04 Dec 2024 21:22:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.171
	  Hostname:    default-k8s-diff-port-439360
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 92c5859abe734eb49a48473826e74840
	  System UUID:                92c5859a-be73-4eb4-9a48-473826e74840
	  Boot ID:                    160de329-24a2-43ba-a321-6907754d7911
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-4jmcl                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7c65d6cfc9-tzhgh                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-default-k8s-diff-port-439360                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-default-k8s-diff-port-439360             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-439360    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-hclwt                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-default-k8s-diff-port-439360             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-6867b74b74-v88hj                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 16m   kube-proxy       
	  Normal  Starting                 16m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m   kubelet          Node default-k8s-diff-port-439360 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m   kubelet          Node default-k8s-diff-port-439360 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m   kubelet          Node default-k8s-diff-port-439360 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m   node-controller  Node default-k8s-diff-port-439360 event: Registered Node default-k8s-diff-port-439360 in Controller
	
	
	==> dmesg <==
	[  +0.056090] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041127] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Dec 4 21:17] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.003892] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.635306] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.140763] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.066784] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.076258] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.213791] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +0.112861] systemd-fstab-generator[682]: Ignoring "noauto" option for root device
	[  +0.292620] systemd-fstab-generator[712]: Ignoring "noauto" option for root device
	[  +4.080350] systemd-fstab-generator[803]: Ignoring "noauto" option for root device
	[  +1.613168] systemd-fstab-generator[923]: Ignoring "noauto" option for root device
	[  +0.067445] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.540578] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.969077] kauditd_printk_skb: 85 callbacks suppressed
	[Dec 4 21:22] systemd-fstab-generator[2634]: Ignoring "noauto" option for root device
	[  +0.077657] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.999173] systemd-fstab-generator[2952]: Ignoring "noauto" option for root device
	[  +0.101739] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.800897] systemd-fstab-generator[3075]: Ignoring "noauto" option for root device
	[  +0.085626] kauditd_printk_skb: 12 callbacks suppressed
	[Dec 4 21:23] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [4c7302ea43e0267d253344635d773d37674ec7f2e4ba6a8dca72d9587a2d6509] <==
	{"level":"info","ts":"2024-12-04T21:22:09.238464Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-04T21:22:09.240533Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-04T21:22:09.243080Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-04T21:22:09.245386Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.171:2379"}
	{"level":"info","ts":"2024-12-04T21:22:09.247508Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-04T21:22:09.249265Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d60dacabf64a723e","local-member-id":"a784f2475f6ae727","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-04T21:22:09.249589Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-04T21:22:09.249652Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-04T21:22:09.250203Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-04T21:22:09.252401Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-04T21:22:09.268204Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-04T21:22:09.270199Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-04T21:32:09.743392Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":683}
	{"level":"info","ts":"2024-12-04T21:32:09.752018Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":683,"took":"8.280013ms","hash":1436228631,"current-db-size-bytes":2293760,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2293760,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-12-04T21:32:09.752114Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1436228631,"revision":683,"compact-revision":-1}
	{"level":"warn","ts":"2024-12-04T21:36:53.433502Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"312.492235ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-04T21:36:53.434011Z","caller":"traceutil/trace.go:171","msg":"trace[1456293795] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1156; }","duration":"313.102511ms","start":"2024-12-04T21:36:53.120871Z","end":"2024-12-04T21:36:53.433974Z","steps":["trace[1456293795] 'range keys from in-memory index tree'  (duration: 312.466516ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T21:36:53.434628Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.056619ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16656444008792571005 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.50.171\" mod_revision:1149 > success:<request_put:<key:\"/registry/masterleases/192.168.50.171\" value_size:67 lease:7433071971937795195 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.171\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-12-04T21:36:53.434747Z","caller":"traceutil/trace.go:171","msg":"trace[1581886594] transaction","detail":"{read_only:false; response_revision:1157; number_of_response:1; }","duration":"382.349148ms","start":"2024-12-04T21:36:53.052377Z","end":"2024-12-04T21:36:53.434726Z","steps":["trace[1581886594] 'process raft request'  (duration: 124.147207ms)","trace[1581886594] 'compare'  (duration: 256.799096ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-04T21:36:53.434804Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-04T21:36:53.052360Z","time spent":"382.415515ms","remote":"127.0.0.1:55880","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":120,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.50.171\" mod_revision:1149 > success:<request_put:<key:\"/registry/masterleases/192.168.50.171\" value_size:67 lease:7433071971937795195 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.171\" > >"}
	{"level":"info","ts":"2024-12-04T21:36:53.824738Z","caller":"traceutil/trace.go:171","msg":"trace[120890188] transaction","detail":"{read_only:false; response_revision:1158; number_of_response:1; }","duration":"124.442874ms","start":"2024-12-04T21:36:53.700278Z","end":"2024-12-04T21:36:53.824721Z","steps":["trace[120890188] 'process raft request'  (duration: 124.286166ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-04T21:37:09.749837Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":926}
	{"level":"info","ts":"2024-12-04T21:37:09.754013Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":926,"took":"3.700011ms","hash":1708229537,"current-db-size-bytes":2293760,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1581056,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-12-04T21:37:09.754083Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1708229537,"revision":926,"compact-revision":683}
	{"level":"warn","ts":"2024-12-04T21:37:48.813749Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"384.266292ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16656444008792571344 > lease_revoke:<id:672793938d43bd72>","response":"size:29"}
	
	
	==> kernel <==
	 21:38:55 up 21 min,  0 users,  load average: 0.23, 0.18, 0.17
	Linux default-k8s-diff-port-439360 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8779528fd3a8e48736f80b3447158737acd31f3a41b1df1731450f7833f3130b] <==
	I1204 21:35:12.283333       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1204 21:35:12.283445       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1204 21:37:11.280667       1 handler_proxy.go:99] no RequestInfo found in the context
	E1204 21:37:11.280879       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1204 21:37:12.282644       1 handler_proxy.go:99] no RequestInfo found in the context
	E1204 21:37:12.282709       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1204 21:37:12.282825       1 handler_proxy.go:99] no RequestInfo found in the context
	E1204 21:37:12.282950       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1204 21:37:12.283855       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1204 21:37:12.285002       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1204 21:38:12.284274       1 handler_proxy.go:99] no RequestInfo found in the context
	E1204 21:38:12.284331       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1204 21:38:12.285554       1 handler_proxy.go:99] no RequestInfo found in the context
	E1204 21:38:12.285637       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1204 21:38:12.285673       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1204 21:38:12.286953       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [fb96ebc2d974c72758d07c7875d98a752533ced5eb54b1c7f4ca0c53095be5aa] <==
	W1204 21:22:01.234017       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:01.252515       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:01.253855       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:01.337858       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:01.363545       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:01.383740       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:01.396651       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:01.412320       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:01.453990       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:01.467137       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:01.503860       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:01.643814       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:01.679843       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:01.682387       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:01.702494       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:01.782705       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:01.828876       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:02.036317       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:05.434081       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:05.634515       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:05.781636       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:05.798424       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:05.938879       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:06.036918       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:06.232884       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [64491d8a2a16510237d719e522c5e4524c22bb1a2ecfa263012dc120a3972ddf] <==
	E1204 21:33:48.353047       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:33:48.910784       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:34:18.359438       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:34:18.918221       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:34:48.367844       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:34:48.928046       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:35:18.375391       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:35:18.937390       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:35:48.382508       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:35:48.945550       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:36:18.390378       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:36:18.952898       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:36:48.397753       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:36:48.962409       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:37:18.405805       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:37:18.972457       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1204 21:37:41.582290       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-439360"
	E1204 21:37:48.411507       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:37:48.980067       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:38:18.419540       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:38:18.987961       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1204 21:38:38.745581       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="324.49µs"
	E1204 21:38:48.428256       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:38:48.997656       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1204 21:38:50.742408       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="147.819µs"
	
	
	==> kube-proxy [bdc56ecdf83e343318645562bd40f06c9f9227a2f2602338236ae686ab4dded6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1204 21:22:19.871201       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1204 21:22:19.913719       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.171"]
	E1204 21:22:19.913806       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1204 21:22:19.976137       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1204 21:22:19.976232       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1204 21:22:19.976267       1 server_linux.go:169] "Using iptables Proxier"
	I1204 21:22:19.981289       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1204 21:22:19.981540       1 server.go:483] "Version info" version="v1.31.2"
	I1204 21:22:19.981563       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 21:22:19.982878       1 config.go:199] "Starting service config controller"
	I1204 21:22:19.982920       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1204 21:22:19.982948       1 config.go:105] "Starting endpoint slice config controller"
	I1204 21:22:19.982951       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1204 21:22:19.983433       1 config.go:328] "Starting node config controller"
	I1204 21:22:19.983444       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1204 21:22:20.083234       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1204 21:22:20.083251       1 shared_informer.go:320] Caches are synced for service config
	I1204 21:22:20.083497       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [00b75c8d0ab80b666442d06e93bbf812ef4957999d26b968b9f4a2d10e74d617] <==
	E1204 21:22:11.338485       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 21:22:11.338398       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1204 21:22:11.338500       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E1204 21:22:11.338550       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1204 21:22:11.338572       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1204 21:22:11.338651       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 21:22:11.338818       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1204 21:22:11.338907       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 21:22:12.166356       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1204 21:22:12.166406       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 21:22:12.175786       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1204 21:22:12.175839       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 21:22:12.211784       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1204 21:22:12.211837       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1204 21:22:12.220728       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1204 21:22:12.220907       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 21:22:12.393195       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1204 21:22:12.393375       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1204 21:22:12.404830       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1204 21:22:12.404947       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 21:22:12.448613       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1204 21:22:12.448736       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 21:22:12.789031       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1204 21:22:12.789121       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1204 21:22:15.431843       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 04 21:37:55 default-k8s-diff-port-439360 kubelet[2959]: E1204 21:37:55.730105    2959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-v88hj" podUID="9b6c696c-e110-4d53-98c9-41069407b45b"
	Dec 04 21:38:04 default-k8s-diff-port-439360 kubelet[2959]: E1204 21:38:04.032671    2959 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348284032279634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:38:04 default-k8s-diff-port-439360 kubelet[2959]: E1204 21:38:04.032878    2959 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348284032279634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:38:10 default-k8s-diff-port-439360 kubelet[2959]: E1204 21:38:10.727511    2959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-v88hj" podUID="9b6c696c-e110-4d53-98c9-41069407b45b"
	Dec 04 21:38:13 default-k8s-diff-port-439360 kubelet[2959]: E1204 21:38:13.778861    2959 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 04 21:38:13 default-k8s-diff-port-439360 kubelet[2959]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 04 21:38:13 default-k8s-diff-port-439360 kubelet[2959]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 04 21:38:13 default-k8s-diff-port-439360 kubelet[2959]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 04 21:38:13 default-k8s-diff-port-439360 kubelet[2959]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 04 21:38:14 default-k8s-diff-port-439360 kubelet[2959]: E1204 21:38:14.034074    2959 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348294033708914,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:38:14 default-k8s-diff-port-439360 kubelet[2959]: E1204 21:38:14.034101    2959 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348294033708914,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:38:24 default-k8s-diff-port-439360 kubelet[2959]: E1204 21:38:24.035671    2959 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348304035355158,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:38:24 default-k8s-diff-port-439360 kubelet[2959]: E1204 21:38:24.036174    2959 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348304035355158,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:38:24 default-k8s-diff-port-439360 kubelet[2959]: E1204 21:38:24.756218    2959 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 04 21:38:24 default-k8s-diff-port-439360 kubelet[2959]: E1204 21:38:24.756297    2959 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 04 21:38:24 default-k8s-diff-port-439360 kubelet[2959]: E1204 21:38:24.756512    2959 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nx4bq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-v88hj_kube-system(9b6c696c-e110-4d53-98c9-41069407b45b): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Dec 04 21:38:24 default-k8s-diff-port-439360 kubelet[2959]: E1204 21:38:24.757910    2959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-v88hj" podUID="9b6c696c-e110-4d53-98c9-41069407b45b"
	Dec 04 21:38:34 default-k8s-diff-port-439360 kubelet[2959]: E1204 21:38:34.038326    2959 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348314037808918,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:38:34 default-k8s-diff-port-439360 kubelet[2959]: E1204 21:38:34.038965    2959 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348314037808918,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:38:38 default-k8s-diff-port-439360 kubelet[2959]: E1204 21:38:38.728124    2959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-v88hj" podUID="9b6c696c-e110-4d53-98c9-41069407b45b"
	Dec 04 21:38:44 default-k8s-diff-port-439360 kubelet[2959]: E1204 21:38:44.040949    2959 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348324040642662,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:38:44 default-k8s-diff-port-439360 kubelet[2959]: E1204 21:38:44.041481    2959 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348324040642662,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:38:50 default-k8s-diff-port-439360 kubelet[2959]: E1204 21:38:50.727980    2959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-v88hj" podUID="9b6c696c-e110-4d53-98c9-41069407b45b"
	Dec 04 21:38:54 default-k8s-diff-port-439360 kubelet[2959]: E1204 21:38:54.043937    2959 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348334043542646,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:38:54 default-k8s-diff-port-439360 kubelet[2959]: E1204 21:38:54.043972    2959 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348334043542646,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [af3eab35b327df56d0b9adc9cc015a61fc7208bc3a2a17daa9616744bb06dda4] <==
	I1204 21:22:21.203588       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1204 21:22:21.222683       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1204 21:22:21.222752       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1204 21:22:21.242769       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1204 21:22:21.242935       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-439360_1bb6cbbc-d21a-4bd3-a82d-d9cedbb2e283!
	I1204 21:22:21.244328       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"36659ab1-e91c-46ee-9596-ccf7a2652af3", APIVersion:"v1", ResourceVersion:"398", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-439360_1bb6cbbc-d21a-4bd3-a82d-d9cedbb2e283 became leader
	I1204 21:22:21.343729       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-439360_1bb6cbbc-d21a-4bd3-a82d-d9cedbb2e283!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-439360 -n default-k8s-diff-port-439360
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-439360 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-v88hj
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-439360 describe pod metrics-server-6867b74b74-v88hj
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-439360 describe pod metrics-server-6867b74b74-v88hj: exit status 1 (60.823565ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-v88hj" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-439360 describe pod metrics-server-6867b74b74-v88hj: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (443.11s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (305.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-534766 -n no-preload-534766
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-12-04 21:37:06.025338238 +0000 UTC m=+6275.125066662
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-534766 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-534766 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.318µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-534766 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-534766 -n no-preload-534766
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-534766 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-534766 logs -n 25: (1.409725967s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-272234 sudo find                             | bridge-272234                | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo crio                             | bridge-272234                | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-272234                                       | bridge-272234                | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	| start   | -p embed-certs-566991                                  | embed-certs-566991           | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p pause-998149                                        | pause-998149                 | jenkins | v1.34.0 | 04 Dec 24 21:08 UTC | 04 Dec 24 21:08 UTC |
	| delete  | -p                                                     | disable-driver-mounts-455559 | jenkins | v1.34.0 | 04 Dec 24 21:08 UTC | 04 Dec 24 21:08 UTC |
	|         | disable-driver-mounts-455559                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-439360 | jenkins | v1.34.0 | 04 Dec 24 21:08 UTC | 04 Dec 24 21:10 UTC |
	|         | default-k8s-diff-port-439360                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-534766             | no-preload-534766            | jenkins | v1.34.0 | 04 Dec 24 21:08 UTC | 04 Dec 24 21:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-534766                                   | no-preload-534766            | jenkins | v1.34.0 | 04 Dec 24 21:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-566991            | embed-certs-566991           | jenkins | v1.34.0 | 04 Dec 24 21:09 UTC | 04 Dec 24 21:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-566991                                  | embed-certs-566991           | jenkins | v1.34.0 | 04 Dec 24 21:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-439360  | default-k8s-diff-port-439360 | jenkins | v1.34.0 | 04 Dec 24 21:10 UTC | 04 Dec 24 21:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-439360 | jenkins | v1.34.0 | 04 Dec 24 21:10 UTC |                     |
	|         | default-k8s-diff-port-439360                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-082859        | old-k8s-version-082859       | jenkins | v1.34.0 | 04 Dec 24 21:10 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-534766                  | no-preload-534766            | jenkins | v1.34.0 | 04 Dec 24 21:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-534766                                   | no-preload-534766            | jenkins | v1.34.0 | 04 Dec 24 21:11 UTC | 04 Dec 24 21:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-566991                 | embed-certs-566991           | jenkins | v1.34.0 | 04 Dec 24 21:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-566991                                  | embed-certs-566991           | jenkins | v1.34.0 | 04 Dec 24 21:11 UTC | 04 Dec 24 21:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-082859                              | old-k8s-version-082859       | jenkins | v1.34.0 | 04 Dec 24 21:12 UTC | 04 Dec 24 21:12 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-082859             | old-k8s-version-082859       | jenkins | v1.34.0 | 04 Dec 24 21:12 UTC | 04 Dec 24 21:12 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-082859                              | old-k8s-version-082859       | jenkins | v1.34.0 | 04 Dec 24 21:12 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-439360       | default-k8s-diff-port-439360 | jenkins | v1.34.0 | 04 Dec 24 21:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-439360 | jenkins | v1.34.0 | 04 Dec 24 21:13 UTC | 04 Dec 24 21:22 UTC |
	|         | default-k8s-diff-port-439360                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-082859                              | old-k8s-version-082859       | jenkins | v1.34.0 | 04 Dec 24 21:36 UTC | 04 Dec 24 21:36 UTC |
	| start   | -p newest-cni-594114 --memory=2200 --alsologtostderr   | newest-cni-594114            | jenkins | v1.34.0 | 04 Dec 24 21:36 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
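	For reference, the last profile command recorded in the table above (the newest-cni-594114 start that the "Last Start" log below covers) can be replayed locally with the same flags. This is only a sketch, assuming a host with libvirt/KVM and the kvm2 driver installed; it is not part of the recorded run:
	
	    minikube start -p newest-cni-594114 --memory=2200 --alsologtostderr \
	      --wait=apiserver,system_pods,default_sa \
	      --feature-gates ServerSideApply=true \
	      --network-plugin=cni \
	      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	      --driver=kvm2 --container-runtime=crio \
	      --kubernetes-version=v1.31.2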
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 21:36:19
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 21:36:19.713815   82304 out.go:345] Setting OutFile to fd 1 ...
	I1204 21:36:19.713942   82304 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 21:36:19.713953   82304 out.go:358] Setting ErrFile to fd 2...
	I1204 21:36:19.713960   82304 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 21:36:19.714140   82304 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19985-10581/.minikube/bin
	I1204 21:36:19.714740   82304 out.go:352] Setting JSON to false
	I1204 21:36:19.715768   82304 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":8330,"bootTime":1733339850,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 21:36:19.715832   82304 start.go:139] virtualization: kvm guest
	I1204 21:36:19.718086   82304 out.go:177] * [newest-cni-594114] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1204 21:36:19.719468   82304 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 21:36:19.719509   82304 notify.go:220] Checking for updates...
	I1204 21:36:19.721957   82304 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 21:36:19.723397   82304 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:36:19.724669   82304 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 21:36:19.725744   82304 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1204 21:36:19.726908   82304 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 21:36:19.728599   82304 config.go:182] Loaded profile config "default-k8s-diff-port-439360": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:36:19.728689   82304 config.go:182] Loaded profile config "embed-certs-566991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:36:19.728779   82304 config.go:182] Loaded profile config "no-preload-534766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:36:19.728897   82304 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 21:36:19.767183   82304 out.go:177] * Using the kvm2 driver based on user configuration
	I1204 21:36:19.768350   82304 start.go:297] selected driver: kvm2
	I1204 21:36:19.768367   82304 start.go:901] validating driver "kvm2" against <nil>
	I1204 21:36:19.768381   82304 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 21:36:19.769106   82304 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 21:36:19.769223   82304 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19985-10581/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1204 21:36:19.785007   82304 install.go:137] /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1204 21:36:19.785052   82304 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W1204 21:36:19.785104   82304 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1204 21:36:19.785311   82304 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1204 21:36:19.785338   82304 cni.go:84] Creating CNI manager for ""
	I1204 21:36:19.785380   82304 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:36:19.785392   82304 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1204 21:36:19.785435   82304 start.go:340] cluster config:
	{Name:newest-cni-594114 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-594114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:36:19.785556   82304 iso.go:125] acquiring lock: {Name:mk5fb0f3f6da76e6cd812291a551e1592ef2c232 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 21:36:19.787600   82304 out.go:177] * Starting "newest-cni-594114" primary control-plane node in "newest-cni-594114" cluster
	I1204 21:36:19.788762   82304 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 21:36:19.788797   82304 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1204 21:36:19.788818   82304 cache.go:56] Caching tarball of preloaded images
	I1204 21:36:19.788890   82304 preload.go:172] Found /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 21:36:19.788901   82304 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 21:36:19.788987   82304 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/newest-cni-594114/config.json ...
	I1204 21:36:19.789003   82304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/newest-cni-594114/config.json: {Name:mke6e053f0b99e6aaf684627bba45207f7adb35b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:36:19.789137   82304 start.go:360] acquireMachinesLock for newest-cni-594114: {Name:mkf124e8b45170ae95981b24944344de6899c5b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 21:36:19.789165   82304 start.go:364] duration metric: took 15.964µs to acquireMachinesLock for "newest-cni-594114"
	I1204 21:36:19.789181   82304 start.go:93] Provisioning new machine with config: &{Name:newest-cni-594114 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.2 ClusterName:newest-cni-594114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 21:36:19.789229   82304 start.go:125] createHost starting for "" (driver="kvm2")
	I1204 21:36:19.790721   82304 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 21:36:19.790867   82304 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:36:19.790907   82304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:36:19.806060   82304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33775
	I1204 21:36:19.806535   82304 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:36:19.807150   82304 main.go:141] libmachine: Using API Version  1
	I1204 21:36:19.807173   82304 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:36:19.807566   82304 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:36:19.807786   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetMachineName
	I1204 21:36:19.807950   82304 main.go:141] libmachine: (newest-cni-594114) Calling .DriverName
	I1204 21:36:19.808110   82304 start.go:159] libmachine.API.Create for "newest-cni-594114" (driver="kvm2")
	I1204 21:36:19.808143   82304 client.go:168] LocalClient.Create starting
	I1204 21:36:19.808177   82304 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem
	I1204 21:36:19.808215   82304 main.go:141] libmachine: Decoding PEM data...
	I1204 21:36:19.808241   82304 main.go:141] libmachine: Parsing certificate...
	I1204 21:36:19.808317   82304 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem
	I1204 21:36:19.808351   82304 main.go:141] libmachine: Decoding PEM data...
	I1204 21:36:19.808374   82304 main.go:141] libmachine: Parsing certificate...
	I1204 21:36:19.808405   82304 main.go:141] libmachine: Running pre-create checks...
	I1204 21:36:19.808418   82304 main.go:141] libmachine: (newest-cni-594114) Calling .PreCreateCheck
	I1204 21:36:19.808833   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetConfigRaw
	I1204 21:36:19.809234   82304 main.go:141] libmachine: Creating machine...
	I1204 21:36:19.809250   82304 main.go:141] libmachine: (newest-cni-594114) Calling .Create
	I1204 21:36:19.809434   82304 main.go:141] libmachine: (newest-cni-594114) Creating KVM machine...
	I1204 21:36:19.810704   82304 main.go:141] libmachine: (newest-cni-594114) DBG | found existing default KVM network
	I1204 21:36:19.812075   82304 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:36:19.811926   82327 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:df:2d:f5} reservation:<nil>}
	I1204 21:36:19.812885   82304 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:36:19.812831   82327 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:ef:7f:34} reservation:<nil>}
	I1204 21:36:19.813708   82304 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:36:19.813636   82327 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:3a:39:28} reservation:<nil>}
	I1204 21:36:19.814884   82304 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:36:19.814807   82327 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00028b8a0}
	I1204 21:36:19.814951   82304 main.go:141] libmachine: (newest-cni-594114) DBG | created network xml: 
	I1204 21:36:19.814973   82304 main.go:141] libmachine: (newest-cni-594114) DBG | <network>
	I1204 21:36:19.814992   82304 main.go:141] libmachine: (newest-cni-594114) DBG |   <name>mk-newest-cni-594114</name>
	I1204 21:36:19.815010   82304 main.go:141] libmachine: (newest-cni-594114) DBG |   <dns enable='no'/>
	I1204 21:36:19.815026   82304 main.go:141] libmachine: (newest-cni-594114) DBG |   
	I1204 21:36:19.815047   82304 main.go:141] libmachine: (newest-cni-594114) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I1204 21:36:19.815061   82304 main.go:141] libmachine: (newest-cni-594114) DBG |     <dhcp>
	I1204 21:36:19.815073   82304 main.go:141] libmachine: (newest-cni-594114) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I1204 21:36:19.815081   82304 main.go:141] libmachine: (newest-cni-594114) DBG |     </dhcp>
	I1204 21:36:19.815090   82304 main.go:141] libmachine: (newest-cni-594114) DBG |   </ip>
	I1204 21:36:19.815098   82304 main.go:141] libmachine: (newest-cni-594114) DBG |   
	I1204 21:36:19.815104   82304 main.go:141] libmachine: (newest-cni-594114) DBG | </network>
	I1204 21:36:19.815113   82304 main.go:141] libmachine: (newest-cni-594114) DBG | 
	I1204 21:36:19.820074   82304 main.go:141] libmachine: (newest-cni-594114) DBG | trying to create private KVM network mk-newest-cni-594114 192.168.72.0/24...
	I1204 21:36:19.903266   82304 main.go:141] libmachine: (newest-cni-594114) DBG | private KVM network mk-newest-cni-594114 192.168.72.0/24 created
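	The XML block logged just above is a standard libvirt network definition (isolated 192.168.72.0/24 subnet with DHCP and DNS disabled). minikube creates it through the libvirt API; purely as an illustrative manual equivalent, and assuming the XML had been saved to a hypothetical file mk-newest-cni-594114.xml, the same network could be created by hand with virsh:
	
	    # Define and start the private network, then confirm it is listed as active.
	    virsh --connect qemu:///system net-define mk-newest-cni-594114.xml
	    virsh --connect qemu:///system net-start mk-newest-cni-594114
	    virsh --connect qemu:///system net-list --all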
	I1204 21:36:19.903302   82304 main.go:141] libmachine: (newest-cni-594114) Setting up store path in /home/jenkins/minikube-integration/19985-10581/.minikube/machines/newest-cni-594114 ...
	I1204 21:36:19.903316   82304 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:36:19.903266   82327 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 21:36:19.903338   82304 main.go:141] libmachine: (newest-cni-594114) Building disk image from file:///home/jenkins/minikube-integration/19985-10581/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1204 21:36:19.903497   82304 main.go:141] libmachine: (newest-cni-594114) Downloading /home/jenkins/minikube-integration/19985-10581/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19985-10581/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1204 21:36:20.154442   82304 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:36:20.154325   82327 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/newest-cni-594114/id_rsa...
	I1204 21:36:20.255760   82304 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:36:20.255635   82327 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/newest-cni-594114/newest-cni-594114.rawdisk...
	I1204 21:36:20.255788   82304 main.go:141] libmachine: (newest-cni-594114) DBG | Writing magic tar header
	I1204 21:36:20.255805   82304 main.go:141] libmachine: (newest-cni-594114) DBG | Writing SSH key tar header
	I1204 21:36:20.255904   82304 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:36:20.255799   82327 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19985-10581/.minikube/machines/newest-cni-594114 ...
	I1204 21:36:20.255969   82304 main.go:141] libmachine: (newest-cni-594114) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/newest-cni-594114
	I1204 21:36:20.255988   82304 main.go:141] libmachine: (newest-cni-594114) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube/machines
	I1204 21:36:20.256003   82304 main.go:141] libmachine: (newest-cni-594114) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube/machines/newest-cni-594114 (perms=drwx------)
	I1204 21:36:20.256035   82304 main.go:141] libmachine: (newest-cni-594114) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube/machines (perms=drwxr-xr-x)
	I1204 21:36:20.256048   82304 main.go:141] libmachine: (newest-cni-594114) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 21:36:20.256059   82304 main.go:141] libmachine: (newest-cni-594114) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581/.minikube (perms=drwxr-xr-x)
	I1204 21:36:20.256074   82304 main.go:141] libmachine: (newest-cni-594114) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19985-10581
	I1204 21:36:20.256102   82304 main.go:141] libmachine: (newest-cni-594114) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1204 21:36:20.256116   82304 main.go:141] libmachine: (newest-cni-594114) Setting executable bit set on /home/jenkins/minikube-integration/19985-10581 (perms=drwxrwxr-x)
	I1204 21:36:20.256133   82304 main.go:141] libmachine: (newest-cni-594114) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1204 21:36:20.256145   82304 main.go:141] libmachine: (newest-cni-594114) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1204 21:36:20.256156   82304 main.go:141] libmachine: (newest-cni-594114) DBG | Checking permissions on dir: /home/jenkins
	I1204 21:36:20.256171   82304 main.go:141] libmachine: (newest-cni-594114) DBG | Checking permissions on dir: /home
	I1204 21:36:20.256191   82304 main.go:141] libmachine: (newest-cni-594114) DBG | Skipping /home - not owner
	I1204 21:36:20.256208   82304 main.go:141] libmachine: (newest-cni-594114) Creating domain...
	I1204 21:36:20.257414   82304 main.go:141] libmachine: (newest-cni-594114) define libvirt domain using xml: 
	I1204 21:36:20.257435   82304 main.go:141] libmachine: (newest-cni-594114) <domain type='kvm'>
	I1204 21:36:20.257445   82304 main.go:141] libmachine: (newest-cni-594114)   <name>newest-cni-594114</name>
	I1204 21:36:20.257456   82304 main.go:141] libmachine: (newest-cni-594114)   <memory unit='MiB'>2200</memory>
	I1204 21:36:20.257484   82304 main.go:141] libmachine: (newest-cni-594114)   <vcpu>2</vcpu>
	I1204 21:36:20.257508   82304 main.go:141] libmachine: (newest-cni-594114)   <features>
	I1204 21:36:20.257518   82304 main.go:141] libmachine: (newest-cni-594114)     <acpi/>
	I1204 21:36:20.257527   82304 main.go:141] libmachine: (newest-cni-594114)     <apic/>
	I1204 21:36:20.257534   82304 main.go:141] libmachine: (newest-cni-594114)     <pae/>
	I1204 21:36:20.257544   82304 main.go:141] libmachine: (newest-cni-594114)     
	I1204 21:36:20.257552   82304 main.go:141] libmachine: (newest-cni-594114)   </features>
	I1204 21:36:20.257582   82304 main.go:141] libmachine: (newest-cni-594114)   <cpu mode='host-passthrough'>
	I1204 21:36:20.257593   82304 main.go:141] libmachine: (newest-cni-594114)   
	I1204 21:36:20.257599   82304 main.go:141] libmachine: (newest-cni-594114)   </cpu>
	I1204 21:36:20.257610   82304 main.go:141] libmachine: (newest-cni-594114)   <os>
	I1204 21:36:20.257621   82304 main.go:141] libmachine: (newest-cni-594114)     <type>hvm</type>
	I1204 21:36:20.257633   82304 main.go:141] libmachine: (newest-cni-594114)     <boot dev='cdrom'/>
	I1204 21:36:20.257644   82304 main.go:141] libmachine: (newest-cni-594114)     <boot dev='hd'/>
	I1204 21:36:20.257656   82304 main.go:141] libmachine: (newest-cni-594114)     <bootmenu enable='no'/>
	I1204 21:36:20.257666   82304 main.go:141] libmachine: (newest-cni-594114)   </os>
	I1204 21:36:20.257674   82304 main.go:141] libmachine: (newest-cni-594114)   <devices>
	I1204 21:36:20.257686   82304 main.go:141] libmachine: (newest-cni-594114)     <disk type='file' device='cdrom'>
	I1204 21:36:20.257702   82304 main.go:141] libmachine: (newest-cni-594114)       <source file='/home/jenkins/minikube-integration/19985-10581/.minikube/machines/newest-cni-594114/boot2docker.iso'/>
	I1204 21:36:20.257721   82304 main.go:141] libmachine: (newest-cni-594114)       <target dev='hdc' bus='scsi'/>
	I1204 21:36:20.257732   82304 main.go:141] libmachine: (newest-cni-594114)       <readonly/>
	I1204 21:36:20.257745   82304 main.go:141] libmachine: (newest-cni-594114)     </disk>
	I1204 21:36:20.257758   82304 main.go:141] libmachine: (newest-cni-594114)     <disk type='file' device='disk'>
	I1204 21:36:20.257770   82304 main.go:141] libmachine: (newest-cni-594114)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1204 21:36:20.257790   82304 main.go:141] libmachine: (newest-cni-594114)       <source file='/home/jenkins/minikube-integration/19985-10581/.minikube/machines/newest-cni-594114/newest-cni-594114.rawdisk'/>
	I1204 21:36:20.257801   82304 main.go:141] libmachine: (newest-cni-594114)       <target dev='hda' bus='virtio'/>
	I1204 21:36:20.257812   82304 main.go:141] libmachine: (newest-cni-594114)     </disk>
	I1204 21:36:20.257828   82304 main.go:141] libmachine: (newest-cni-594114)     <interface type='network'>
	I1204 21:36:20.257844   82304 main.go:141] libmachine: (newest-cni-594114)       <source network='mk-newest-cni-594114'/>
	I1204 21:36:20.257858   82304 main.go:141] libmachine: (newest-cni-594114)       <model type='virtio'/>
	I1204 21:36:20.257870   82304 main.go:141] libmachine: (newest-cni-594114)     </interface>
	I1204 21:36:20.257881   82304 main.go:141] libmachine: (newest-cni-594114)     <interface type='network'>
	I1204 21:36:20.257892   82304 main.go:141] libmachine: (newest-cni-594114)       <source network='default'/>
	I1204 21:36:20.257902   82304 main.go:141] libmachine: (newest-cni-594114)       <model type='virtio'/>
	I1204 21:36:20.257912   82304 main.go:141] libmachine: (newest-cni-594114)     </interface>
	I1204 21:36:20.257926   82304 main.go:141] libmachine: (newest-cni-594114)     <serial type='pty'>
	I1204 21:36:20.257938   82304 main.go:141] libmachine: (newest-cni-594114)       <target port='0'/>
	I1204 21:36:20.257948   82304 main.go:141] libmachine: (newest-cni-594114)     </serial>
	I1204 21:36:20.257959   82304 main.go:141] libmachine: (newest-cni-594114)     <console type='pty'>
	I1204 21:36:20.257970   82304 main.go:141] libmachine: (newest-cni-594114)       <target type='serial' port='0'/>
	I1204 21:36:20.257981   82304 main.go:141] libmachine: (newest-cni-594114)     </console>
	I1204 21:36:20.258002   82304 main.go:141] libmachine: (newest-cni-594114)     <rng model='virtio'>
	I1204 21:36:20.258014   82304 main.go:141] libmachine: (newest-cni-594114)       <backend model='random'>/dev/random</backend>
	I1204 21:36:20.258024   82304 main.go:141] libmachine: (newest-cni-594114)     </rng>
	I1204 21:36:20.258034   82304 main.go:141] libmachine: (newest-cni-594114)     
	I1204 21:36:20.258044   82304 main.go:141] libmachine: (newest-cni-594114)     
	I1204 21:36:20.258055   82304 main.go:141] libmachine: (newest-cni-594114)   </devices>
	I1204 21:36:20.258069   82304 main.go:141] libmachine: (newest-cni-594114) </domain>
	I1204 21:36:20.258082   82304 main.go:141] libmachine: (newest-cni-594114) 
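	The domain XML printed above is likewise an ordinary libvirt domain definition: 2 vCPUs, 2200 MiB of RAM, CD-ROM boot from the boot2docker ISO, a raw virtio disk, one interface on mk-newest-cni-594114 plus one on the default network, a serial console and a virtio RNG. A hedged manual-equivalent sketch (the driver actually goes through the libvirt API), assuming the XML were saved to a hypothetical newest-cni-594114.xml:
	
	    # Register the domain and boot it, then check its state.
	    virsh --connect qemu:///system define newest-cni-594114.xml
	    virsh --connect qemu:///system start newest-cni-594114
	    virsh --connect qemu:///system dominfo newest-cni-594114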
	I1204 21:36:20.262841   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b2:05:e0 in network default
	I1204 21:36:20.263446   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:20.263468   82304 main.go:141] libmachine: (newest-cni-594114) Ensuring networks are active...
	I1204 21:36:20.264207   82304 main.go:141] libmachine: (newest-cni-594114) Ensuring network default is active
	I1204 21:36:20.264677   82304 main.go:141] libmachine: (newest-cni-594114) Ensuring network mk-newest-cni-594114 is active
	I1204 21:36:20.265304   82304 main.go:141] libmachine: (newest-cni-594114) Getting domain xml...
	I1204 21:36:20.266087   82304 main.go:141] libmachine: (newest-cni-594114) Creating domain...
	I1204 21:36:21.541130   82304 main.go:141] libmachine: (newest-cni-594114) Waiting to get IP...
	I1204 21:36:21.542184   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:21.542623   82304 main.go:141] libmachine: (newest-cni-594114) DBG | unable to find current IP address of domain newest-cni-594114 in network mk-newest-cni-594114
	I1204 21:36:21.542652   82304 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:36:21.542609   82327 retry.go:31] will retry after 256.942789ms: waiting for machine to come up
	I1204 21:36:21.801360   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:21.801952   82304 main.go:141] libmachine: (newest-cni-594114) DBG | unable to find current IP address of domain newest-cni-594114 in network mk-newest-cni-594114
	I1204 21:36:21.801998   82304 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:36:21.801886   82327 retry.go:31] will retry after 274.687007ms: waiting for machine to come up
	I1204 21:36:22.078485   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:22.078995   82304 main.go:141] libmachine: (newest-cni-594114) DBG | unable to find current IP address of domain newest-cni-594114 in network mk-newest-cni-594114
	I1204 21:36:22.079036   82304 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:36:22.078925   82327 retry.go:31] will retry after 408.509599ms: waiting for machine to come up
	I1204 21:36:22.489637   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:22.490105   82304 main.go:141] libmachine: (newest-cni-594114) DBG | unable to find current IP address of domain newest-cni-594114 in network mk-newest-cni-594114
	I1204 21:36:22.490127   82304 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:36:22.490073   82327 retry.go:31] will retry after 436.88181ms: waiting for machine to come up
	I1204 21:36:22.928664   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:22.929096   82304 main.go:141] libmachine: (newest-cni-594114) DBG | unable to find current IP address of domain newest-cni-594114 in network mk-newest-cni-594114
	I1204 21:36:22.929128   82304 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:36:22.929045   82327 retry.go:31] will retry after 741.822703ms: waiting for machine to come up
	I1204 21:36:23.671933   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:23.672340   82304 main.go:141] libmachine: (newest-cni-594114) DBG | unable to find current IP address of domain newest-cni-594114 in network mk-newest-cni-594114
	I1204 21:36:23.672381   82304 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:36:23.672298   82327 retry.go:31] will retry after 895.760085ms: waiting for machine to come up
	I1204 21:36:24.569448   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:24.569928   82304 main.go:141] libmachine: (newest-cni-594114) DBG | unable to find current IP address of domain newest-cni-594114 in network mk-newest-cni-594114
	I1204 21:36:24.569953   82304 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:36:24.569900   82327 retry.go:31] will retry after 881.995573ms: waiting for machine to come up
	I1204 21:36:25.453108   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:25.453627   82304 main.go:141] libmachine: (newest-cni-594114) DBG | unable to find current IP address of domain newest-cni-594114 in network mk-newest-cni-594114
	I1204 21:36:25.453649   82304 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:36:25.453579   82327 retry.go:31] will retry after 1.228048618s: waiting for machine to come up
	I1204 21:36:26.683208   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:26.683614   82304 main.go:141] libmachine: (newest-cni-594114) DBG | unable to find current IP address of domain newest-cni-594114 in network mk-newest-cni-594114
	I1204 21:36:26.683643   82304 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:36:26.683571   82327 retry.go:31] will retry after 1.442404524s: waiting for machine to come up
	I1204 21:36:28.128426   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:28.128866   82304 main.go:141] libmachine: (newest-cni-594114) DBG | unable to find current IP address of domain newest-cni-594114 in network mk-newest-cni-594114
	I1204 21:36:28.128888   82304 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:36:28.128826   82327 retry.go:31] will retry after 2.123011307s: waiting for machine to come up
	I1204 21:36:30.253870   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:30.254339   82304 main.go:141] libmachine: (newest-cni-594114) DBG | unable to find current IP address of domain newest-cni-594114 in network mk-newest-cni-594114
	I1204 21:36:30.254366   82304 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:36:30.254281   82327 retry.go:31] will retry after 2.868422732s: waiting for machine to come up
	I1204 21:36:33.125736   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:33.126118   82304 main.go:141] libmachine: (newest-cni-594114) DBG | unable to find current IP address of domain newest-cni-594114 in network mk-newest-cni-594114
	I1204 21:36:33.126156   82304 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:36:33.126101   82327 retry.go:31] will retry after 3.072534388s: waiting for machine to come up
	I1204 21:36:36.199737   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:36.200176   82304 main.go:141] libmachine: (newest-cni-594114) DBG | unable to find current IP address of domain newest-cni-594114 in network mk-newest-cni-594114
	I1204 21:36:36.200208   82304 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:36:36.200132   82327 retry.go:31] will retry after 2.86934444s: waiting for machine to come up
	I1204 21:36:39.072523   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:39.072955   82304 main.go:141] libmachine: (newest-cni-594114) DBG | unable to find current IP address of domain newest-cni-594114 in network mk-newest-cni-594114
	I1204 21:36:39.072987   82304 main.go:141] libmachine: (newest-cni-594114) DBG | I1204 21:36:39.072903   82327 retry.go:31] will retry after 5.334966451s: waiting for machine to come up
	I1204 21:36:44.412925   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:44.413509   82304 main.go:141] libmachine: (newest-cni-594114) Found IP for machine: 192.168.72.161
	I1204 21:36:44.413543   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has current primary IP address 192.168.72.161 and MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
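	The retry lines above show the driver polling the private network for a DHCP lease with a growing delay (roughly a quarter of a second up to several seconds) until the VM reports 192.168.72.161. A rough shell equivalent of that wait, illustrative only, with the MAC address and network name taken from the log:
	
	    MAC=52:54:00:b8:cc:25
	    # Poll the libvirt DHCP leases until the VM's MAC shows up with an address.
	    until virsh --connect qemu:///system net-dhcp-leases mk-newest-cni-594114 | grep -q "$MAC"; do
	      sleep 2   # minikube uses an increasing back-off rather than a fixed interval
	    done
	    virsh --connect qemu:///system net-dhcp-leases mk-newest-cni-594114 | grep "$MAC"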
	I1204 21:36:44.413553   82304 main.go:141] libmachine: (newest-cni-594114) Reserving static IP address...
	I1204 21:36:44.414054   82304 main.go:141] libmachine: (newest-cni-594114) DBG | unable to find host DHCP lease matching {name: "newest-cni-594114", mac: "52:54:00:b8:cc:25", ip: "192.168.72.161"} in network mk-newest-cni-594114
	I1204 21:36:44.494966   82304 main.go:141] libmachine: (newest-cni-594114) DBG | Getting to WaitForSSH function...
	I1204 21:36:44.494988   82304 main.go:141] libmachine: (newest-cni-594114) Reserved static IP address: 192.168.72.161
	I1204 21:36:44.495008   82304 main.go:141] libmachine: (newest-cni-594114) Waiting for SSH to be available...
	I1204 21:36:44.497585   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:44.497968   82304 main.go:141] libmachine: (newest-cni-594114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:cc:25", ip: ""} in network mk-newest-cni-594114: {Iface:virbr3 ExpiryTime:2024-12-04 22:36:34 +0000 UTC Type:0 Mac:52:54:00:b8:cc:25 Iaid: IPaddr:192.168.72.161 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b8:cc:25}
	I1204 21:36:44.498010   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined IP address 192.168.72.161 and MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:44.498114   82304 main.go:141] libmachine: (newest-cni-594114) DBG | Using SSH client type: external
	I1204 21:36:44.498253   82304 main.go:141] libmachine: (newest-cni-594114) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/newest-cni-594114/id_rsa (-rw-------)
	I1204 21:36:44.498327   82304 main.go:141] libmachine: (newest-cni-594114) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.161 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/newest-cni-594114/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 21:36:44.498357   82304 main.go:141] libmachine: (newest-cni-594114) DBG | About to run SSH command:
	I1204 21:36:44.498397   82304 main.go:141] libmachine: (newest-cni-594114) DBG | exit 0
	I1204 21:36:44.619736   82304 main.go:141] libmachine: (newest-cni-594114) DBG | SSH cmd err, output: <nil>: 
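	The WaitForSSH step runs "exit 0" over an external ssh client with the options shown in the log line above; the empty "SSH cmd err, output: <nil>" result means the command returned status 0 and the machine is reachable. The same check can be reproduced by hand with the key path and address taken verbatim from the log:
	
	    ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 \
	      -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
	      -o PasswordAuthentication=no -o ServerAliveInterval=60 \
	      -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	      -o IdentitiesOnly=yes \
	      -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/newest-cni-594114/id_rsa \
	      -p 22 docker@192.168.72.161 'exit 0' && echo "SSH is available"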
	I1204 21:36:44.620037   82304 main.go:141] libmachine: (newest-cni-594114) KVM machine creation complete!
	I1204 21:36:44.620487   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetConfigRaw
	I1204 21:36:44.621126   82304 main.go:141] libmachine: (newest-cni-594114) Calling .DriverName
	I1204 21:36:44.621367   82304 main.go:141] libmachine: (newest-cni-594114) Calling .DriverName
	I1204 21:36:44.621569   82304 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1204 21:36:44.621583   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetState
	I1204 21:36:44.622956   82304 main.go:141] libmachine: Detecting operating system of created instance...
	I1204 21:36:44.622971   82304 main.go:141] libmachine: Waiting for SSH to be available...
	I1204 21:36:44.622978   82304 main.go:141] libmachine: Getting to WaitForSSH function...
	I1204 21:36:44.622986   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHHostname
	I1204 21:36:44.625151   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:44.625639   82304 main.go:141] libmachine: (newest-cni-594114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:cc:25", ip: ""} in network mk-newest-cni-594114: {Iface:virbr3 ExpiryTime:2024-12-04 22:36:34 +0000 UTC Type:0 Mac:52:54:00:b8:cc:25 Iaid: IPaddr:192.168.72.161 Prefix:24 Hostname:newest-cni-594114 Clientid:01:52:54:00:b8:cc:25}
	I1204 21:36:44.625670   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined IP address 192.168.72.161 and MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:44.625835   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHPort
	I1204 21:36:44.626060   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHKeyPath
	I1204 21:36:44.626234   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHKeyPath
	I1204 21:36:44.626409   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHUsername
	I1204 21:36:44.626572   82304 main.go:141] libmachine: Using SSH client type: native
	I1204 21:36:44.626831   82304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.161 22 <nil> <nil>}
	I1204 21:36:44.626848   82304 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1204 21:36:44.730415   82304 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 21:36:44.730437   82304 main.go:141] libmachine: Detecting the provisioner...
	I1204 21:36:44.730447   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHHostname
	I1204 21:36:44.733594   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:44.733973   82304 main.go:141] libmachine: (newest-cni-594114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:cc:25", ip: ""} in network mk-newest-cni-594114: {Iface:virbr3 ExpiryTime:2024-12-04 22:36:34 +0000 UTC Type:0 Mac:52:54:00:b8:cc:25 Iaid: IPaddr:192.168.72.161 Prefix:24 Hostname:newest-cni-594114 Clientid:01:52:54:00:b8:cc:25}
	I1204 21:36:44.734004   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined IP address 192.168.72.161 and MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:44.734200   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHPort
	I1204 21:36:44.734418   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHKeyPath
	I1204 21:36:44.734613   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHKeyPath
	I1204 21:36:44.734772   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHUsername
	I1204 21:36:44.734949   82304 main.go:141] libmachine: Using SSH client type: native
	I1204 21:36:44.735111   82304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.161 22 <nil> <nil>}
	I1204 21:36:44.735122   82304 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1204 21:36:44.831812   82304 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1204 21:36:44.831911   82304 main.go:141] libmachine: found compatible host: buildroot
	I1204 21:36:44.831925   82304 main.go:141] libmachine: Provisioning with buildroot...
	I1204 21:36:44.831933   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetMachineName
	I1204 21:36:44.832192   82304 buildroot.go:166] provisioning hostname "newest-cni-594114"
	I1204 21:36:44.832225   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetMachineName
	I1204 21:36:44.832443   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHHostname
	I1204 21:36:44.835277   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:44.835702   82304 main.go:141] libmachine: (newest-cni-594114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:cc:25", ip: ""} in network mk-newest-cni-594114: {Iface:virbr3 ExpiryTime:2024-12-04 22:36:34 +0000 UTC Type:0 Mac:52:54:00:b8:cc:25 Iaid: IPaddr:192.168.72.161 Prefix:24 Hostname:newest-cni-594114 Clientid:01:52:54:00:b8:cc:25}
	I1204 21:36:44.835731   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined IP address 192.168.72.161 and MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:44.835944   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHPort
	I1204 21:36:44.836154   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHKeyPath
	I1204 21:36:44.836336   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHKeyPath
	I1204 21:36:44.836491   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHUsername
	I1204 21:36:44.836663   82304 main.go:141] libmachine: Using SSH client type: native
	I1204 21:36:44.836831   82304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.161 22 <nil> <nil>}
	I1204 21:36:44.836846   82304 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-594114 && echo "newest-cni-594114" | sudo tee /etc/hostname
	I1204 21:36:44.949437   82304 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-594114
	
	I1204 21:36:44.949467   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHHostname
	I1204 21:36:44.952157   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:44.952462   82304 main.go:141] libmachine: (newest-cni-594114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:cc:25", ip: ""} in network mk-newest-cni-594114: {Iface:virbr3 ExpiryTime:2024-12-04 22:36:34 +0000 UTC Type:0 Mac:52:54:00:b8:cc:25 Iaid: IPaddr:192.168.72.161 Prefix:24 Hostname:newest-cni-594114 Clientid:01:52:54:00:b8:cc:25}
	I1204 21:36:44.952488   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined IP address 192.168.72.161 and MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:44.952672   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHPort
	I1204 21:36:44.952868   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHKeyPath
	I1204 21:36:44.953011   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHKeyPath
	I1204 21:36:44.953138   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHUsername
	I1204 21:36:44.953353   82304 main.go:141] libmachine: Using SSH client type: native
	I1204 21:36:44.953575   82304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.161 22 <nil> <nil>}
	I1204 21:36:44.953595   82304 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-594114' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-594114/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-594114' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 21:36:45.061734   82304 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 21:36:45.061768   82304 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 21:36:45.061828   82304 buildroot.go:174] setting up certificates
	I1204 21:36:45.061854   82304 provision.go:84] configureAuth start
	I1204 21:36:45.061875   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetMachineName
	I1204 21:36:45.062180   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetIP
	I1204 21:36:45.065225   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:45.065634   82304 main.go:141] libmachine: (newest-cni-594114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:cc:25", ip: ""} in network mk-newest-cni-594114: {Iface:virbr3 ExpiryTime:2024-12-04 22:36:34 +0000 UTC Type:0 Mac:52:54:00:b8:cc:25 Iaid: IPaddr:192.168.72.161 Prefix:24 Hostname:newest-cni-594114 Clientid:01:52:54:00:b8:cc:25}
	I1204 21:36:45.065697   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined IP address 192.168.72.161 and MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:45.065902   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHHostname
	I1204 21:36:45.068402   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:45.068755   82304 main.go:141] libmachine: (newest-cni-594114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:cc:25", ip: ""} in network mk-newest-cni-594114: {Iface:virbr3 ExpiryTime:2024-12-04 22:36:34 +0000 UTC Type:0 Mac:52:54:00:b8:cc:25 Iaid: IPaddr:192.168.72.161 Prefix:24 Hostname:newest-cni-594114 Clientid:01:52:54:00:b8:cc:25}
	I1204 21:36:45.068790   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined IP address 192.168.72.161 and MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:45.068922   82304 provision.go:143] copyHostCerts
	I1204 21:36:45.068990   82304 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 21:36:45.069001   82304 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 21:36:45.069072   82304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 21:36:45.069157   82304 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 21:36:45.069166   82304 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 21:36:45.069190   82304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 21:36:45.069266   82304 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 21:36:45.069277   82304 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 21:36:45.069316   82304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 21:36:45.069375   82304 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.newest-cni-594114 san=[127.0.0.1 192.168.72.161 localhost minikube newest-cni-594114]
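The server certificate above is issued with both IP and DNS SANs so the machine is reachable under any of the listed names. Below is a minimal Go sketch of building such a certificate with crypto/x509; it is illustrative only, self-signs for brevity (the real flow signs with the cluster CA key, ca-key.pem), and simply reuses the SAN list and org name from the log line above.

	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		// SANs taken from the log line above.
		ips := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.161")}
		dns := []string{"localhost", "minikube", "newest-cni-594114"}
	
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-594114"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  ips,
			DNSNames:     dns,
		}
		// Self-signed here for brevity; the real provisioning signs with the cluster CA key.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}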
	I1204 21:36:45.161319   82304 provision.go:177] copyRemoteCerts
	I1204 21:36:45.161373   82304 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 21:36:45.161397   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHHostname
	I1204 21:36:45.164349   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:45.164700   82304 main.go:141] libmachine: (newest-cni-594114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:cc:25", ip: ""} in network mk-newest-cni-594114: {Iface:virbr3 ExpiryTime:2024-12-04 22:36:34 +0000 UTC Type:0 Mac:52:54:00:b8:cc:25 Iaid: IPaddr:192.168.72.161 Prefix:24 Hostname:newest-cni-594114 Clientid:01:52:54:00:b8:cc:25}
	I1204 21:36:45.164727   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined IP address 192.168.72.161 and MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:45.164966   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHPort
	I1204 21:36:45.165164   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHKeyPath
	I1204 21:36:45.165312   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHUsername
	I1204 21:36:45.165454   82304 sshutil.go:53] new ssh client: &{IP:192.168.72.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/newest-cni-594114/id_rsa Username:docker}
	I1204 21:36:45.245905   82304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 21:36:45.272454   82304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1204 21:36:45.295811   82304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1204 21:36:45.320475   82304 provision.go:87] duration metric: took 258.604582ms to configureAuth
	I1204 21:36:45.320503   82304 buildroot.go:189] setting minikube options for container-runtime
	I1204 21:36:45.320719   82304 config.go:182] Loaded profile config "newest-cni-594114": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:36:45.320820   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHHostname
	I1204 21:36:45.323923   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:45.324279   82304 main.go:141] libmachine: (newest-cni-594114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:cc:25", ip: ""} in network mk-newest-cni-594114: {Iface:virbr3 ExpiryTime:2024-12-04 22:36:34 +0000 UTC Type:0 Mac:52:54:00:b8:cc:25 Iaid: IPaddr:192.168.72.161 Prefix:24 Hostname:newest-cni-594114 Clientid:01:52:54:00:b8:cc:25}
	I1204 21:36:45.324306   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined IP address 192.168.72.161 and MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:45.324547   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHPort
	I1204 21:36:45.324724   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHKeyPath
	I1204 21:36:45.324844   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHKeyPath
	I1204 21:36:45.324989   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHUsername
	I1204 21:36:45.325126   82304 main.go:141] libmachine: Using SSH client type: native
	I1204 21:36:45.325326   82304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.161 22 <nil> <nil>}
	I1204 21:36:45.325341   82304 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 21:36:45.540521   82304 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 21:36:45.540543   82304 main.go:141] libmachine: Checking connection to Docker...
	I1204 21:36:45.540551   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetURL
	I1204 21:36:45.541970   82304 main.go:141] libmachine: (newest-cni-594114) DBG | Using libvirt version 6000000
	I1204 21:36:45.544238   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:45.544843   82304 main.go:141] libmachine: (newest-cni-594114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:cc:25", ip: ""} in network mk-newest-cni-594114: {Iface:virbr3 ExpiryTime:2024-12-04 22:36:34 +0000 UTC Type:0 Mac:52:54:00:b8:cc:25 Iaid: IPaddr:192.168.72.161 Prefix:24 Hostname:newest-cni-594114 Clientid:01:52:54:00:b8:cc:25}
	I1204 21:36:45.544873   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined IP address 192.168.72.161 and MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:45.544998   82304 main.go:141] libmachine: Docker is up and running!
	I1204 21:36:45.545014   82304 main.go:141] libmachine: Reticulating splines...
	I1204 21:36:45.545022   82304 client.go:171] duration metric: took 25.736870879s to LocalClient.Create
	I1204 21:36:45.545049   82304 start.go:167] duration metric: took 25.736940553s to libmachine.API.Create "newest-cni-594114"
	I1204 21:36:45.545057   82304 start.go:293] postStartSetup for "newest-cni-594114" (driver="kvm2")
	I1204 21:36:45.545070   82304 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 21:36:45.545094   82304 main.go:141] libmachine: (newest-cni-594114) Calling .DriverName
	I1204 21:36:45.545335   82304 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 21:36:45.545359   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHHostname
	I1204 21:36:45.547753   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:45.548220   82304 main.go:141] libmachine: (newest-cni-594114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:cc:25", ip: ""} in network mk-newest-cni-594114: {Iface:virbr3 ExpiryTime:2024-12-04 22:36:34 +0000 UTC Type:0 Mac:52:54:00:b8:cc:25 Iaid: IPaddr:192.168.72.161 Prefix:24 Hostname:newest-cni-594114 Clientid:01:52:54:00:b8:cc:25}
	I1204 21:36:45.548244   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined IP address 192.168.72.161 and MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:45.548423   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHPort
	I1204 21:36:45.548612   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHKeyPath
	I1204 21:36:45.548839   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHUsername
	I1204 21:36:45.549009   82304 sshutil.go:53] new ssh client: &{IP:192.168.72.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/newest-cni-594114/id_rsa Username:docker}
	I1204 21:36:45.629371   82304 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 21:36:45.633810   82304 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 21:36:45.633833   82304 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 21:36:45.633889   82304 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 21:36:45.633987   82304 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 21:36:45.634085   82304 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 21:36:45.643344   82304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:36:45.666736   82304 start.go:296] duration metric: took 121.664374ms for postStartSetup
	I1204 21:36:45.666792   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetConfigRaw
	I1204 21:36:45.667392   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetIP
	I1204 21:36:45.669917   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:45.670290   82304 main.go:141] libmachine: (newest-cni-594114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:cc:25", ip: ""} in network mk-newest-cni-594114: {Iface:virbr3 ExpiryTime:2024-12-04 22:36:34 +0000 UTC Type:0 Mac:52:54:00:b8:cc:25 Iaid: IPaddr:192.168.72.161 Prefix:24 Hostname:newest-cni-594114 Clientid:01:52:54:00:b8:cc:25}
	I1204 21:36:45.670314   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined IP address 192.168.72.161 and MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:45.670540   82304 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/newest-cni-594114/config.json ...
	I1204 21:36:45.670706   82304 start.go:128] duration metric: took 25.881467591s to createHost
	I1204 21:36:45.670726   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHHostname
	I1204 21:36:45.673408   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:45.673745   82304 main.go:141] libmachine: (newest-cni-594114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:cc:25", ip: ""} in network mk-newest-cni-594114: {Iface:virbr3 ExpiryTime:2024-12-04 22:36:34 +0000 UTC Type:0 Mac:52:54:00:b8:cc:25 Iaid: IPaddr:192.168.72.161 Prefix:24 Hostname:newest-cni-594114 Clientid:01:52:54:00:b8:cc:25}
	I1204 21:36:45.673780   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined IP address 192.168.72.161 and MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:45.673920   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHPort
	I1204 21:36:45.674090   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHKeyPath
	I1204 21:36:45.674253   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHKeyPath
	I1204 21:36:45.674422   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHUsername
	I1204 21:36:45.674591   82304 main.go:141] libmachine: Using SSH client type: native
	I1204 21:36:45.674780   82304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.161 22 <nil> <nil>}
	I1204 21:36:45.674796   82304 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 21:36:45.772189   82304 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733348205.749113665
	
	I1204 21:36:45.772213   82304 fix.go:216] guest clock: 1733348205.749113665
	I1204 21:36:45.772222   82304 fix.go:229] Guest: 2024-12-04 21:36:45.749113665 +0000 UTC Remote: 2024-12-04 21:36:45.670716815 +0000 UTC m=+25.996042576 (delta=78.39685ms)
	I1204 21:36:45.772266   82304 fix.go:200] guest clock delta is within tolerance: 78.39685ms
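The fix.go lines above compare the guest clock (read over SSH via `date +%s.%N`) with the host-side timestamp and accept the machine when the skew is small. A minimal Go sketch of that comparison, using the two timestamps from the log; the 2s tolerance is an assumption for illustration, not necessarily minikube's value.

	package main
	
	import (
		"fmt"
		"time"
	)
	
	// withinTolerance reports whether the absolute guest/host clock skew is acceptable.
	func withinTolerance(guest, remote time.Time, tolerance time.Duration) bool {
		delta := guest.Sub(remote)
		if delta < 0 {
			delta = -delta
		}
		return delta <= tolerance
	}
	
	func main() {
		guest := time.Unix(0, 1733348205749113665) // parsed from the guest's `date +%s.%N`
		remote := time.Date(2024, 12, 4, 21, 36, 45, 670716815, time.UTC)
		fmt.Println(withinTolerance(guest, remote, 2*time.Second)) // true: delta is ~78ms
	}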
	I1204 21:36:45.772274   82304 start.go:83] releasing machines lock for "newest-cni-594114", held for 25.98309953s
	I1204 21:36:45.772306   82304 main.go:141] libmachine: (newest-cni-594114) Calling .DriverName
	I1204 21:36:45.772650   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetIP
	I1204 21:36:45.775286   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:45.775692   82304 main.go:141] libmachine: (newest-cni-594114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:cc:25", ip: ""} in network mk-newest-cni-594114: {Iface:virbr3 ExpiryTime:2024-12-04 22:36:34 +0000 UTC Type:0 Mac:52:54:00:b8:cc:25 Iaid: IPaddr:192.168.72.161 Prefix:24 Hostname:newest-cni-594114 Clientid:01:52:54:00:b8:cc:25}
	I1204 21:36:45.775724   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined IP address 192.168.72.161 and MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:45.775868   82304 main.go:141] libmachine: (newest-cni-594114) Calling .DriverName
	I1204 21:36:45.776325   82304 main.go:141] libmachine: (newest-cni-594114) Calling .DriverName
	I1204 21:36:45.776505   82304 main.go:141] libmachine: (newest-cni-594114) Calling .DriverName
	I1204 21:36:45.776603   82304 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 21:36:45.776638   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHHostname
	I1204 21:36:45.776748   82304 ssh_runner.go:195] Run: cat /version.json
	I1204 21:36:45.776766   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHHostname
	I1204 21:36:45.779545   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:45.779590   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:45.779896   82304 main.go:141] libmachine: (newest-cni-594114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:cc:25", ip: ""} in network mk-newest-cni-594114: {Iface:virbr3 ExpiryTime:2024-12-04 22:36:34 +0000 UTC Type:0 Mac:52:54:00:b8:cc:25 Iaid: IPaddr:192.168.72.161 Prefix:24 Hostname:newest-cni-594114 Clientid:01:52:54:00:b8:cc:25}
	I1204 21:36:45.779925   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined IP address 192.168.72.161 and MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:45.779957   82304 main.go:141] libmachine: (newest-cni-594114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:cc:25", ip: ""} in network mk-newest-cni-594114: {Iface:virbr3 ExpiryTime:2024-12-04 22:36:34 +0000 UTC Type:0 Mac:52:54:00:b8:cc:25 Iaid: IPaddr:192.168.72.161 Prefix:24 Hostname:newest-cni-594114 Clientid:01:52:54:00:b8:cc:25}
	I1204 21:36:45.779969   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined IP address 192.168.72.161 and MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:45.780095   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHPort
	I1204 21:36:45.780284   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHPort
	I1204 21:36:45.780290   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHKeyPath
	I1204 21:36:45.780503   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHKeyPath
	I1204 21:36:45.780510   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHUsername
	I1204 21:36:45.780665   82304 sshutil.go:53] new ssh client: &{IP:192.168.72.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/newest-cni-594114/id_rsa Username:docker}
	I1204 21:36:45.780682   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetSSHUsername
	I1204 21:36:45.780804   82304 sshutil.go:53] new ssh client: &{IP:192.168.72.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/newest-cni-594114/id_rsa Username:docker}
	I1204 21:36:45.878281   82304 ssh_runner.go:195] Run: systemctl --version
	I1204 21:36:45.884638   82304 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 21:36:46.042674   82304 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 21:36:46.048884   82304 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 21:36:46.048980   82304 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 21:36:46.065974   82304 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 21:36:46.066001   82304 start.go:495] detecting cgroup driver to use...
	I1204 21:36:46.066077   82304 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 21:36:46.083313   82304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 21:36:46.098311   82304 docker.go:217] disabling cri-docker service (if available) ...
	I1204 21:36:46.098384   82304 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 21:36:46.113321   82304 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 21:36:46.127288   82304 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 21:36:46.243073   82304 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 21:36:46.413507   82304 docker.go:233] disabling docker service ...
	I1204 21:36:46.413588   82304 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 21:36:46.428213   82304 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 21:36:46.442177   82304 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 21:36:46.552010   82304 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 21:36:46.674051   82304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 21:36:46.687840   82304 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 21:36:46.706417   82304 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 21:36:46.706471   82304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:36:46.716381   82304 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 21:36:46.716488   82304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:36:46.726927   82304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:36:46.738375   82304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:36:46.749486   82304 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 21:36:46.760016   82304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:36:46.770328   82304 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:36:46.786704   82304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:36:46.796541   82304 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 21:36:46.805950   82304 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 21:36:46.806014   82304 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 21:36:46.819191   82304 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
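The three commands above form a fallback chain: probe the bridge netfilter sysctl, load br_netfilter if the key is absent, then enable IPv4 forwarding. A small Go sketch of the same chain, shelling out to the same commands (the run helper is made up for the sketch, not minikube's ssh_runner API):

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// run executes a command locally and echoes its combined output.
	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("%s %v: %s\n", name, args, out)
		return err
	}
	
	func main() {
		// If the bridge-nf sysctl key is missing, br_netfilter is not loaded yet,
		// so load the module before enabling IPv4 forwarding.
		if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
			_ = run("sudo", "modprobe", "br_netfilter")
		}
		_ = run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
	}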
	I1204 21:36:46.828719   82304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:36:46.955619   82304 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 21:36:47.045991   82304 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 21:36:47.046078   82304 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 21:36:47.050796   82304 start.go:563] Will wait 60s for crictl version
	I1204 21:36:47.050854   82304 ssh_runner.go:195] Run: which crictl
	I1204 21:36:47.054280   82304 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 21:36:47.096070   82304 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 21:36:47.096153   82304 ssh_runner.go:195] Run: crio --version
	I1204 21:36:47.123957   82304 ssh_runner.go:195] Run: crio --version
	I1204 21:36:47.152030   82304 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 21:36:47.153496   82304 main.go:141] libmachine: (newest-cni-594114) Calling .GetIP
	I1204 21:36:47.156167   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:47.156517   82304 main.go:141] libmachine: (newest-cni-594114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:cc:25", ip: ""} in network mk-newest-cni-594114: {Iface:virbr3 ExpiryTime:2024-12-04 22:36:34 +0000 UTC Type:0 Mac:52:54:00:b8:cc:25 Iaid: IPaddr:192.168.72.161 Prefix:24 Hostname:newest-cni-594114 Clientid:01:52:54:00:b8:cc:25}
	I1204 21:36:47.156559   82304 main.go:141] libmachine: (newest-cni-594114) DBG | domain newest-cni-594114 has defined IP address 192.168.72.161 and MAC address 52:54:00:b8:cc:25 in network mk-newest-cni-594114
	I1204 21:36:47.156918   82304 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1204 21:36:47.161047   82304 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:36:47.174626   82304 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1204 21:36:47.176160   82304 kubeadm.go:883] updating cluster {Name:newest-cni-594114 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:newest-cni-594114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.161 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 21:36:47.176325   82304 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 21:36:47.176409   82304 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:36:47.207587   82304 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1204 21:36:47.207662   82304 ssh_runner.go:195] Run: which lz4
	I1204 21:36:47.211686   82304 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1204 21:36:47.215782   82304 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1204 21:36:47.215820   82304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1204 21:36:48.474540   82304 crio.go:462] duration metric: took 1.262887705s to copy over tarball
	I1204 21:36:48.474613   82304 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1204 21:36:50.584925   82304 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.110281964s)
	I1204 21:36:50.584951   82304 crio.go:469] duration metric: took 2.11038207s to extract the tarball
	I1204 21:36:50.584958   82304 ssh_runner.go:146] rm: /preloaded.tar.lz4
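The preload handling above first stats /preloaded.tar.lz4 on the guest, copies the ~392 MB tarball only when that stat fails, then extracts it under /var and removes it. A rough Go sketch of the existence-check-then-copy decision, using the ssh/scp CLIs as stand-ins for minikube's ssh_runner (paths and addresses are taken from the log; the helpers are hypothetical):

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// remoteExists returns true when `stat` succeeds for path on the remote host.
	func remoteExists(host, path string) bool {
		return exec.Command("ssh", host, "stat", path).Run() == nil
	}
	
	func main() {
		host := "docker@192.168.72.161" // assumed; matches the SSH user and IP in the log
		local := "/home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4"
		remote := "/preloaded.tar.lz4"
	
		if !remoteExists(host, remote) {
			fmt.Println("preload missing on guest, copying it over")
			if err := exec.Command("scp", local, host+":"+remote).Run(); err != nil {
				fmt.Println("copy failed:", err)
			}
		}
	}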
	I1204 21:36:50.623665   82304 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:36:50.665195   82304 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 21:36:50.665224   82304 cache_images.go:84] Images are preloaded, skipping loading
	I1204 21:36:50.665235   82304 kubeadm.go:934] updating node { 192.168.72.161 8443 v1.31.2 crio true true} ...
	I1204 21:36:50.665360   82304 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-594114 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.161
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:newest-cni-594114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 21:36:50.665464   82304 ssh_runner.go:195] Run: crio config
	I1204 21:36:50.711275   82304 cni.go:84] Creating CNI manager for ""
	I1204 21:36:50.711300   82304 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:36:50.711311   82304 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1204 21:36:50.711340   82304 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.161 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-594114 NodeName:newest-cni-594114 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.161"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:ma
p[] NodeIP:192.168.72.161 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 21:36:50.711586   82304 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.161
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-594114"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.161"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.161"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1204 21:36:50.711669   82304 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 21:36:50.721532   82304 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 21:36:50.721605   82304 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 21:36:50.730696   82304 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I1204 21:36:50.748096   82304 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 21:36:50.766162   82304 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2487 bytes)
	I1204 21:36:50.782611   82304 ssh_runner.go:195] Run: grep 192.168.72.161	control-plane.minikube.internal$ /etc/hosts
	I1204 21:36:50.786360   82304 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.161	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:36:50.797543   82304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:36:50.926300   82304 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:36:50.945550   82304 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/newest-cni-594114 for IP: 192.168.72.161
	I1204 21:36:50.945571   82304 certs.go:194] generating shared ca certs ...
	I1204 21:36:50.945592   82304 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:36:50.945773   82304 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 21:36:50.945829   82304 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 21:36:50.945844   82304 certs.go:256] generating profile certs ...
	I1204 21:36:50.945912   82304 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/newest-cni-594114/client.key
	I1204 21:36:50.945940   82304 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/newest-cni-594114/client.crt with IP's: []
	I1204 21:36:51.155457   82304 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/newest-cni-594114/client.crt ...
	I1204 21:36:51.155482   82304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/newest-cni-594114/client.crt: {Name:mkf67ae491a2f8d95aa42017bdb5e3d8e7696ffd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:36:51.155685   82304 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/newest-cni-594114/client.key ...
	I1204 21:36:51.155705   82304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/newest-cni-594114/client.key: {Name:mkf7c4e31704c66b8d148047c47a8ab348fce6f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:36:51.155845   82304 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/newest-cni-594114/apiserver.key.19fd90cf
	I1204 21:36:51.155871   82304 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/newest-cni-594114/apiserver.crt.19fd90cf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.161]
	I1204 21:36:51.242643   82304 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/newest-cni-594114/apiserver.crt.19fd90cf ...
	I1204 21:36:51.242670   82304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/newest-cni-594114/apiserver.crt.19fd90cf: {Name:mk8741d4d0e25b02162b62a574eb834cfe583b99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:36:51.242820   82304 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/newest-cni-594114/apiserver.key.19fd90cf ...
	I1204 21:36:51.242831   82304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/newest-cni-594114/apiserver.key.19fd90cf: {Name:mke7c13ff36171cec74f1fe5645bec0dd581eed0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:36:51.242898   82304 certs.go:381] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/newest-cni-594114/apiserver.crt.19fd90cf -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/newest-cni-594114/apiserver.crt
	I1204 21:36:51.242984   82304 certs.go:385] copying /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/newest-cni-594114/apiserver.key.19fd90cf -> /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/newest-cni-594114/apiserver.key
	I1204 21:36:51.243055   82304 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/newest-cni-594114/proxy-client.key
	I1204 21:36:51.243071   82304 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/newest-cni-594114/proxy-client.crt with IP's: []
	I1204 21:36:51.360771   82304 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/newest-cni-594114/proxy-client.crt ...
	I1204 21:36:51.360799   82304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/newest-cni-594114/proxy-client.crt: {Name:mk74dbe4db1974019a61d00e79de993440a0538d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:36:51.360958   82304 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/newest-cni-594114/proxy-client.key ...
	I1204 21:36:51.360971   82304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/newest-cni-594114/proxy-client.key: {Name:mkb6d561d35442504c4ecd3e00e8f62b8e604026 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:36:51.361159   82304 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 21:36:51.361196   82304 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 21:36:51.361204   82304 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 21:36:51.361229   82304 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 21:36:51.361251   82304 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 21:36:51.361273   82304 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 21:36:51.361316   82304 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:36:51.361915   82304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 21:36:51.387673   82304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 21:36:51.412076   82304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 21:36:51.436356   82304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 21:36:51.459425   82304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/newest-cni-594114/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1204 21:36:51.483393   82304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/newest-cni-594114/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 21:36:51.506337   82304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/newest-cni-594114/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 21:36:51.529574   82304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/newest-cni-594114/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 21:36:51.553416   82304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 21:36:51.576391   82304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 21:36:51.601853   82304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 21:36:51.625868   82304 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 21:36:51.641929   82304 ssh_runner.go:195] Run: openssl version
	I1204 21:36:51.647594   82304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 21:36:51.658008   82304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 21:36:51.662502   82304 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 21:36:51.662560   82304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 21:36:51.668328   82304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 21:36:51.679871   82304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 21:36:51.691480   82304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 21:36:51.695852   82304 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 21:36:51.695916   82304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 21:36:51.701405   82304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 21:36:51.717349   82304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 21:36:51.732747   82304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:36:51.740080   82304 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:36:51.740155   82304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:36:51.749924   82304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
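Each certificate above is installed into the system trust store the same way: place it under /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink /etc/ssl/certs/<hash>.0 to it. A compact Go sketch of that pattern, wrapping the same openssl and ln commands shown in the log (the installTrust helper is hypothetical):

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// installTrust computes the OpenSSL subject hash of a PEM certificate and
	// links it into /etc/ssl/certs under "<hash>.0", mirroring the commands above.
	func installTrust(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
	}
	
	func main() {
		// Example path taken verbatim from the log lines above.
		if err := installTrust("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println("install failed:", err)
		}
	}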
	I1204 21:36:51.765530   82304 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 21:36:51.771390   82304 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1204 21:36:51.771446   82304 kubeadm.go:392] StartCluster: {Name:newest-cni-594114 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.2 ClusterName:newest-cni-594114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.161 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersio
n:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:36:51.771531   82304 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 21:36:51.771579   82304 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:36:51.810079   82304 cri.go:89] found id: ""
	I1204 21:36:51.810168   82304 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 21:36:51.819920   82304 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:36:51.828921   82304 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:36:51.838072   82304 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:36:51.838092   82304 kubeadm.go:157] found existing configuration files:
	
	I1204 21:36:51.838141   82304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:36:51.848145   82304 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:36:51.848225   82304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:36:51.856834   82304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:36:51.865742   82304 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:36:51.865804   82304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:36:51.874606   82304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:36:51.883115   82304 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:36:51.883184   82304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:36:51.892054   82304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:36:51.900754   82304 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:36:51.900816   82304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:36:51.910018   82304 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 21:36:52.120763   82304 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 21:37:02.765906   82304 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1204 21:37:02.765974   82304 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 21:37:02.766059   82304 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 21:37:02.766209   82304 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 21:37:02.766320   82304 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1204 21:37:02.766412   82304 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 21:37:02.768105   82304 out.go:235]   - Generating certificates and keys ...
	I1204 21:37:02.768217   82304 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 21:37:02.768308   82304 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 21:37:02.768411   82304 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1204 21:37:02.768490   82304 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1204 21:37:02.768605   82304 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1204 21:37:02.768699   82304 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1204 21:37:02.768778   82304 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1204 21:37:02.768958   82304 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-594114] and IPs [192.168.72.161 127.0.0.1 ::1]
	I1204 21:37:02.769053   82304 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1204 21:37:02.769226   82304 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-594114] and IPs [192.168.72.161 127.0.0.1 ::1]
	I1204 21:37:02.769326   82304 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1204 21:37:02.769426   82304 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1204 21:37:02.769496   82304 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1204 21:37:02.769571   82304 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 21:37:02.769646   82304 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 21:37:02.769741   82304 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1204 21:37:02.769819   82304 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 21:37:02.769905   82304 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 21:37:02.769980   82304 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 21:37:02.770083   82304 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 21:37:02.770171   82304 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 21:37:02.771537   82304 out.go:235]   - Booting up control plane ...
	I1204 21:37:02.771651   82304 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 21:37:02.771721   82304 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 21:37:02.771783   82304 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 21:37:02.771878   82304 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 21:37:02.771993   82304 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 21:37:02.772063   82304 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 21:37:02.772223   82304 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1204 21:37:02.772355   82304 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1204 21:37:02.772440   82304 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002172807s
	I1204 21:37:02.772530   82304 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1204 21:37:02.772600   82304 kubeadm.go:310] [api-check] The API server is healthy after 5.002814929s
	I1204 21:37:02.772705   82304 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1204 21:37:02.772817   82304 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1204 21:37:02.772868   82304 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1204 21:37:02.773030   82304 kubeadm.go:310] [mark-control-plane] Marking the node newest-cni-594114 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1204 21:37:02.773089   82304 kubeadm.go:310] [bootstrap-token] Using token: nuq3om.dtb3q93vqcisf1vi
	I1204 21:37:02.774315   82304 out.go:235]   - Configuring RBAC rules ...
	I1204 21:37:02.774453   82304 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1204 21:37:02.774540   82304 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1204 21:37:02.774656   82304 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1204 21:37:02.774763   82304 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1204 21:37:02.774862   82304 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1204 21:37:02.774946   82304 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1204 21:37:02.775054   82304 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1204 21:37:02.775114   82304 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1204 21:37:02.775165   82304 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1204 21:37:02.775173   82304 kubeadm.go:310] 
	I1204 21:37:02.775246   82304 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1204 21:37:02.775257   82304 kubeadm.go:310] 
	I1204 21:37:02.775329   82304 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1204 21:37:02.775335   82304 kubeadm.go:310] 
	I1204 21:37:02.775356   82304 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1204 21:37:02.775438   82304 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1204 21:37:02.775492   82304 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1204 21:37:02.775500   82304 kubeadm.go:310] 
	I1204 21:37:02.775544   82304 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1204 21:37:02.775554   82304 kubeadm.go:310] 
	I1204 21:37:02.775596   82304 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1204 21:37:02.775602   82304 kubeadm.go:310] 
	I1204 21:37:02.775645   82304 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1204 21:37:02.775709   82304 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1204 21:37:02.775770   82304 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1204 21:37:02.775776   82304 kubeadm.go:310] 
	I1204 21:37:02.775849   82304 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1204 21:37:02.775917   82304 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1204 21:37:02.775924   82304 kubeadm.go:310] 
	I1204 21:37:02.775997   82304 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token nuq3om.dtb3q93vqcisf1vi \
	I1204 21:37:02.776083   82304 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 \
	I1204 21:37:02.776103   82304 kubeadm.go:310] 	--control-plane 
	I1204 21:37:02.776109   82304 kubeadm.go:310] 
	I1204 21:37:02.776178   82304 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1204 21:37:02.776185   82304 kubeadm.go:310] 
	I1204 21:37:02.776253   82304 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token nuq3om.dtb3q93vqcisf1vi \
	I1204 21:37:02.776356   82304 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 
	I1204 21:37:02.776366   82304 cni.go:84] Creating CNI manager for ""
	I1204 21:37:02.776373   82304 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:37:02.777818   82304 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 21:37:02.778982   82304 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 21:37:02.793895   82304 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1204 21:37:02.811584   82304 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 21:37:02.811659   82304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:37:02.811666   82304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-594114 minikube.k8s.io/updated_at=2024_12_04T21_37_02_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59 minikube.k8s.io/name=newest-cni-594114 minikube.k8s.io/primary=true
	I1204 21:37:03.044320   82304 ops.go:34] apiserver oom_adj: -16
	I1204 21:37:03.044479   82304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:37:03.545267   82304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:37:04.045358   82304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:37:04.544650   82304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
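	The bridge CNI step in the log above (cni.go, then the 496-byte scp to /etc/cni/net.d/1-k8s.conflist) drops a conflist onto the node before the control plane is used. The actual file minikube copies is not shown in the log; the snippet below is only a rough, illustrative sketch of a bridge/host-local conflist of that shape. The file name comes from the log, while the subnet and individual field values are assumptions, not the real contents.
	# Illustrative sketch only -- not the exact file minikube transferred above.
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF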
	
	
	==> CRI-O <==
	Dec 04 21:37:06 no-preload-534766 crio[714]: time="2024-12-04 21:37:06.672159604Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348226672132573,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=99c150c7-27bb-4a5e-a182-c65c26e08dc6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:37:06 no-preload-534766 crio[714]: time="2024-12-04 21:37:06.672870242Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cc11f1c7-0f00-4891-b203-bb75037325a1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:37:06 no-preload-534766 crio[714]: time="2024-12-04 21:37:06.672940072Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cc11f1c7-0f00-4891-b203-bb75037325a1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:37:06 no-preload-534766 crio[714]: time="2024-12-04 21:37:06.673152248Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:76b3bd9ced1a719020189538a52bb5d0e0dd96bc909668dce4de1f9559f8b177,PodSandboxId:078786f92c1479654e245d21f17c4ed6585bfbbda77bff979f7ae9d326fb4f00,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733347371820032913,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38fa420a-4372-41b4-9853-64796baa65d9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3e6bc78060dc3d235fa2f136687007c8e923f81ac9457d1754a5faa54454a7e,PodSandboxId:ce2f25300d1826c51f021d1f55de433604c1ad3c83aee87be4a2fbf1d59af16f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733347371728068753,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zq88f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b818bf-71d4-4522-8d3f-15c878eb7e37,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64f833f4d007b1c57865048e2a12c28847749174860f85404dbb41db81394275,PodSandboxId:0ab18f470092e23df8cf385175c799a1b2b79a2324c1410600d8690a81238c48,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733347371687565212,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9llkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad
c8b2dd-be84-4314-ae3c-cfe94cc78489,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1063f60c44f77fe8422fb7b8c58af808cd51326cadbfa1dec9788a3e7485f6f3,PodSandboxId:83993fb0701cfa12e721b4ce3605387d10703cc038b9ba1afbcb7c3b8425ad26,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1733347371117010395,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z2n69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea030ab5-1808-4037-b153-e751d66f3882,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aad3bddff8032b4261c542d604f9fe24e2117ff0269a46d40cf665831b023c72,PodSandboxId:9c05ec903ba442eeda86096944b3cf7505edb03d755538f91b2f5373c2a31f5d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733347360246392094,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-534766,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c31d01d4c56c390227f2a5f70b72c51e,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58643fa31271933e70c61aa2c5f670ae8f2a6dc3a78ad9895cdca533a42e6fb2,PodSandboxId:8cf25439a775f5bf77fafa1511165707deae689d8c2b0e51224dfaf22cf659c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:173334736020862
0083,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-534766,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5066dfdfdc05d0bda5ea458e76e9e5,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cc79ab1f098441f66f0be96a5499595f2a4be05949a69b5e1eb2ebb797a679b,PodSandboxId:7c8e73432a85d16d1953a189c84a88c891e809a432d9ee21df058c91a50f3587,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733347360195193083,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-534766,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deb222f98e84815c8d0a8723a7bc263d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6131d95d46bd41cbaa97e7c6785d42c3edbd005b6afc99136c97d800c6f3f04e,PodSandboxId:a28b243929a7cc77c82693665dd501adfe6ce9cb410ff98b3039d8e9f122e08a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733347360186546923,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-534766,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096b1d9d76854415439286e3bb547dee,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3b4418ff9e994158450ed38887a6f43e999c88bab9970ab59f29c971431055d,PodSandboxId:c7ebe613bada24d0565bbfce662e0df572978e50bfc3cfc3e6a9a2f2178f2446,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733347072448285682,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-534766,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deb222f98e84815c8d0a8723a7bc263d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cc11f1c7-0f00-4891-b203-bb75037325a1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:37:06 no-preload-534766 crio[714]: time="2024-12-04 21:37:06.715775442Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d46cc0e2-c68a-4693-895c-0528da2e8b17 name=/runtime.v1.RuntimeService/Version
	Dec 04 21:37:06 no-preload-534766 crio[714]: time="2024-12-04 21:37:06.715857434Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d46cc0e2-c68a-4693-895c-0528da2e8b17 name=/runtime.v1.RuntimeService/Version
	Dec 04 21:37:06 no-preload-534766 crio[714]: time="2024-12-04 21:37:06.717397166Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=500969f1-6e81-4dd1-b783-89092ade34ff name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:37:06 no-preload-534766 crio[714]: time="2024-12-04 21:37:06.717841700Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348226717817336,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=500969f1-6e81-4dd1-b783-89092ade34ff name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:37:06 no-preload-534766 crio[714]: time="2024-12-04 21:37:06.718481748Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6863993a-5179-4c6b-8bfc-6009a014333f name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:37:06 no-preload-534766 crio[714]: time="2024-12-04 21:37:06.718558886Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6863993a-5179-4c6b-8bfc-6009a014333f name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:37:06 no-preload-534766 crio[714]: time="2024-12-04 21:37:06.718842886Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:76b3bd9ced1a719020189538a52bb5d0e0dd96bc909668dce4de1f9559f8b177,PodSandboxId:078786f92c1479654e245d21f17c4ed6585bfbbda77bff979f7ae9d326fb4f00,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733347371820032913,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38fa420a-4372-41b4-9853-64796baa65d9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3e6bc78060dc3d235fa2f136687007c8e923f81ac9457d1754a5faa54454a7e,PodSandboxId:ce2f25300d1826c51f021d1f55de433604c1ad3c83aee87be4a2fbf1d59af16f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733347371728068753,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zq88f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b818bf-71d4-4522-8d3f-15c878eb7e37,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64f833f4d007b1c57865048e2a12c28847749174860f85404dbb41db81394275,PodSandboxId:0ab18f470092e23df8cf385175c799a1b2b79a2324c1410600d8690a81238c48,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733347371687565212,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9llkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad
c8b2dd-be84-4314-ae3c-cfe94cc78489,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1063f60c44f77fe8422fb7b8c58af808cd51326cadbfa1dec9788a3e7485f6f3,PodSandboxId:83993fb0701cfa12e721b4ce3605387d10703cc038b9ba1afbcb7c3b8425ad26,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1733347371117010395,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z2n69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea030ab5-1808-4037-b153-e751d66f3882,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aad3bddff8032b4261c542d604f9fe24e2117ff0269a46d40cf665831b023c72,PodSandboxId:9c05ec903ba442eeda86096944b3cf7505edb03d755538f91b2f5373c2a31f5d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733347360246392094,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-534766,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c31d01d4c56c390227f2a5f70b72c51e,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58643fa31271933e70c61aa2c5f670ae8f2a6dc3a78ad9895cdca533a42e6fb2,PodSandboxId:8cf25439a775f5bf77fafa1511165707deae689d8c2b0e51224dfaf22cf659c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:173334736020862
0083,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-534766,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5066dfdfdc05d0bda5ea458e76e9e5,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cc79ab1f098441f66f0be96a5499595f2a4be05949a69b5e1eb2ebb797a679b,PodSandboxId:7c8e73432a85d16d1953a189c84a88c891e809a432d9ee21df058c91a50f3587,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733347360195193083,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-534766,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deb222f98e84815c8d0a8723a7bc263d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6131d95d46bd41cbaa97e7c6785d42c3edbd005b6afc99136c97d800c6f3f04e,PodSandboxId:a28b243929a7cc77c82693665dd501adfe6ce9cb410ff98b3039d8e9f122e08a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733347360186546923,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-534766,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096b1d9d76854415439286e3bb547dee,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3b4418ff9e994158450ed38887a6f43e999c88bab9970ab59f29c971431055d,PodSandboxId:c7ebe613bada24d0565bbfce662e0df572978e50bfc3cfc3e6a9a2f2178f2446,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733347072448285682,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-534766,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deb222f98e84815c8d0a8723a7bc263d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6863993a-5179-4c6b-8bfc-6009a014333f name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:37:06 no-preload-534766 crio[714]: time="2024-12-04 21:37:06.762060731Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a9900e35-aa17-40be-afb2-eda8cb3880af name=/runtime.v1.RuntimeService/Version
	Dec 04 21:37:06 no-preload-534766 crio[714]: time="2024-12-04 21:37:06.762141507Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a9900e35-aa17-40be-afb2-eda8cb3880af name=/runtime.v1.RuntimeService/Version
	Dec 04 21:37:06 no-preload-534766 crio[714]: time="2024-12-04 21:37:06.764462217Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=76a99013-9db4-4a16-bdcb-76b6a01758ad name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:37:06 no-preload-534766 crio[714]: time="2024-12-04 21:37:06.765086970Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348226765057077,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=76a99013-9db4-4a16-bdcb-76b6a01758ad name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:37:06 no-preload-534766 crio[714]: time="2024-12-04 21:37:06.765588339Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=54e22cc3-2b33-486c-a4e3-87e254a410fd name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:37:06 no-preload-534766 crio[714]: time="2024-12-04 21:37:06.765666064Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=54e22cc3-2b33-486c-a4e3-87e254a410fd name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:37:06 no-preload-534766 crio[714]: time="2024-12-04 21:37:06.765935140Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:76b3bd9ced1a719020189538a52bb5d0e0dd96bc909668dce4de1f9559f8b177,PodSandboxId:078786f92c1479654e245d21f17c4ed6585bfbbda77bff979f7ae9d326fb4f00,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733347371820032913,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38fa420a-4372-41b4-9853-64796baa65d9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3e6bc78060dc3d235fa2f136687007c8e923f81ac9457d1754a5faa54454a7e,PodSandboxId:ce2f25300d1826c51f021d1f55de433604c1ad3c83aee87be4a2fbf1d59af16f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733347371728068753,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zq88f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b818bf-71d4-4522-8d3f-15c878eb7e37,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64f833f4d007b1c57865048e2a12c28847749174860f85404dbb41db81394275,PodSandboxId:0ab18f470092e23df8cf385175c799a1b2b79a2324c1410600d8690a81238c48,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733347371687565212,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9llkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad
c8b2dd-be84-4314-ae3c-cfe94cc78489,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1063f60c44f77fe8422fb7b8c58af808cd51326cadbfa1dec9788a3e7485f6f3,PodSandboxId:83993fb0701cfa12e721b4ce3605387d10703cc038b9ba1afbcb7c3b8425ad26,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1733347371117010395,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z2n69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea030ab5-1808-4037-b153-e751d66f3882,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aad3bddff8032b4261c542d604f9fe24e2117ff0269a46d40cf665831b023c72,PodSandboxId:9c05ec903ba442eeda86096944b3cf7505edb03d755538f91b2f5373c2a31f5d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733347360246392094,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-534766,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c31d01d4c56c390227f2a5f70b72c51e,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58643fa31271933e70c61aa2c5f670ae8f2a6dc3a78ad9895cdca533a42e6fb2,PodSandboxId:8cf25439a775f5bf77fafa1511165707deae689d8c2b0e51224dfaf22cf659c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:173334736020862
0083,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-534766,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5066dfdfdc05d0bda5ea458e76e9e5,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cc79ab1f098441f66f0be96a5499595f2a4be05949a69b5e1eb2ebb797a679b,PodSandboxId:7c8e73432a85d16d1953a189c84a88c891e809a432d9ee21df058c91a50f3587,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733347360195193083,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-534766,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deb222f98e84815c8d0a8723a7bc263d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6131d95d46bd41cbaa97e7c6785d42c3edbd005b6afc99136c97d800c6f3f04e,PodSandboxId:a28b243929a7cc77c82693665dd501adfe6ce9cb410ff98b3039d8e9f122e08a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733347360186546923,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-534766,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096b1d9d76854415439286e3bb547dee,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3b4418ff9e994158450ed38887a6f43e999c88bab9970ab59f29c971431055d,PodSandboxId:c7ebe613bada24d0565bbfce662e0df572978e50bfc3cfc3e6a9a2f2178f2446,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733347072448285682,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-534766,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deb222f98e84815c8d0a8723a7bc263d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=54e22cc3-2b33-486c-a4e3-87e254a410fd name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:37:06 no-preload-534766 crio[714]: time="2024-12-04 21:37:06.809256672Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=11bcba0a-e8c8-405d-8e4c-7a592b22b01d name=/runtime.v1.RuntimeService/Version
	Dec 04 21:37:06 no-preload-534766 crio[714]: time="2024-12-04 21:37:06.809395021Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=11bcba0a-e8c8-405d-8e4c-7a592b22b01d name=/runtime.v1.RuntimeService/Version
	Dec 04 21:37:06 no-preload-534766 crio[714]: time="2024-12-04 21:37:06.811115892Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4edfbebc-d6ea-432f-b3c9-e443f31d3abb name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:37:06 no-preload-534766 crio[714]: time="2024-12-04 21:37:06.811784155Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348226811494197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4edfbebc-d6ea-432f-b3c9-e443f31d3abb name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:37:06 no-preload-534766 crio[714]: time="2024-12-04 21:37:06.812344988Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0e1df7bf-db2d-4250-9351-c621a3a302cc name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:37:06 no-preload-534766 crio[714]: time="2024-12-04 21:37:06.812440451Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0e1df7bf-db2d-4250-9351-c621a3a302cc name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:37:06 no-preload-534766 crio[714]: time="2024-12-04 21:37:06.812839759Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:76b3bd9ced1a719020189538a52bb5d0e0dd96bc909668dce4de1f9559f8b177,PodSandboxId:078786f92c1479654e245d21f17c4ed6585bfbbda77bff979f7ae9d326fb4f00,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733347371820032913,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38fa420a-4372-41b4-9853-64796baa65d9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3e6bc78060dc3d235fa2f136687007c8e923f81ac9457d1754a5faa54454a7e,PodSandboxId:ce2f25300d1826c51f021d1f55de433604c1ad3c83aee87be4a2fbf1d59af16f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733347371728068753,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zq88f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b818bf-71d4-4522-8d3f-15c878eb7e37,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64f833f4d007b1c57865048e2a12c28847749174860f85404dbb41db81394275,PodSandboxId:0ab18f470092e23df8cf385175c799a1b2b79a2324c1410600d8690a81238c48,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733347371687565212,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9llkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad
c8b2dd-be84-4314-ae3c-cfe94cc78489,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1063f60c44f77fe8422fb7b8c58af808cd51326cadbfa1dec9788a3e7485f6f3,PodSandboxId:83993fb0701cfa12e721b4ce3605387d10703cc038b9ba1afbcb7c3b8425ad26,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1733347371117010395,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z2n69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea030ab5-1808-4037-b153-e751d66f3882,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aad3bddff8032b4261c542d604f9fe24e2117ff0269a46d40cf665831b023c72,PodSandboxId:9c05ec903ba442eeda86096944b3cf7505edb03d755538f91b2f5373c2a31f5d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733347360246392094,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-534766,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c31d01d4c56c390227f2a5f70b72c51e,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58643fa31271933e70c61aa2c5f670ae8f2a6dc3a78ad9895cdca533a42e6fb2,PodSandboxId:8cf25439a775f5bf77fafa1511165707deae689d8c2b0e51224dfaf22cf659c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:173334736020862
0083,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-534766,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5066dfdfdc05d0bda5ea458e76e9e5,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cc79ab1f098441f66f0be96a5499595f2a4be05949a69b5e1eb2ebb797a679b,PodSandboxId:7c8e73432a85d16d1953a189c84a88c891e809a432d9ee21df058c91a50f3587,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733347360195193083,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-534766,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deb222f98e84815c8d0a8723a7bc263d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6131d95d46bd41cbaa97e7c6785d42c3edbd005b6afc99136c97d800c6f3f04e,PodSandboxId:a28b243929a7cc77c82693665dd501adfe6ce9cb410ff98b3039d8e9f122e08a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733347360186546923,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-534766,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096b1d9d76854415439286e3bb547dee,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3b4418ff9e994158450ed38887a6f43e999c88bab9970ab59f29c971431055d,PodSandboxId:c7ebe613bada24d0565bbfce662e0df572978e50bfc3cfc3e6a9a2f2178f2446,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733347072448285682,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-534766,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deb222f98e84815c8d0a8723a7bc263d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0e1df7bf-db2d-4250-9351-c621a3a302cc name=/runtime.v1.RuntimeService/ListContainers
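	The debug-level Request/Response pairs above are CRI-O answering the kubelet's periodic CRI calls (Version, ImageFsInfo, ListContainers). Assuming the node runs CRI-O under systemd, the same stream could be pulled directly on the host roughly as follows; the commands are illustrative, not taken from the test run.
	# Assumes a systemd-managed crio unit on the node; shown for illustration only.
	sudo journalctl -u crio --since "10 min ago" --no-pager | grep ListContainers
	sudo crictl info    # runtime status and configuration summary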
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	76b3bd9ced1a7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   078786f92c147       storage-provisioner
	b3e6bc78060dc       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   14 minutes ago      Running             coredns                   0                   ce2f25300d182       coredns-7c65d6cfc9-zq88f
	64f833f4d007b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   14 minutes ago      Running             coredns                   0                   0ab18f470092e       coredns-7c65d6cfc9-9llkt
	1063f60c44f77       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   14 minutes ago      Running             kube-proxy                0                   83993fb0701cf       kube-proxy-z2n69
	aad3bddff8032       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   14 minutes ago      Running             kube-controller-manager   2                   9c05ec903ba44       kube-controller-manager-no-preload-534766
	58643fa312719       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   14 minutes ago      Running             kube-scheduler            2                   8cf25439a775f       kube-scheduler-no-preload-534766
	6cc79ab1f0984       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   14 minutes ago      Running             kube-apiserver            2                   7c8e73432a85d       kube-apiserver-no-preload-534766
	6131d95d46bd4       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   14 minutes ago      Running             etcd                      2                   a28b243929a7c       etcd-no-preload-534766
	b3b4418ff9e99       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   19 minutes ago      Exited              kube-apiserver            1                   c7ebe613bada2       kube-apiserver-no-preload-534766
	
	
	==> coredns [64f833f4d007b1c57865048e2a12c28847749174860f85404dbb41db81394275] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [b3e6bc78060dc3d235fa2f136687007c8e923f81ac9457d1754a5faa54454a7e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               no-preload-534766
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-534766
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59
	                    minikube.k8s.io/name=no-preload-534766
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_04T21_22_46_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 21:22:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-534766
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 21:37:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 21:33:08 +0000   Wed, 04 Dec 2024 21:22:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 21:33:08 +0000   Wed, 04 Dec 2024 21:22:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 21:33:08 +0000   Wed, 04 Dec 2024 21:22:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 21:33:08 +0000   Wed, 04 Dec 2024 21:22:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.174
	  Hostname:    no-preload-534766
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3d48f9a54064422eb8005869b2034bb5
	  System UUID:                3d48f9a5-4064-422e-b800-5869b2034bb5
	  Boot ID:                    80129728-9a7d-44f2-b7ef-36ede7cef093
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-9llkt                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7c65d6cfc9-zq88f                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-534766                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-534766             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-534766    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-z2n69                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-534766             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-6867b74b74-24lj8              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 14m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node no-preload-534766 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node no-preload-534766 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node no-preload-534766 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m   node-controller  Node no-preload-534766 event: Registered Node no-preload-534766 in Controller
	
	
	==> dmesg <==
	[  +0.057234] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.046080] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.030899] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.024714] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.621995] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.150754] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.068103] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065168] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.212692] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +0.132880] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +0.285285] systemd-fstab-generator[704]: Ignoring "noauto" option for root device
	[ +15.316815] systemd-fstab-generator[1311]: Ignoring "noauto" option for root device
	[  +0.058393] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.498156] systemd-fstab-generator[1429]: Ignoring "noauto" option for root device
	[  +4.584693] kauditd_printk_skb: 100 callbacks suppressed
	[Dec 4 21:18] kauditd_printk_skb: 85 callbacks suppressed
	[Dec 4 21:22] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.255384] systemd-fstab-generator[3121]: Ignoring "noauto" option for root device
	[  +4.585208] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.481448] systemd-fstab-generator[3446]: Ignoring "noauto" option for root device
	[  +4.857162] systemd-fstab-generator[3552]: Ignoring "noauto" option for root device
	[  +0.097052] kauditd_printk_skb: 14 callbacks suppressed
	[Dec 4 21:24] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [6131d95d46bd41cbaa97e7c6785d42c3edbd005b6afc99136c97d800c6f3f04e] <==
	{"level":"info","ts":"2024-12-04T21:22:40.684920Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2d81e878ac6904a4 became leader at term 2"}
	{"level":"info","ts":"2024-12-04T21:22:40.684927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2d81e878ac6904a4 elected leader 2d81e878ac6904a4 at term 2"}
	{"level":"info","ts":"2024-12-04T21:22:40.689093Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-04T21:22:40.692054Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"2d81e878ac6904a4","local-member-attributes":"{Name:no-preload-534766 ClientURLs:[https://192.168.61.174:2379]}","request-path":"/0/members/2d81e878ac6904a4/attributes","cluster-id":"98a332d8ef0073ef","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-04T21:22:40.692105Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-04T21:22:40.692444Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-04T21:22:40.693453Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-04T21:22:40.694231Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.174:2379"}
	{"level":"info","ts":"2024-12-04T21:22:40.694876Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"98a332d8ef0073ef","local-member-id":"2d81e878ac6904a4","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-04T21:22:40.694964Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-04T21:22:40.695007Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-04T21:22:40.695269Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-04T21:22:40.695291Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-04T21:22:40.698901Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-04T21:22:40.700602Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-04T21:32:41.102043Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":683}
	{"level":"info","ts":"2024-12-04T21:32:41.112382Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":683,"took":"9.94355ms","hash":3140892723,"current-db-size-bytes":2359296,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2359296,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-12-04T21:32:41.113498Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3140892723,"revision":683,"compact-revision":-1}
	{"level":"info","ts":"2024-12-04T21:36:52.819183Z","caller":"traceutil/trace.go:171","msg":"trace[106936393] transaction","detail":"{read_only:false; response_revision:1131; number_of_response:1; }","duration":"374.259937ms","start":"2024-12-04T21:36:52.444874Z","end":"2024-12-04T21:36:52.819134Z","steps":["trace[106936393] 'process raft request'  (duration: 373.983896ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T21:36:52.821412Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-04T21:36:52.444859Z","time spent":"375.010007ms","remote":"127.0.0.1:42790","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1130 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-12-04T21:36:53.048503Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.830595ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-04T21:36:53.049095Z","caller":"traceutil/trace.go:171","msg":"trace[697336510] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1131; }","duration":"153.365419ms","start":"2024-12-04T21:36:52.895633Z","end":"2024-12-04T21:36:53.048998Z","steps":["trace[697336510] 'range keys from in-memory index tree'  (duration: 152.756338ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T21:36:54.534030Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.95552ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-04T21:36:54.534750Z","caller":"traceutil/trace.go:171","msg":"trace[1243560527] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1132; }","duration":"126.680962ms","start":"2024-12-04T21:36:54.408012Z","end":"2024-12-04T21:36:54.534693Z","steps":["trace[1243560527] 'range keys from in-memory index tree'  (duration: 125.944267ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-04T21:36:54.534059Z","caller":"traceutil/trace.go:171","msg":"trace[254003712] transaction","detail":"{read_only:false; response_revision:1133; number_of_response:1; }","duration":"189.82842ms","start":"2024-12-04T21:36:54.344208Z","end":"2024-12-04T21:36:54.534036Z","steps":["trace[254003712] 'process raft request'  (duration: 124.484233ms)","trace[254003712] 'compare'  (duration: 64.883874ms)"],"step_count":2}
	
	
	==> kernel <==
	 21:37:07 up 19 min,  0 users,  load average: 0.24, 0.20, 0.17
	Linux no-preload-534766 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6cc79ab1f098441f66f0be96a5499595f2a4be05949a69b5e1eb2ebb797a679b] <==
	E1204 21:32:43.631269       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1204 21:32:43.631325       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1204 21:32:43.632557       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1204 21:32:43.632663       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1204 21:33:43.633195       1 handler_proxy.go:99] no RequestInfo found in the context
	E1204 21:33:43.633361       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1204 21:33:43.633471       1 handler_proxy.go:99] no RequestInfo found in the context
	E1204 21:33:43.633502       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1204 21:33:43.634517       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1204 21:33:43.634635       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1204 21:35:43.635441       1 handler_proxy.go:99] no RequestInfo found in the context
	E1204 21:35:43.635655       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1204 21:35:43.635772       1 handler_proxy.go:99] no RequestInfo found in the context
	E1204 21:35:43.635816       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1204 21:35:43.636798       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1204 21:35:43.636856       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [b3b4418ff9e994158450ed38887a6f43e999c88bab9970ab59f29c971431055d] <==
	W1204 21:22:32.662449       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:32.697477       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:32.707208       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:32.735048       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:32.888521       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:32.900112       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:32.910812       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:32.917452       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:32.948866       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:32.967444       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:33.029951       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:33.050607       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:33.092421       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:33.125369       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:33.139231       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:33.160049       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:33.247254       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:33.288385       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:33.367164       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:34.375157       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:35.376590       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:36.923090       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:37.156071       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:37.397993       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 21:22:37.423062       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [aad3bddff8032b4261c542d604f9fe24e2117ff0269a46d40cf665831b023c72] <==
	E1204 21:31:49.676406       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:31:50.260999       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:32:19.681900       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:32:20.268940       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:32:49.690156       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:32:50.276346       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1204 21:33:08.346052       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-534766"
	E1204 21:33:19.697476       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:33:20.284813       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:33:49.703550       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:33:50.297618       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1204 21:33:57.402204       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="157.668µs"
	I1204 21:34:12.398136       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="66.71µs"
	E1204 21:34:19.709263       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:34:20.304240       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:34:49.716466       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:34:50.312670       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:35:19.724324       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:35:20.321480       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:35:49.731460       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:35:50.332066       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:36:19.737202       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:36:20.342297       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1204 21:36:49.745863       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1204 21:36:50.352896       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [1063f60c44f77fe8422fb7b8c58af808cd51326cadbfa1dec9788a3e7485f6f3] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1204 21:22:52.025892       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1204 21:22:52.040052       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.174"]
	E1204 21:22:52.040146       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1204 21:22:52.095497       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1204 21:22:52.095610       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1204 21:22:52.095655       1 server_linux.go:169] "Using iptables Proxier"
	I1204 21:22:52.097998       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1204 21:22:52.098341       1 server.go:483] "Version info" version="v1.31.2"
	I1204 21:22:52.098389       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 21:22:52.099975       1 config.go:199] "Starting service config controller"
	I1204 21:22:52.100035       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1204 21:22:52.100097       1 config.go:105] "Starting endpoint slice config controller"
	I1204 21:22:52.100114       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1204 21:22:52.100655       1 config.go:328] "Starting node config controller"
	I1204 21:22:52.104535       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1204 21:22:52.200929       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1204 21:22:52.201059       1 shared_informer.go:320] Caches are synced for service config
	I1204 21:22:52.205288       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [58643fa31271933e70c61aa2c5f670ae8f2a6dc3a78ad9895cdca533a42e6fb2] <==
	W1204 21:22:42.713479       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1204 21:22:42.713693       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1204 21:22:42.713890       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1204 21:22:42.713993       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 21:22:43.552124       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1204 21:22:43.552173       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1204 21:22:43.635367       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1204 21:22:43.635417       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 21:22:43.652424       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1204 21:22:43.652562       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 21:22:43.672299       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1204 21:22:43.672446       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 21:22:43.713971       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1204 21:22:43.714080       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1204 21:22:43.741151       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1204 21:22:43.741409       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 21:22:43.825359       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1204 21:22:43.825454       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1204 21:22:43.849536       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1204 21:22:43.849660       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 21:22:43.900262       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1204 21:22:43.900399       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 21:22:43.922958       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1204 21:22:43.923088       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1204 21:22:46.672653       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 04 21:35:55 no-preload-534766 kubelet[3453]: E1204 21:35:55.604436    3453 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348155603994917,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:36:05 no-preload-534766 kubelet[3453]: E1204 21:36:05.605416    3453 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348165605187278,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:36:05 no-preload-534766 kubelet[3453]: E1204 21:36:05.605451    3453 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348165605187278,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:36:09 no-preload-534766 kubelet[3453]: E1204 21:36:09.383669    3453 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-24lj8" podUID="1e4467c4-301a-4820-ab89-e1f0ba78f62d"
	Dec 04 21:36:15 no-preload-534766 kubelet[3453]: E1204 21:36:15.608361    3453 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348175606890457,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:36:15 no-preload-534766 kubelet[3453]: E1204 21:36:15.608617    3453 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348175606890457,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:36:23 no-preload-534766 kubelet[3453]: E1204 21:36:23.384572    3453 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-24lj8" podUID="1e4467c4-301a-4820-ab89-e1f0ba78f62d"
	Dec 04 21:36:25 no-preload-534766 kubelet[3453]: E1204 21:36:25.610218    3453 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348185609873843,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:36:25 no-preload-534766 kubelet[3453]: E1204 21:36:25.610593    3453 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348185609873843,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:36:35 no-preload-534766 kubelet[3453]: E1204 21:36:35.612678    3453 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348195611651523,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:36:35 no-preload-534766 kubelet[3453]: E1204 21:36:35.613266    3453 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348195611651523,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:36:38 no-preload-534766 kubelet[3453]: E1204 21:36:38.384558    3453 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-24lj8" podUID="1e4467c4-301a-4820-ab89-e1f0ba78f62d"
	Dec 04 21:36:45 no-preload-534766 kubelet[3453]: E1204 21:36:45.406008    3453 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 04 21:36:45 no-preload-534766 kubelet[3453]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 04 21:36:45 no-preload-534766 kubelet[3453]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 04 21:36:45 no-preload-534766 kubelet[3453]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 04 21:36:45 no-preload-534766 kubelet[3453]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 04 21:36:45 no-preload-534766 kubelet[3453]: E1204 21:36:45.614794    3453 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348205614443748,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:36:45 no-preload-534766 kubelet[3453]: E1204 21:36:45.614835    3453 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348205614443748,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:36:53 no-preload-534766 kubelet[3453]: E1204 21:36:53.384346    3453 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-24lj8" podUID="1e4467c4-301a-4820-ab89-e1f0ba78f62d"
	Dec 04 21:36:55 no-preload-534766 kubelet[3453]: E1204 21:36:55.621662    3453 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348215620311624,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:36:55 no-preload-534766 kubelet[3453]: E1204 21:36:55.622427    3453 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348215620311624,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:37:04 no-preload-534766 kubelet[3453]: E1204 21:37:04.384676    3453 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-24lj8" podUID="1e4467c4-301a-4820-ab89-e1f0ba78f62d"
	Dec 04 21:37:05 no-preload-534766 kubelet[3453]: E1204 21:37:05.623956    3453 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348225623659113,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 21:37:05 no-preload-534766 kubelet[3453]: E1204 21:37:05.624000    3453 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348225623659113,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [76b3bd9ced1a719020189538a52bb5d0e0dd96bc909668dce4de1f9559f8b177] <==
	I1204 21:22:52.040646       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1204 21:22:52.052022       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1204 21:22:52.052085       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1204 21:22:52.064452       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1204 21:22:52.064892       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-534766_c7c58aff-5f40-4ff8-b1bf-dd8c5a8db5ab!
	I1204 21:22:52.067548       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"17dbdf22-1124-494f-b401-be5667445614", APIVersion:"v1", ResourceVersion:"396", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-534766_c7c58aff-5f40-4ff8-b1bf-dd8c5a8db5ab became leader
	I1204 21:22:52.165664       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-534766_c7c58aff-5f40-4ff8-b1bf-dd8c5a8db5ab!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-534766 -n no-preload-534766
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-534766 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-24lj8
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-534766 describe pod metrics-server-6867b74b74-24lj8
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-534766 describe pod metrics-server-6867b74b74-24lj8: exit status 1 (72.458743ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-24lj8" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-534766 describe pod metrics-server-6867b74b74-24lj8: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (305.21s)
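The post-mortem above reports exactly one non-running pod, metrics-server-6867b74b74-24lj8, and the kubelet log shows it stuck in ImagePullBackOff on the placeholder image fake.domain/registry.k8s.io/echoserver:1.4. A minimal manual check of the addon's state on a live cluster could look like the sketch below; the deployment name metrics-server and the k8s-app=metrics-server label are assumptions inferred from the ReplicaSet name in the controller-manager log, not values confirmed by this report.

	# assumed deployment name; mirrors the --context used by the test harness
	kubectl --context no-preload-534766 -n kube-system get deploy metrics-server -o wide
	# assumed label selector for the metrics-server addon pods
	kubectl --context no-preload-534766 -n kube-system get pods -l k8s-app=metrics-server
	# recent events for the (assumed) selected pods, e.g. image pull failures
	kubectl --context no-preload-534766 -n kube-system describe pod -l k8s-app=metrics-server | tail -n 20

If the configured image really is the fake.domain placeholder seen in the kubelet log, the pull can never succeed, which would keep the pod unready for the entire wait window.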

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (129.7s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
E1204 21:34:38.216241   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kindnet-272234/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
E1204 21:34:52.903233   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
E1204 21:35:15.225349   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
E1204 21:35:29.350072   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/functional-763517/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
E1204 21:35:47.024613   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/enable-default-cni-272234/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.180:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.180:8443: connect: connection refused
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-082859 -n old-k8s-version-082859
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-082859 -n old-k8s-version-082859: exit status 2 (232.989183ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-082859" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-082859 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-082859 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.765µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-082859 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
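The 9m0s wait recorded above repeatedly lists pods matching the label selector k8s-app=kubernetes-dashboard and retries on transient errors such as the connection-refused responses logged while the apiserver was down. A minimal sketch of that kind of poll with client-go (hypothetical code, not the test's own helper; assumes a kubeconfig at the default location):

	// Sketch: wait until a pod matching k8s-app=kubernetes-dashboard in the
	// kubernetes-dashboard namespace reaches Running, retrying on transient
	// errors instead of failing immediately.
	package main

	import (
		"context"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
		defer cancel()

		err = wait.PollUntilContextCancel(ctx, 5*time.Second, true, func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kubernetes-dashboard",
			})
			if err != nil {
				// Keep polling on transient errors (e.g. connection refused).
				log.Println("WARNING: pod list returned:", err)
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
		if err != nil {
			log.Fatal("pod did not reach Running before the deadline: ", err)
		}
	}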
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-082859 -n old-k8s-version-082859
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-082859 -n old-k8s-version-082859: exit status 2 (222.366121ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-082859 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-082859 logs -n 25: (1.532791306s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-272234 sudo                                  | bridge-272234                | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo                                  | bridge-272234                | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo find                             | bridge-272234                | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-272234 sudo crio                             | bridge-272234                | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-272234                                       | bridge-272234                | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:07 UTC |
	| start   | -p embed-certs-566991                                  | embed-certs-566991           | jenkins | v1.34.0 | 04 Dec 24 21:07 UTC | 04 Dec 24 21:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p pause-998149                                        | pause-998149                 | jenkins | v1.34.0 | 04 Dec 24 21:08 UTC | 04 Dec 24 21:08 UTC |
	| delete  | -p                                                     | disable-driver-mounts-455559 | jenkins | v1.34.0 | 04 Dec 24 21:08 UTC | 04 Dec 24 21:08 UTC |
	|         | disable-driver-mounts-455559                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-439360 | jenkins | v1.34.0 | 04 Dec 24 21:08 UTC | 04 Dec 24 21:10 UTC |
	|         | default-k8s-diff-port-439360                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-534766             | no-preload-534766            | jenkins | v1.34.0 | 04 Dec 24 21:08 UTC | 04 Dec 24 21:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-534766                                   | no-preload-534766            | jenkins | v1.34.0 | 04 Dec 24 21:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-566991            | embed-certs-566991           | jenkins | v1.34.0 | 04 Dec 24 21:09 UTC | 04 Dec 24 21:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-566991                                  | embed-certs-566991           | jenkins | v1.34.0 | 04 Dec 24 21:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-439360  | default-k8s-diff-port-439360 | jenkins | v1.34.0 | 04 Dec 24 21:10 UTC | 04 Dec 24 21:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-439360 | jenkins | v1.34.0 | 04 Dec 24 21:10 UTC |                     |
	|         | default-k8s-diff-port-439360                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-082859        | old-k8s-version-082859       | jenkins | v1.34.0 | 04 Dec 24 21:10 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-534766                  | no-preload-534766            | jenkins | v1.34.0 | 04 Dec 24 21:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-534766                                   | no-preload-534766            | jenkins | v1.34.0 | 04 Dec 24 21:11 UTC | 04 Dec 24 21:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-566991                 | embed-certs-566991           | jenkins | v1.34.0 | 04 Dec 24 21:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-566991                                  | embed-certs-566991           | jenkins | v1.34.0 | 04 Dec 24 21:11 UTC | 04 Dec 24 21:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-082859                              | old-k8s-version-082859       | jenkins | v1.34.0 | 04 Dec 24 21:12 UTC | 04 Dec 24 21:12 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-082859             | old-k8s-version-082859       | jenkins | v1.34.0 | 04 Dec 24 21:12 UTC | 04 Dec 24 21:12 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-082859                              | old-k8s-version-082859       | jenkins | v1.34.0 | 04 Dec 24 21:12 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-439360       | default-k8s-diff-port-439360 | jenkins | v1.34.0 | 04 Dec 24 21:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-439360 | jenkins | v1.34.0 | 04 Dec 24 21:13 UTC | 04 Dec 24 21:22 UTC |
	|         | default-k8s-diff-port-439360                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
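	For reference, the last start entry in the table (profile default-k8s-diff-port-439360) collapses to the single command line below once its wrapped flag cells are joined. This is reconstructed purely from the table rows above; the CI run invokes the locally built binary (MINIKUBE_BIN=out/minikube-linux-amd64 in the log that follows), written here simply as minikube:
	
	    minikube start -p default-k8s-diff-port-439360 --memory=2200 \
	      --alsologtostderr --wait=true --apiserver-port=8444 \
	      --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.31.2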
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 21:13:02
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 21:13:02.655619   75746 out.go:345] Setting OutFile to fd 1 ...
	I1204 21:13:02.655710   75746 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 21:13:02.655718   75746 out.go:358] Setting ErrFile to fd 2...
	I1204 21:13:02.655723   75746 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 21:13:02.655904   75746 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19985-10581/.minikube/bin
	I1204 21:13:02.656414   75746 out.go:352] Setting JSON to false
	I1204 21:13:02.657264   75746 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6933,"bootTime":1733339850,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 21:13:02.657344   75746 start.go:139] virtualization: kvm guest
	I1204 21:13:02.659898   75746 out.go:177] * [default-k8s-diff-port-439360] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1204 21:13:02.661012   75746 notify.go:220] Checking for updates...
	I1204 21:13:02.661028   75746 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 21:13:02.662162   75746 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 21:13:02.663271   75746 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:13:02.664514   75746 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 21:13:02.665529   75746 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1204 21:13:02.666701   75746 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 21:13:02.668263   75746 config.go:182] Loaded profile config "default-k8s-diff-port-439360": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:13:02.668646   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:13:02.668709   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:13:02.683257   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37479
	I1204 21:13:02.683722   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:13:02.684324   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:13:02.684360   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:13:02.684680   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:13:02.684851   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:13:02.685048   75746 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 21:13:02.685299   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:13:02.685328   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:13:02.699267   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40025
	I1204 21:13:02.699662   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:13:02.700044   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:13:02.700063   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:13:02.700339   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:13:02.700502   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:13:02.730706   75746 out.go:177] * Using the kvm2 driver based on existing profile
	I1204 21:13:02.731942   75746 start.go:297] selected driver: kvm2
	I1204 21:13:02.731957   75746 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-439360 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-439360 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.171 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:13:02.732071   75746 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 21:13:02.732753   75746 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 21:13:02.732853   75746 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19985-10581/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1204 21:13:02.748280   75746 install.go:137] /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1204 21:13:02.748697   75746 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 21:13:02.748732   75746 cni.go:84] Creating CNI manager for ""
	I1204 21:13:02.748788   75746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:13:02.748838   75746 start.go:340] cluster config:
	{Name:default-k8s-diff-port-439360 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-439360 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.171 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:13:02.748971   75746 iso.go:125] acquiring lock: {Name:mk5fb0f3f6da76e6cd812291a551e1592ef2c232 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 21:13:02.751358   75746 out.go:177] * Starting "default-k8s-diff-port-439360" primary control-plane node in "default-k8s-diff-port-439360" cluster
	I1204 21:13:03.539616   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:02.752513   75746 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 21:13:02.752549   75746 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1204 21:13:02.752560   75746 cache.go:56] Caching tarball of preloaded images
	I1204 21:13:02.752626   75746 preload.go:172] Found /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 21:13:02.752637   75746 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 21:13:02.752726   75746 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/config.json ...
	I1204 21:13:02.752901   75746 start.go:360] acquireMachinesLock for default-k8s-diff-port-439360: {Name:mkf124e8b45170ae95981b24944344de6899c5b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 21:13:09.623601   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:12.691589   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:18.771784   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:21.843699   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:27.923631   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:30.995665   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:37.075628   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:40.147824   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:46.227603   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:49.299635   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:55.379675   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:13:58.451727   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:04.531657   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:07.603570   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:13.683599   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:16.755604   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:22.835628   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:25.907600   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:31.987633   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:35.059714   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:41.139700   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:44.211695   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:50.291687   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:53.363678   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:14:59.443630   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:02.515651   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:08.595690   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:11.667672   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:17.747590   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:20.819699   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:26.899677   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:29.971649   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:36.051731   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:39.123728   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:45.203625   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:48.275712   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:54.355623   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:15:57.427671   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:16:03.507649   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:16:06.579624   75012 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.174:22: connect: no route to host
	I1204 21:16:09.584575   75137 start.go:364] duration metric: took 4m27.4731498s to acquireMachinesLock for "embed-certs-566991"
	I1204 21:16:09.584639   75137 start.go:96] Skipping create...Using existing machine configuration
	I1204 21:16:09.584651   75137 fix.go:54] fixHost starting: 
	I1204 21:16:09.584970   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:09.585018   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:09.600429   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33355
	I1204 21:16:09.600893   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:09.601299   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:09.601322   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:09.601748   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:09.601944   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:09.602098   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetState
	I1204 21:16:09.603776   75137 fix.go:112] recreateIfNeeded on embed-certs-566991: state=Stopped err=<nil>
	I1204 21:16:09.603821   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	W1204 21:16:09.603991   75137 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 21:16:09.605822   75137 out.go:177] * Restarting existing kvm2 VM for "embed-certs-566991" ...
	I1204 21:16:09.606942   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Start
	I1204 21:16:09.607117   75137 main.go:141] libmachine: (embed-certs-566991) Ensuring networks are active...
	I1204 21:16:09.607926   75137 main.go:141] libmachine: (embed-certs-566991) Ensuring network default is active
	I1204 21:16:09.608276   75137 main.go:141] libmachine: (embed-certs-566991) Ensuring network mk-embed-certs-566991 is active
	I1204 21:16:09.608593   75137 main.go:141] libmachine: (embed-certs-566991) Getting domain xml...
	I1204 21:16:09.609171   75137 main.go:141] libmachine: (embed-certs-566991) Creating domain...
	I1204 21:16:10.794377   75137 main.go:141] libmachine: (embed-certs-566991) Waiting to get IP...
	I1204 21:16:10.795237   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:10.795646   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:10.795708   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:10.795615   76397 retry.go:31] will retry after 263.432891ms: waiting for machine to come up
	I1204 21:16:11.061505   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:11.062003   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:11.062025   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:11.061954   76397 retry.go:31] will retry after 341.684416ms: waiting for machine to come up
	I1204 21:16:11.405560   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:11.405994   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:11.406017   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:11.405951   76397 retry.go:31] will retry after 341.63707ms: waiting for machine to come up
	I1204 21:16:11.749439   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:11.749826   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:11.749850   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:11.749778   76397 retry.go:31] will retry after 490.222458ms: waiting for machine to come up
	I1204 21:16:09.581932   75012 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 21:16:09.581966   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetMachineName
	I1204 21:16:09.582325   75012 buildroot.go:166] provisioning hostname "no-preload-534766"
	I1204 21:16:09.582349   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetMachineName
	I1204 21:16:09.582554   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:16:09.584435   75012 machine.go:96] duration metric: took 4m37.423343939s to provisionDockerMachine
	I1204 21:16:09.584470   75012 fix.go:56] duration metric: took 4m37.445106567s for fixHost
	I1204 21:16:09.584480   75012 start.go:83] releasing machines lock for "no-preload-534766", held for 4m37.445131562s
	W1204 21:16:09.584500   75012 start.go:714] error starting host: provision: host is not running
	W1204 21:16:09.584581   75012 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1204 21:16:09.584594   75012 start.go:729] Will try again in 5 seconds ...
	I1204 21:16:12.241487   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:12.241955   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:12.241989   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:12.241914   76397 retry.go:31] will retry after 627.236105ms: waiting for machine to come up
	I1204 21:16:12.870753   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:12.871242   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:12.871274   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:12.871189   76397 retry.go:31] will retry after 948.655869ms: waiting for machine to come up
	I1204 21:16:13.821128   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:13.821501   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:13.821531   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:13.821464   76397 retry.go:31] will retry after 864.328477ms: waiting for machine to come up
	I1204 21:16:14.686831   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:14.687290   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:14.687327   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:14.687226   76397 retry.go:31] will retry after 1.040036387s: waiting for machine to come up
	I1204 21:16:15.729503   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:15.729908   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:15.729938   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:15.729856   76397 retry.go:31] will retry after 1.509456429s: waiting for machine to come up
	I1204 21:16:14.587018   75012 start.go:360] acquireMachinesLock for no-preload-534766: {Name:mkf124e8b45170ae95981b24944344de6899c5b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 21:16:17.240459   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:17.240912   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:17.240936   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:17.240859   76397 retry.go:31] will retry after 2.13583357s: waiting for machine to come up
	I1204 21:16:19.379267   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:19.379766   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:19.379792   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:19.379718   76397 retry.go:31] will retry after 2.09795045s: waiting for machine to come up
	I1204 21:16:21.478897   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:21.479356   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:21.479410   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:21.479302   76397 retry.go:31] will retry after 2.903986335s: waiting for machine to come up
	I1204 21:16:24.386386   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:24.386732   75137 main.go:141] libmachine: (embed-certs-566991) DBG | unable to find current IP address of domain embed-certs-566991 in network mk-embed-certs-566991
	I1204 21:16:24.386760   75137 main.go:141] libmachine: (embed-certs-566991) DBG | I1204 21:16:24.386707   76397 retry.go:31] will retry after 2.772485684s: waiting for machine to come up
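	The repeated "will retry after …: waiting for machine to come up" lines above are libmachine polling libvirt for the VM's DHCP lease with progressively longer waits. A rough shell equivalent of that polling loop, using the network name and MAC address from the log (illustrative only; the sleep values are made up and minikube implements this in Go, not as a script):
	
	    # Poll the libvirt network for a DHCP lease on the domain's MAC, backing off between attempts.
	    for delay in 0.3 0.6 1 2 4 8 16; do
	      ip=$(virsh net-dhcp-leases mk-embed-certs-566991 | awk '/52:54:00:98:21:6f/ {print $5}')
	      if [ -n "$ip" ]; then echo "machine is up at ${ip%/*}"; break; fi
	      sleep "$delay"
	    done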
	I1204 21:16:28.395920   75464 start.go:364] duration metric: took 4m6.982305139s to acquireMachinesLock for "old-k8s-version-082859"
	I1204 21:16:28.395992   75464 start.go:96] Skipping create...Using existing machine configuration
	I1204 21:16:28.396003   75464 fix.go:54] fixHost starting: 
	I1204 21:16:28.396456   75464 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:28.396521   75464 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:28.413833   75464 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32779
	I1204 21:16:28.414263   75464 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:28.414753   75464 main.go:141] libmachine: Using API Version  1
	I1204 21:16:28.414777   75464 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:28.415165   75464 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:28.415427   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:28.415603   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetState
	I1204 21:16:28.417090   75464 fix.go:112] recreateIfNeeded on old-k8s-version-082859: state=Stopped err=<nil>
	I1204 21:16:28.417125   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	W1204 21:16:28.417326   75464 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 21:16:28.419402   75464 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-082859" ...
	I1204 21:16:27.162685   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.163095   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has current primary IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.163114   75137 main.go:141] libmachine: (embed-certs-566991) Found IP for machine: 192.168.39.82
	I1204 21:16:27.163126   75137 main.go:141] libmachine: (embed-certs-566991) Reserving static IP address...
	I1204 21:16:27.163613   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "embed-certs-566991", mac: "52:54:00:98:21:6f", ip: "192.168.39.82"} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.163640   75137 main.go:141] libmachine: (embed-certs-566991) Reserved static IP address: 192.168.39.82
	I1204 21:16:27.163652   75137 main.go:141] libmachine: (embed-certs-566991) DBG | skip adding static IP to network mk-embed-certs-566991 - found existing host DHCP lease matching {name: "embed-certs-566991", mac: "52:54:00:98:21:6f", ip: "192.168.39.82"}
	I1204 21:16:27.163663   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Getting to WaitForSSH function...
	I1204 21:16:27.163670   75137 main.go:141] libmachine: (embed-certs-566991) Waiting for SSH to be available...
	I1204 21:16:27.165700   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.166004   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.166040   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.166149   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Using SSH client type: external
	I1204 21:16:27.166173   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa (-rw-------)
	I1204 21:16:27.166209   75137 main.go:141] libmachine: (embed-certs-566991) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.82 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 21:16:27.166223   75137 main.go:141] libmachine: (embed-certs-566991) DBG | About to run SSH command:
	I1204 21:16:27.166232   75137 main.go:141] libmachine: (embed-certs-566991) DBG | exit 0
	I1204 21:16:27.287234   75137 main.go:141] libmachine: (embed-certs-566991) DBG | SSH cmd err, output: <nil>: 
	I1204 21:16:27.287599   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetConfigRaw
	I1204 21:16:27.288265   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetIP
	I1204 21:16:27.290959   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.291282   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.291308   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.291606   75137 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/config.json ...
	I1204 21:16:27.291794   75137 machine.go:93] provisionDockerMachine start ...
	I1204 21:16:27.291812   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:27.292046   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:27.294179   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.294494   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.294520   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.294637   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:27.294811   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.294971   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.295101   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:27.295267   75137 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:27.295461   75137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1204 21:16:27.295472   75137 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 21:16:27.395404   75137 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 21:16:27.395434   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetMachineName
	I1204 21:16:27.395738   75137 buildroot.go:166] provisioning hostname "embed-certs-566991"
	I1204 21:16:27.395764   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetMachineName
	I1204 21:16:27.395940   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:27.398637   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.398982   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.399008   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.399159   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:27.399332   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.399565   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.399702   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:27.399913   75137 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:27.400087   75137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1204 21:16:27.400099   75137 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-566991 && echo "embed-certs-566991" | sudo tee /etc/hostname
	I1204 21:16:27.513921   75137 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-566991
	
	I1204 21:16:27.513960   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:27.516595   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.516932   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.516955   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.517112   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:27.517313   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.517440   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.517554   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:27.517671   75137 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:27.517883   75137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1204 21:16:27.517900   75137 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-566991' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-566991/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-566991' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 21:16:27.627795   75137 main.go:141] libmachine: SSH cmd err, output: <nil>: 
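	The hostname command above only touches /etc/hosts when no entry for the new hostname exists, rewriting an existing 127.0.1.1 line rather than appending a duplicate. A quick way to confirm the result on the node afterwards (nothing beyond grep assumed):
	
	    grep -n '127.0.1.1' /etc/hosts   # expect: 127.0.1.1 embed-certs-566991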
	I1204 21:16:27.627832   75137 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 21:16:27.627852   75137 buildroot.go:174] setting up certificates
	I1204 21:16:27.627861   75137 provision.go:84] configureAuth start
	I1204 21:16:27.627870   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetMachineName
	I1204 21:16:27.628196   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetIP
	I1204 21:16:27.630873   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.631211   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.631236   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.631447   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:27.633608   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.633935   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.633954   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.634104   75137 provision.go:143] copyHostCerts
	I1204 21:16:27.634160   75137 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 21:16:27.634171   75137 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 21:16:27.634238   75137 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 21:16:27.634328   75137 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 21:16:27.634337   75137 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 21:16:27.634359   75137 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 21:16:27.634416   75137 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 21:16:27.634427   75137 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 21:16:27.634457   75137 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 21:16:27.634525   75137 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.embed-certs-566991 san=[127.0.0.1 192.168.39.82 embed-certs-566991 localhost minikube]
	I1204 21:16:27.824445   75137 provision.go:177] copyRemoteCerts
	I1204 21:16:27.824535   75137 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 21:16:27.824576   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:27.827387   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.827703   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.827738   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.827937   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:27.828104   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.828282   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:27.828386   75137 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:16:27.908710   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 21:16:27.930611   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1204 21:16:27.951287   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 21:16:27.971650   75137 provision.go:87] duration metric: took 343.766934ms to configureAuth
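	The server certificate generated during configureAuth carries the SANs listed in the log (127.0.0.1, 192.168.39.82, embed-certs-566991, localhost, minikube) and is copied to /etc/docker/server.pem on the node. Assuming openssl is present in the guest image (not something the log confirms), the SANs can be verified with:
	
	    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'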
	I1204 21:16:27.971684   75137 buildroot.go:189] setting minikube options for container-runtime
	I1204 21:16:27.971861   75137 config.go:182] Loaded profile config "embed-certs-566991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:16:27.971984   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:27.974579   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.974924   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:27.974964   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:27.975127   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:27.975316   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.975486   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:27.975617   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:27.975771   75137 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:27.975962   75137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1204 21:16:27.975985   75137 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 21:16:28.177596   75137 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 21:16:28.177627   75137 machine.go:96] duration metric: took 885.820166ms to provisionDockerMachine
	I1204 21:16:28.177643   75137 start.go:293] postStartSetup for "embed-certs-566991" (driver="kvm2")
	I1204 21:16:28.177657   75137 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 21:16:28.177681   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:28.177998   75137 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 21:16:28.178026   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:28.180461   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.180777   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:28.180809   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.180936   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:28.181122   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:28.181292   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:28.181430   75137 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:16:28.260618   75137 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 21:16:28.264349   75137 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 21:16:28.264371   75137 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 21:16:28.264448   75137 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 21:16:28.264543   75137 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 21:16:28.264657   75137 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 21:16:28.272916   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:16:28.294517   75137 start.go:296] duration metric: took 116.858398ms for postStartSetup
	I1204 21:16:28.294564   75137 fix.go:56] duration metric: took 18.709913535s for fixHost
	I1204 21:16:28.294589   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:28.297320   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.297628   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:28.297661   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.297869   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:28.298067   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:28.298219   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:28.298346   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:28.298544   75137 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:28.298705   75137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1204 21:16:28.298714   75137 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 21:16:28.395722   75137 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733346988.368807705
	
	I1204 21:16:28.395745   75137 fix.go:216] guest clock: 1733346988.368807705
	I1204 21:16:28.395755   75137 fix.go:229] Guest: 2024-12-04 21:16:28.368807705 +0000 UTC Remote: 2024-12-04 21:16:28.294570064 +0000 UTC m=+286.315482748 (delta=74.237641ms)
	I1204 21:16:28.395781   75137 fix.go:200] guest clock delta is within tolerance: 74.237641ms
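The fix.go lines above run `date +%s.%N` on the guest and compare the result with the host's wall clock. A rough, stdlib-only sketch of that comparison, using the two timestamps from the log; the one-second tolerance is an assumption for illustration, not the value minikube uses.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the output of `date +%s.%N` into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	nsec := int64(0)
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1733346988.368807705") // guest clock value from the log above
	if err != nil {
		panic(err)
	}
	remote := time.Date(2024, time.December, 4, 21, 16, 28, 294570064, time.UTC) // host time from the log

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed tolerance for this sketch
	fmt.Printf("guest clock delta is %v, within tolerance: %v\n", delta, delta < tolerance)
}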
	I1204 21:16:28.395788   75137 start.go:83] releasing machines lock for "embed-certs-566991", held for 18.811169167s
	I1204 21:16:28.395828   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:28.396146   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetIP
	I1204 21:16:28.398895   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.399273   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:28.399315   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.399472   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:28.399971   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:28.400138   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:28.400232   75137 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 21:16:28.400282   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:28.400303   75137 ssh_runner.go:195] Run: cat /version.json
	I1204 21:16:28.400325   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:28.402965   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.402990   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.403405   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:28.403434   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.403460   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:28.403475   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:28.403571   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:28.403643   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:28.403782   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:28.403872   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:28.403938   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:28.404022   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:28.404173   75137 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:16:28.404187   75137 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:16:28.498689   75137 ssh_runner.go:195] Run: systemctl --version
	I1204 21:16:28.503855   75137 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 21:16:28.639322   75137 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 21:16:28.645881   75137 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 21:16:28.645979   75137 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 21:16:28.662196   75137 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 21:16:28.662224   75137 start.go:495] detecting cgroup driver to use...
	I1204 21:16:28.662299   75137 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 21:16:28.679458   75137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 21:16:28.693004   75137 docker.go:217] disabling cri-docker service (if available) ...
	I1204 21:16:28.693078   75137 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 21:16:28.706303   75137 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 21:16:28.719763   75137 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 21:16:28.831131   75137 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 21:16:28.980878   75137 docker.go:233] disabling docker service ...
	I1204 21:16:28.980952   75137 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 21:16:28.995057   75137 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 21:16:29.007885   75137 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 21:16:29.140636   75137 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 21:16:29.281876   75137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 21:16:29.297602   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 21:16:29.314375   75137 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 21:16:29.314444   75137 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:29.324326   75137 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 21:16:29.324381   75137 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:29.333895   75137 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:29.343269   75137 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:29.352608   75137 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 21:16:29.363227   75137 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:29.372736   75137 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:29.389585   75137 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
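The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, and move conmon into the pod cgroup. As a rough illustration, the same edits done in Go on an in-memory copy of the file; the input fragment below is made up for the example, not the real config shipped in the ISO.

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Hypothetical fragment of /etc/crio/crio.conf.d/02-crio.conf, for illustration only.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`

	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)

	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	// Equivalent of deleting conmon_cgroup and re-adding it after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "${0}\nconmon_cgroup = \"pod\"")

	fmt.Print(conf)
}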
	I1204 21:16:29.399137   75137 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 21:16:29.407800   75137 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 21:16:29.407859   75137 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 21:16:29.421492   75137 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 21:16:29.431191   75137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:16:29.531043   75137 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 21:16:29.634995   75137 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 21:16:29.635092   75137 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 21:16:29.640185   75137 start.go:563] Will wait 60s for crictl version
	I1204 21:16:29.640249   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:16:29.644117   75137 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 21:16:29.683424   75137 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 21:16:29.683505   75137 ssh_runner.go:195] Run: crio --version
	I1204 21:16:29.709015   75137 ssh_runner.go:195] Run: crio --version
	I1204 21:16:29.737931   75137 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 21:16:28.420626   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .Start
	I1204 21:16:28.420792   75464 main.go:141] libmachine: (old-k8s-version-082859) Ensuring networks are active...
	I1204 21:16:28.421532   75464 main.go:141] libmachine: (old-k8s-version-082859) Ensuring network default is active
	I1204 21:16:28.421902   75464 main.go:141] libmachine: (old-k8s-version-082859) Ensuring network mk-old-k8s-version-082859 is active
	I1204 21:16:28.422289   75464 main.go:141] libmachine: (old-k8s-version-082859) Getting domain xml...
	I1204 21:16:28.422943   75464 main.go:141] libmachine: (old-k8s-version-082859) Creating domain...
	I1204 21:16:29.678419   75464 main.go:141] libmachine: (old-k8s-version-082859) Waiting to get IP...
	I1204 21:16:29.679445   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:29.679839   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:29.679884   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:29.679807   76539 retry.go:31] will retry after 289.179197ms: waiting for machine to come up
	I1204 21:16:29.971185   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:29.971736   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:29.971767   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:29.971681   76539 retry.go:31] will retry after 303.202104ms: waiting for machine to come up
	I1204 21:16:30.277151   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:30.277652   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:30.277681   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:30.277613   76539 retry.go:31] will retry after 410.628355ms: waiting for machine to come up
	I1204 21:16:30.690254   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:30.690792   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:30.690822   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:30.690750   76539 retry.go:31] will retry after 505.05844ms: waiting for machine to come up
	I1204 21:16:31.197454   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:31.197914   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:31.197943   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:31.197868   76539 retry.go:31] will retry after 592.512014ms: waiting for machine to come up
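The "will retry after …" lines above come from a retry loop that polls the libvirt driver until the domain shows up in the DHCP leases. A minimal sketch of that pattern follows; lookupIP is a hypothetical stand-in for the lease lookup, and the deadline is shortened so the sketch terminates quickly (the real wait runs for minutes).

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for querying the libvirt DHCP leases;
// here it always fails, as the machine does before it has come up.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

func main() {
	delay := 250 * time.Millisecond
	deadline := time.Now().Add(3 * time.Second) // shortened for this sketch

	for time.Now().Before(deadline) {
		ip, err := lookupIP()
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Jittered, slowly growing delay, similar in spirit to the retry.go lines above.
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
	fmt.Println("timed out waiting for machine to come up")
}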
	I1204 21:16:29.739276   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetIP
	I1204 21:16:29.742209   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:29.742581   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:29.742611   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:29.742817   75137 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1204 21:16:29.746557   75137 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:16:29.757975   75137 kubeadm.go:883] updating cluster {Name:embed-certs-566991 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-566991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.82 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 21:16:29.758110   75137 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 21:16:29.758153   75137 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:16:29.790957   75137 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1204 21:16:29.791029   75137 ssh_runner.go:195] Run: which lz4
	I1204 21:16:29.794873   75137 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1204 21:16:29.798613   75137 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1204 21:16:29.798642   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1204 21:16:31.060492   75137 crio.go:462] duration metric: took 1.265651412s to copy over tarball
	I1204 21:16:31.060599   75137 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1204 21:16:31.791677   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:31.792193   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:31.792218   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:31.792126   76539 retry.go:31] will retry after 898.531247ms: waiting for machine to come up
	I1204 21:16:32.692886   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:32.693288   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:32.693309   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:32.693246   76539 retry.go:31] will retry after 832.069841ms: waiting for machine to come up
	I1204 21:16:33.526732   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:33.527291   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:33.527324   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:33.527254   76539 retry.go:31] will retry after 962.847408ms: waiting for machine to come up
	I1204 21:16:34.491553   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:34.492032   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:34.492062   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:34.491983   76539 retry.go:31] will retry after 1.207785601s: waiting for machine to come up
	I1204 21:16:35.701559   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:35.702070   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:35.702096   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:35.702031   76539 retry.go:31] will retry after 1.685825115s: waiting for machine to come up
	I1204 21:16:33.200389   75137 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.139761453s)
	I1204 21:16:33.200414   75137 crio.go:469] duration metric: took 2.139886465s to extract the tarball
	I1204 21:16:33.200421   75137 ssh_runner.go:146] rm: /preloaded.tar.lz4
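The preload step above copies the cached preloaded-images tarball to the guest and unpacks it under /var before asking crictl for the image list again. A small sketch of running that extraction and reporting a duration metric; the paths are placeholders, and on the real host the command goes through ssh_runner rather than running locally.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	// Placeholder paths for illustration only.
	tarball := "/preloaded.tar.lz4"
	dest := "/var"

	start := time.Now()
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v: %s", err, out)
	}
	fmt.Printf("duration metric: took %s to extract the tarball\n", time.Since(start))
}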
	I1204 21:16:33.235706   75137 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:16:33.275780   75137 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 21:16:33.275803   75137 cache_images.go:84] Images are preloaded, skipping loading
	I1204 21:16:33.275811   75137 kubeadm.go:934] updating node { 192.168.39.82 8443 v1.31.2 crio true true} ...
	I1204 21:16:33.275916   75137 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-566991 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.82
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-566991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 21:16:33.276001   75137 ssh_runner.go:195] Run: crio config
	I1204 21:16:33.330445   75137 cni.go:84] Creating CNI manager for ""
	I1204 21:16:33.330470   75137 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:16:33.330479   75137 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 21:16:33.330502   75137 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.82 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-566991 NodeName:embed-certs-566991 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.82"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.82 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 21:16:33.330663   75137 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.82
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-566991"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.82"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.82"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
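The kubeadm.go:195 block above is rendered from the option struct logged at kubeadm.go:189. A stripped-down sketch of producing a fragment of that config with text/template follows; the template covers only a handful of fields and is an illustration of the approach, not minikube's actual bootstrapper template.

package main

import (
	"os"
	"text/template"
)

type kubeadmOpts struct {
	AdvertiseAddress  string
	APIServerPort     int
	NodeName          string
	PodSubnet         string
	ServiceCIDR       string
	KubernetesVersion string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	opts := kubeadmOpts{
		AdvertiseAddress:  "192.168.39.82",
		APIServerPort:     8443,
		NodeName:          "embed-certs-566991",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
		KubernetesVersion: "v1.31.2",
	}
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}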
	I1204 21:16:33.330730   75137 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 21:16:33.340505   75137 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 21:16:33.340586   75137 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 21:16:33.349589   75137 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1204 21:16:33.365156   75137 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 21:16:33.380757   75137 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1204 21:16:33.396851   75137 ssh_runner.go:195] Run: grep 192.168.39.82	control-plane.minikube.internal$ /etc/hosts
	I1204 21:16:33.400473   75137 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.82	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:16:33.411670   75137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:16:33.543788   75137 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:16:33.564105   75137 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991 for IP: 192.168.39.82
	I1204 21:16:33.564138   75137 certs.go:194] generating shared ca certs ...
	I1204 21:16:33.564158   75137 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:16:33.564343   75137 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 21:16:33.564425   75137 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 21:16:33.564443   75137 certs.go:256] generating profile certs ...
	I1204 21:16:33.564570   75137 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/client.key
	I1204 21:16:33.564668   75137 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/apiserver.key.ba71006c
	I1204 21:16:33.564724   75137 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/proxy-client.key
	I1204 21:16:33.564892   75137 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 21:16:33.564945   75137 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 21:16:33.564972   75137 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 21:16:33.565019   75137 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 21:16:33.565052   75137 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 21:16:33.565087   75137 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 21:16:33.565145   75137 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:16:33.566045   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 21:16:33.608433   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 21:16:33.635211   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 21:16:33.672472   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 21:16:33.701021   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1204 21:16:33.731665   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 21:16:33.756414   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 21:16:33.778799   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/embed-certs-566991/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 21:16:33.801308   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 21:16:33.822986   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 21:16:33.844820   75137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 21:16:33.866558   75137 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 21:16:33.881830   75137 ssh_runner.go:195] Run: openssl version
	I1204 21:16:33.887334   75137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 21:16:33.897261   75137 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:16:33.901411   75137 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:16:33.901479   75137 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:16:33.906997   75137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 21:16:33.916799   75137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 21:16:33.926687   75137 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 21:16:33.930807   75137 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 21:16:33.930859   75137 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 21:16:33.943622   75137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 21:16:33.958682   75137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 21:16:33.972391   75137 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 21:16:33.977777   75137 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 21:16:33.977822   75137 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 21:16:33.984628   75137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 21:16:33.994531   75137 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 21:16:33.998695   75137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 21:16:34.004299   75137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 21:16:34.009688   75137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 21:16:34.015197   75137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 21:16:34.020625   75137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 21:16:34.025987   75137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
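The `openssl x509 -noout -checkend 86400` runs above verify that each control-plane certificate remains valid for at least another 24 hours before the existing configuration is reused. An equivalent check written against the Go standard library, as a sketch; the certificate path is a placeholder taken from the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires within the given duration (what `openssl x509 -checkend` tests).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	// Placeholder path; in the log this check runs for the apiserver, etcd and front-proxy certs.
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	if expiring {
		fmt.Println("certificate expires within 24h, regeneration needed")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}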
	I1204 21:16:34.031435   75137 kubeadm.go:392] StartCluster: {Name:embed-certs-566991 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-566991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.82 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:16:34.031517   75137 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 21:16:34.031567   75137 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:16:34.067450   75137 cri.go:89] found id: ""
	I1204 21:16:34.067550   75137 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 21:16:34.077454   75137 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1204 21:16:34.077486   75137 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1204 21:16:34.077536   75137 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1204 21:16:34.086795   75137 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1204 21:16:34.087776   75137 kubeconfig.go:125] found "embed-certs-566991" server: "https://192.168.39.82:8443"
	I1204 21:16:34.089769   75137 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1204 21:16:34.098751   75137 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.82
	I1204 21:16:34.098784   75137 kubeadm.go:1160] stopping kube-system containers ...
	I1204 21:16:34.098798   75137 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1204 21:16:34.098853   75137 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:16:34.138445   75137 cri.go:89] found id: ""
	I1204 21:16:34.138523   75137 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1204 21:16:34.155890   75137 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:16:34.165568   75137 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:16:34.165596   75137 kubeadm.go:157] found existing configuration files:
	
	I1204 21:16:34.165647   75137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:16:34.174688   75137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:16:34.174758   75137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:16:34.183835   75137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:16:34.192637   75137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:16:34.192690   75137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:16:34.201663   75137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:16:34.210254   75137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:16:34.210297   75137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:16:34.219235   75137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:16:34.227890   75137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:16:34.227972   75137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:16:34.236954   75137 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:16:34.246061   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:16:34.352189   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:16:35.133652   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:16:35.320296   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:16:35.384361   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:16:35.458221   75137 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:16:35.458352   75137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:16:35.959480   75137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:16:36.459120   75137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:16:36.959170   75137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:16:37.458423   75137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:16:37.488815   75137 api_server.go:72] duration metric: took 2.030596307s to wait for apiserver process to appear ...
	I1204 21:16:37.488850   75137 api_server.go:88] waiting for apiserver healthz status ...
	I1204 21:16:37.488875   75137 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I1204 21:16:37.489349   75137 api_server.go:269] stopped: https://192.168.39.82:8443/healthz: Get "https://192.168.39.82:8443/healthz": dial tcp 192.168.39.82:8443: connect: connection refused
	I1204 21:16:37.990012   75137 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I1204 21:16:39.696011   75137 api_server.go:279] https://192.168.39.82:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1204 21:16:39.696060   75137 api_server.go:103] status: https://192.168.39.82:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1204 21:16:39.696077   75137 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I1204 21:16:39.705288   75137 api_server.go:279] https://192.168.39.82:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1204 21:16:39.705322   75137 api_server.go:103] status: https://192.168.39.82:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1204 21:16:39.989707   75137 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I1204 21:16:39.993934   75137 api_server.go:279] https://192.168.39.82:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:16:39.993959   75137 api_server.go:103] status: https://192.168.39.82:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:16:40.489545   75137 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I1204 21:16:40.494002   75137 api_server.go:279] https://192.168.39.82:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:16:40.494033   75137 api_server.go:103] status: https://192.168.39.82:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:16:40.989641   75137 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I1204 21:16:40.998171   75137 api_server.go:279] https://192.168.39.82:8443/healthz returned 200:
	ok
	I1204 21:16:41.006208   75137 api_server.go:141] control plane version: v1.31.2
	I1204 21:16:41.006238   75137 api_server.go:131] duration metric: took 3.517379108s to wait for apiserver health ...
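	The 500 responses above come from the apiserver's /healthz endpoint while its rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are still running; minikube simply re-polls the URL until it returns 200. Below is a minimal sketch of that polling pattern, not minikube's actual implementation: the InsecureSkipVerify transport and the fixed retry interval are assumptions made for brevity, since the real client authenticates with the cluster's certificates.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the timeout elapses, mirroring the retry loop visible in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// Assumption: skip TLS verification for the sketch; minikube itself
		// uses the cluster's client certificates instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz finally reported "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.82:8443/healthz", 3*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```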
	I1204 21:16:41.006250   75137 cni.go:84] Creating CNI manager for ""
	I1204 21:16:41.006259   75137 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:16:41.008031   75137 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 21:16:37.390104   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:37.390474   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:37.390499   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:37.390433   76539 retry.go:31] will retry after 1.755395869s: waiting for machine to come up
	I1204 21:16:39.148189   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:39.148723   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:39.148754   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:39.148694   76539 retry.go:31] will retry after 2.645343215s: waiting for machine to come up
	I1204 21:16:41.009338   75137 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 21:16:41.026475   75137 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
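	The bridge CNI step copies a generated conflist into /etc/cni/net.d for CRI-O to pick up. The log only shows the file's size (496 bytes), not its contents; the sketch below writes a representative bridge + portmap configuration, and the plugin list, subnet, and field values are assumptions rather than the file minikube actually generated.

```go
package main

import (
	"log"
	"os"
)

// Representative bridge CNI configuration. The real 1-k8s.conflist content is
// not shown in the log; this layout is an assumption based on a standard
// bridge + host-local + portmap setup.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}`

func main() {
	// Writing under /etc requires root on the target VM.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		log.Fatal(err)
	}
}
```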
	I1204 21:16:41.051888   75137 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 21:16:41.064813   75137 system_pods.go:59] 8 kube-system pods found
	I1204 21:16:41.064859   75137 system_pods.go:61] "coredns-7c65d6cfc9-ct5xn" [be113b96-b21f-4fd5-8cd9-11b149a0a838] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1204 21:16:41.064870   75137 system_pods.go:61] "etcd-embed-certs-566991" [23603883-2c42-48ff-95f5-d58f04bab630] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1204 21:16:41.064880   75137 system_pods.go:61] "kube-apiserver-embed-certs-566991" [880279d0-9c57-44b1-b223-cea07fc8552e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1204 21:16:41.064887   75137 system_pods.go:61] "kube-controller-manager-embed-certs-566991" [1512be05-cbf1-48ca-a0a5-db1e320040e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1204 21:16:41.064893   75137 system_pods.go:61] "kube-proxy-4fv72" [22b84591-6767-4414-9869-9d89206a03f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1204 21:16:41.064898   75137 system_pods.go:61] "kube-scheduler-embed-certs-566991" [1eca2a77-0f2a-4d94-992e-22acf8f54649] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1204 21:16:41.064910   75137 system_pods.go:61] "metrics-server-6867b74b74-9vlcd" [1acb08f3-e403-458d-b3e2-e32c07da6afb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:16:41.064922   75137 system_pods.go:61] "storage-provisioner" [f8acdb07-16e7-457f-81b8-85416b849890] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1204 21:16:41.064930   75137 system_pods.go:74] duration metric: took 13.019489ms to wait for pod list to return data ...
	I1204 21:16:41.064944   75137 node_conditions.go:102] verifying NodePressure condition ...
	I1204 21:16:41.068574   75137 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 21:16:41.068607   75137 node_conditions.go:123] node cpu capacity is 2
	I1204 21:16:41.068623   75137 node_conditions.go:105] duration metric: took 3.673752ms to run NodePressure ...
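	The node_conditions step reads the capacity the node advertises (here 17734596Ki of ephemeral storage and 2 CPUs) before verifying NodePressure. A hedged client-go sketch of the same read; the kubeconfig path is a placeholder, not the path used by the test run.

```go
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder path; the test uses the kubeconfig minikube writes itself.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}
```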
	I1204 21:16:41.068644   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:16:41.356054   75137 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1204 21:16:41.359997   75137 kubeadm.go:739] kubelet initialised
	I1204 21:16:41.360018   75137 kubeadm.go:740] duration metric: took 3.942716ms waiting for restarted kubelet to initialise ...
	I1204 21:16:41.360026   75137 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:16:41.365945   75137 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:41.370858   75137 pod_ready.go:98] node "embed-certs-566991" hosting pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.370886   75137 pod_ready.go:82] duration metric: took 4.912525ms for pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace to be "Ready" ...
	E1204 21:16:41.370904   75137 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-566991" hosting pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.370913   75137 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:41.376666   75137 pod_ready.go:98] node "embed-certs-566991" hosting pod "etcd-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.376689   75137 pod_ready.go:82] duration metric: took 5.763328ms for pod "etcd-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	E1204 21:16:41.376698   75137 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-566991" hosting pod "etcd-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.376705   75137 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:41.381261   75137 pod_ready.go:98] node "embed-certs-566991" hosting pod "kube-apiserver-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.381285   75137 pod_ready.go:82] duration metric: took 4.57138ms for pod "kube-apiserver-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	E1204 21:16:41.381296   75137 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-566991" hosting pod "kube-apiserver-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.381305   75137 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:41.455155   75137 pod_ready.go:98] node "embed-certs-566991" hosting pod "kube-controller-manager-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.455195   75137 pod_ready.go:82] duration metric: took 73.873767ms for pod "kube-controller-manager-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	E1204 21:16:41.455208   75137 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-566991" hosting pod "kube-controller-manager-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.455217   75137 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4fv72" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:41.854723   75137 pod_ready.go:98] node "embed-certs-566991" hosting pod "kube-proxy-4fv72" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.854759   75137 pod_ready.go:82] duration metric: took 399.531662ms for pod "kube-proxy-4fv72" in "kube-system" namespace to be "Ready" ...
	E1204 21:16:41.854773   75137 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-566991" hosting pod "kube-proxy-4fv72" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:41.854782   75137 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:42.255217   75137 pod_ready.go:98] node "embed-certs-566991" hosting pod "kube-scheduler-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:42.255242   75137 pod_ready.go:82] duration metric: took 400.451937ms for pod "kube-scheduler-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	E1204 21:16:42.255254   75137 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-566991" hosting pod "kube-scheduler-embed-certs-566991" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:42.255263   75137 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:42.655193   75137 pod_ready.go:98] node "embed-certs-566991" hosting pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:42.655222   75137 pod_ready.go:82] duration metric: took 399.948182ms for pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace to be "Ready" ...
	E1204 21:16:42.655234   75137 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-566991" hosting pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:42.655244   75137 pod_ready.go:39] duration metric: took 1.295209634s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
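	Each pod_ready entry above is skipped because the hosting node still reports Ready:"False"; the wait loop only inspects a pod's own Ready condition once its node is Ready. Below is a sketch of that node-gating check with client-go, under the same assumptions as the previous snippet (placeholder kubeconfig path, hypothetical helper name nodeReady).

```go
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node hosting the pod has condition Ready=True.
func nodeReady(ctx context.Context, c *kubernetes.Clientset, pod *corev1.Pod) (bool, error) {
	node, err := c.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.TODO()
	// Same label selector the log uses for the CoreDNS pod.
	pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		ready, err := nodeReady(ctx, client, &p)
		if err != nil {
			log.Fatal(err)
		}
		if !ready {
			fmt.Printf("skipping %s: hosting node %s is not Ready\n", p.Name, p.Spec.NodeName)
			continue
		}
		fmt.Printf("%s: node Ready, now check the pod's own Ready condition\n", p.Name)
	}
}
```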
	I1204 21:16:42.655263   75137 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 21:16:42.666489   75137 ops.go:34] apiserver oom_adj: -16
	I1204 21:16:42.666504   75137 kubeadm.go:597] duration metric: took 8.589012522s to restartPrimaryControlPlane
	I1204 21:16:42.666512   75137 kubeadm.go:394] duration metric: took 8.635083145s to StartCluster
	I1204 21:16:42.666526   75137 settings.go:142] acquiring lock: {Name:mk51df5708ef0b8fe125ead566b8d3e857234e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:16:42.666587   75137 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:16:42.668175   75137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/kubeconfig: {Name:mk338cb7deb77a607d0c199d94a556bdfd19bef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:16:42.668388   75137 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.82 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 21:16:42.668451   75137 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 21:16:42.668548   75137 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-566991"
	I1204 21:16:42.668569   75137 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-566991"
	W1204 21:16:42.668576   75137 addons.go:243] addon storage-provisioner should already be in state true
	I1204 21:16:42.668605   75137 host.go:66] Checking if "embed-certs-566991" exists ...
	I1204 21:16:42.668611   75137 addons.go:69] Setting default-storageclass=true in profile "embed-certs-566991"
	I1204 21:16:42.668628   75137 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-566991"
	I1204 21:16:42.668661   75137 config.go:182] Loaded profile config "embed-certs-566991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:16:42.668675   75137 addons.go:69] Setting metrics-server=true in profile "embed-certs-566991"
	I1204 21:16:42.668719   75137 addons.go:234] Setting addon metrics-server=true in "embed-certs-566991"
	W1204 21:16:42.668738   75137 addons.go:243] addon metrics-server should already be in state true
	I1204 21:16:42.668796   75137 host.go:66] Checking if "embed-certs-566991" exists ...
	I1204 21:16:42.669037   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:42.669094   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:42.669037   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:42.669158   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:42.669169   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:42.669210   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:42.671592   75137 out.go:177] * Verifying Kubernetes components...
	I1204 21:16:42.673134   75137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:16:42.684920   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43467
	I1204 21:16:42.684939   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35079
	I1204 21:16:42.685084   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46109
	I1204 21:16:42.685298   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:42.685386   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:42.685791   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:42.685810   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:42.685905   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:42.685926   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:42.686119   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:42.686297   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:42.686401   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetState
	I1204 21:16:42.686833   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:42.686880   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:42.687004   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:42.687527   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:42.687545   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:42.687890   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:42.688475   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:42.688522   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:42.689348   75137 addons.go:234] Setting addon default-storageclass=true in "embed-certs-566991"
	W1204 21:16:42.689365   75137 addons.go:243] addon default-storageclass should already be in state true
	I1204 21:16:42.689385   75137 host.go:66] Checking if "embed-certs-566991" exists ...
	I1204 21:16:42.689647   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:42.689682   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:42.702175   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33089
	I1204 21:16:42.702672   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:42.703170   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:42.703188   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:42.703226   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38195
	I1204 21:16:42.703537   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:42.703674   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:42.703716   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetState
	I1204 21:16:42.704271   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:42.704295   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:42.704612   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:42.705178   75137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:42.705218   75137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:42.705552   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:42.707473   75137 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1204 21:16:42.707479   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33249
	I1204 21:16:42.707808   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:42.708177   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:42.708192   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:42.708551   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:42.708692   75137 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1204 21:16:42.708703   75137 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1204 21:16:42.708713   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetState
	I1204 21:16:42.708714   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:42.710474   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:42.711964   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:42.712040   75137 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:16:42.712386   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:42.712409   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:42.712558   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:42.712726   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:42.712867   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:42.713010   75137 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:16:42.713257   75137 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:16:42.713268   75137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 21:16:42.713279   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:42.715855   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:42.716296   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:42.716325   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:42.716472   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:42.716632   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:42.716744   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:42.716860   75137 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:16:42.727365   75137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40443
	I1204 21:16:42.727830   75137 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:42.728302   75137 main.go:141] libmachine: Using API Version  1
	I1204 21:16:42.728330   75137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:42.728651   75137 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:42.728838   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetState
	I1204 21:16:42.730408   75137 main.go:141] libmachine: (embed-certs-566991) Calling .DriverName
	I1204 21:16:42.730603   75137 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 21:16:42.730617   75137 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 21:16:42.730630   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHHostname
	I1204 21:16:42.733179   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:42.733523   75137 main.go:141] libmachine: (embed-certs-566991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:6f", ip: ""} in network mk-embed-certs-566991: {Iface:virbr1 ExpiryTime:2024-12-04 22:16:20 +0000 UTC Type:0 Mac:52:54:00:98:21:6f Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:embed-certs-566991 Clientid:01:52:54:00:98:21:6f}
	I1204 21:16:42.733550   75137 main.go:141] libmachine: (embed-certs-566991) DBG | domain embed-certs-566991 has defined IP address 192.168.39.82 and MAC address 52:54:00:98:21:6f in network mk-embed-certs-566991
	I1204 21:16:42.733695   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHPort
	I1204 21:16:42.733846   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHKeyPath
	I1204 21:16:42.733991   75137 main.go:141] libmachine: (embed-certs-566991) Calling .GetSSHUsername
	I1204 21:16:42.734105   75137 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/embed-certs-566991/id_rsa Username:docker}
	I1204 21:16:42.871601   75137 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:16:42.889651   75137 node_ready.go:35] waiting up to 6m0s for node "embed-certs-566991" to be "Ready" ...
	I1204 21:16:43.016150   75137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:16:43.017983   75137 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1204 21:16:43.018006   75137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1204 21:16:43.048666   75137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 21:16:43.061060   75137 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1204 21:16:43.061089   75137 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1204 21:16:43.105294   75137 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 21:16:43.105320   75137 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1204 21:16:43.175330   75137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 21:16:44.324823   75137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.276121269s)
	I1204 21:16:44.324881   75137 main.go:141] libmachine: Making call to close driver server
	I1204 21:16:44.324889   75137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.308706273s)
	I1204 21:16:44.324893   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:16:44.324908   75137 main.go:141] libmachine: Making call to close driver server
	I1204 21:16:44.324922   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:16:44.325213   75137 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:16:44.325264   75137 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:16:44.325289   75137 main.go:141] libmachine: Making call to close driver server
	I1204 21:16:44.325272   75137 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:16:44.325297   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:16:44.325304   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:16:44.325302   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:16:44.325381   75137 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:16:44.325409   75137 main.go:141] libmachine: Making call to close driver server
	I1204 21:16:44.325417   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:16:44.325539   75137 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:16:44.325552   75137 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:16:44.325574   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:16:44.325751   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:16:44.325792   75137 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:16:44.325813   75137 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:16:44.331866   75137 main.go:141] libmachine: Making call to close driver server
	I1204 21:16:44.331881   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:16:44.332102   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:16:44.332139   75137 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:16:44.332149   75137 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:16:44.398251   75137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.222883924s)
	I1204 21:16:44.398300   75137 main.go:141] libmachine: Making call to close driver server
	I1204 21:16:44.398312   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:16:44.398563   75137 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:16:44.398583   75137 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:16:44.398590   75137 main.go:141] libmachine: Making call to close driver server
	I1204 21:16:44.398597   75137 main.go:141] libmachine: (embed-certs-566991) Calling .Close
	I1204 21:16:44.398606   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:16:44.398855   75137 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:16:44.398878   75137 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:16:44.398888   75137 addons.go:475] Verifying addon metrics-server=true in "embed-certs-566991"
	I1204 21:16:44.398889   75137 main.go:141] libmachine: (embed-certs-566991) DBG | Closing plugin on server side
	I1204 21:16:44.400887   75137 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1204 21:16:41.796452   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:41.796909   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:41.796943   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:41.796881   76539 retry.go:31] will retry after 2.938505727s: waiting for machine to come up
	I1204 21:16:44.737247   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:44.737772   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | unable to find current IP address of domain old-k8s-version-082859 in network mk-old-k8s-version-082859
	I1204 21:16:44.737796   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | I1204 21:16:44.737726   76539 retry.go:31] will retry after 5.554286056s: waiting for machine to come up
	I1204 21:16:44.402265   75137 addons.go:510] duration metric: took 1.733822331s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1204 21:16:44.894002   75137 node_ready.go:53] node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:50.293115   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.293594   75464 main.go:141] libmachine: (old-k8s-version-082859) Found IP for machine: 192.168.72.180
	I1204 21:16:50.293638   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has current primary IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.293651   75464 main.go:141] libmachine: (old-k8s-version-082859) Reserving static IP address...
	I1204 21:16:50.294066   75464 main.go:141] libmachine: (old-k8s-version-082859) Reserved static IP address: 192.168.72.180
	I1204 21:16:50.294102   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "old-k8s-version-082859", mac: "52:54:00:30:6e:ae", ip: "192.168.72.180"} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.294118   75464 main.go:141] libmachine: (old-k8s-version-082859) Waiting for SSH to be available...
	I1204 21:16:50.294148   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | skip adding static IP to network mk-old-k8s-version-082859 - found existing host DHCP lease matching {name: "old-k8s-version-082859", mac: "52:54:00:30:6e:ae", ip: "192.168.72.180"}
	I1204 21:16:50.294164   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | Getting to WaitForSSH function...
	I1204 21:16:50.296406   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.296738   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.296767   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.296893   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | Using SSH client type: external
	I1204 21:16:50.296917   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa (-rw-------)
	I1204 21:16:50.296949   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.180 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 21:16:50.296966   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | About to run SSH command:
	I1204 21:16:50.296978   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | exit 0
	I1204 21:16:50.419468   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | SSH cmd err, output: <nil>: 
	I1204 21:16:50.419834   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetConfigRaw
	I1204 21:16:50.420486   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetIP
	I1204 21:16:50.422797   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.423098   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.423123   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.423319   75464 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/config.json ...
	I1204 21:16:50.423555   75464 machine.go:93] provisionDockerMachine start ...
	I1204 21:16:50.423579   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:50.423793   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:50.426050   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.426372   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.426402   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.426520   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:50.426706   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.426886   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.427011   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:50.427208   75464 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:50.427439   75464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1204 21:16:50.427453   75464 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 21:16:50.527818   75464 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 21:16:50.527853   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetMachineName
	I1204 21:16:50.528150   75464 buildroot.go:166] provisioning hostname "old-k8s-version-082859"
	I1204 21:16:50.528188   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetMachineName
	I1204 21:16:50.528423   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:50.531470   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.531920   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.531949   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.532195   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:50.532400   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.532575   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.532733   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:50.532911   75464 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:50.533125   75464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1204 21:16:50.533138   75464 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-082859 && echo "old-k8s-version-082859" | sudo tee /etc/hostname
	I1204 21:16:50.653111   75464 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-082859
	
	I1204 21:16:50.653146   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:50.656340   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.656681   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.656715   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.656946   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:50.657161   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.657338   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.657493   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:50.657649   75464 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:50.657859   75464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1204 21:16:50.657879   75464 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-082859' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-082859/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-082859' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 21:16:50.772193   75464 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 21:16:50.772236   75464 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 21:16:50.772265   75464 buildroot.go:174] setting up certificates
	I1204 21:16:50.772282   75464 provision.go:84] configureAuth start
	I1204 21:16:50.772299   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetMachineName
	I1204 21:16:50.772611   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetIP
	I1204 21:16:50.775486   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.775889   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.775917   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.776053   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:50.778293   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.778611   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.778640   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.778859   75464 provision.go:143] copyHostCerts
	I1204 21:16:50.778920   75464 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 21:16:50.778934   75464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 21:16:50.778991   75464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 21:16:50.779093   75464 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 21:16:50.779106   75464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 21:16:50.779134   75464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 21:16:50.779279   75464 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 21:16:50.779291   75464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 21:16:50.779317   75464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 21:16:50.779411   75464 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-082859 san=[127.0.0.1 192.168.72.180 localhost minikube old-k8s-version-082859]
	I1204 21:16:50.991857   75464 provision.go:177] copyRemoteCerts
	I1204 21:16:50.991917   75464 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 21:16:50.991939   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:50.994612   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.994999   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:50.995028   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:50.995178   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:50.995427   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:50.995587   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:50.995731   75464 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa Username:docker}
	I1204 21:16:51.074162   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 21:16:51.097649   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1204 21:16:51.120589   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1204 21:16:51.143303   75464 provision.go:87] duration metric: took 371.008346ms to configureAuth
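	configureAuth regenerates the machine's server certificate from the shared CA, with the SAN list shown in the log (127.0.0.1, 192.168.72.180, localhost, minikube, old-k8s-version-082859). The sketch below issues such a certificate with crypto/x509; the file names, key type (PKCS#1 RSA), and validity period are assumptions, and minikube's own helper code is not reproduced here.

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

// loadCA parses a PEM-encoded CA certificate and RSA key, roughly the
// ca.pem / ca-key.pem pair referenced in the log above.
func loadCA(certPath, keyPath string) (*x509.Certificate, *rsa.PrivateKey, error) {
	certPEM, err := os.ReadFile(certPath)
	if err != nil {
		return nil, nil, err
	}
	keyPEM, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, nil, err
	}
	certBlock, _ := pem.Decode(certPEM)
	keyBlock, _ := pem.Decode(keyPEM)
	if certBlock == nil || keyBlock == nil {
		return nil, nil, fmt.Errorf("invalid PEM input")
	}
	cert, err := x509.ParseCertificate(certBlock.Bytes)
	if err != nil {
		return nil, nil, err
	}
	key, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 key
	if err != nil {
		return nil, nil, err
	}
	return cert, key, nil
}

func main() {
	caCert, caKey, err := loadCA("ca.pem", "ca-key.pem")
	if err != nil {
		log.Fatal(err)
	}
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048) // new server-key.pem
	if err != nil {
		log.Fatal(err)
	}
	template := x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-082859"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(1, 0, 0), // validity period is an assumption
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs match the san=[...] list in the provision.go log line.
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-082859"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.180")},
	}
	der, err := x509.CreateCertificate(rand.Reader, &template, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}) // server.pem contents
}
```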
	I1204 21:16:51.143324   75464 buildroot.go:189] setting minikube options for container-runtime
	I1204 21:16:51.143500   75464 config.go:182] Loaded profile config "old-k8s-version-082859": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1204 21:16:51.143561   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:51.146357   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.146676   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:51.146715   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.146867   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:51.147061   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.147275   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.147480   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:51.147672   75464 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:51.147851   75464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1204 21:16:51.147872   75464 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 21:16:51.587574   75746 start.go:364] duration metric: took 3m48.834641003s to acquireMachinesLock for "default-k8s-diff-port-439360"
	I1204 21:16:51.587653   75746 start.go:96] Skipping create...Using existing machine configuration
	I1204 21:16:51.587665   75746 fix.go:54] fixHost starting: 
	I1204 21:16:51.588066   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:16:51.588117   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:16:51.604628   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41655
	I1204 21:16:51.605057   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:16:51.605553   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:16:51.605580   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:16:51.605940   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:16:51.606149   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:16:51.606327   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetState
	I1204 21:16:51.608008   75746 fix.go:112] recreateIfNeeded on default-k8s-diff-port-439360: state=Stopped err=<nil>
	I1204 21:16:51.608043   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	W1204 21:16:51.608211   75746 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 21:16:51.609867   75746 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-439360" ...
	I1204 21:16:47.393499   75137 node_ready.go:53] node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:49.893470   75137 node_ready.go:53] node "embed-certs-566991" has status "Ready":"False"
	I1204 21:16:50.393615   75137 node_ready.go:49] node "embed-certs-566991" has status "Ready":"True"
	I1204 21:16:50.393638   75137 node_ready.go:38] duration metric: took 7.503954553s for node "embed-certs-566991" to be "Ready" ...
	I1204 21:16:50.393648   75137 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:16:50.398881   75137 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:51.611005   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Start
	I1204 21:16:51.611185   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Ensuring networks are active...
	I1204 21:16:51.612110   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Ensuring network default is active
	I1204 21:16:51.612529   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Ensuring network mk-default-k8s-diff-port-439360 is active
	I1204 21:16:51.612978   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Getting domain xml...
	I1204 21:16:51.613795   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Creating domain...
	I1204 21:16:51.367959   75464 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 21:16:51.367992   75464 machine.go:96] duration metric: took 944.422035ms to provisionDockerMachine
	I1204 21:16:51.368004   75464 start.go:293] postStartSetup for "old-k8s-version-082859" (driver="kvm2")
	I1204 21:16:51.368014   75464 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 21:16:51.368030   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:51.368382   75464 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 21:16:51.368431   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:51.371253   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.371631   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:51.371667   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.371831   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:51.372033   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.372201   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:51.372338   75464 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa Username:docker}
	I1204 21:16:51.449712   75464 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 21:16:51.453668   75464 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 21:16:51.453694   75464 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 21:16:51.453771   75464 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 21:16:51.453867   75464 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 21:16:51.453995   75464 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 21:16:51.463766   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:16:51.486114   75464 start.go:296] duration metric: took 118.097017ms for postStartSetup
	I1204 21:16:51.486162   75464 fix.go:56] duration metric: took 23.090160362s for fixHost
	I1204 21:16:51.486190   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:51.488901   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.489286   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:51.489317   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.489450   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:51.489662   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.489835   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.489975   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:51.490137   75464 main.go:141] libmachine: Using SSH client type: native
	I1204 21:16:51.490373   75464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1204 21:16:51.490386   75464 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 21:16:51.587355   75464 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733347011.543416414
	
	I1204 21:16:51.587402   75464 fix.go:216] guest clock: 1733347011.543416414
	I1204 21:16:51.587413   75464 fix.go:229] Guest: 2024-12-04 21:16:51.543416414 +0000 UTC Remote: 2024-12-04 21:16:51.486170924 +0000 UTC m=+270.217910239 (delta=57.24549ms)
	I1204 21:16:51.587442   75464 fix.go:200] guest clock delta is within tolerance: 57.24549ms
	I1204 21:16:51.587450   75464 start.go:83] releasing machines lock for "old-k8s-version-082859", held for 23.191479372s
	I1204 21:16:51.587484   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:51.587753   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetIP
	I1204 21:16:51.590521   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.590901   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:51.590933   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.591076   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:51.591556   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:51.591757   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .DriverName
	I1204 21:16:51.591857   75464 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 21:16:51.591897   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:51.592007   75464 ssh_runner.go:195] Run: cat /version.json
	I1204 21:16:51.592024   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHHostname
	I1204 21:16:51.594840   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.595093   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.595267   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:51.595303   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.595349   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:51.595425   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:51.595529   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:51.595614   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHPort
	I1204 21:16:51.595714   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.595851   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:51.595872   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHKeyPath
	I1204 21:16:51.596038   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetSSHUsername
	I1204 21:16:51.596091   75464 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa Username:docker}
	I1204 21:16:51.596192   75464 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/old-k8s-version-082859/id_rsa Username:docker}
	I1204 21:16:51.695215   75464 ssh_runner.go:195] Run: systemctl --version
	I1204 21:16:51.700624   75464 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 21:16:51.849457   75464 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 21:16:51.856420   75464 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 21:16:51.856506   75464 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 21:16:51.876202   75464 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 21:16:51.876230   75464 start.go:495] detecting cgroup driver to use...
	I1204 21:16:51.876311   75464 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 21:16:51.894549   75464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 21:16:51.911154   75464 docker.go:217] disabling cri-docker service (if available) ...
	I1204 21:16:51.911218   75464 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 21:16:51.924220   75464 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 21:16:51.936675   75464 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 21:16:52.058517   75464 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 21:16:52.224124   75464 docker.go:233] disabling docker service ...
	I1204 21:16:52.224202   75464 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 21:16:52.239294   75464 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 21:16:52.253779   75464 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 21:16:52.384577   75464 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 21:16:52.515024   75464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 21:16:52.529456   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 21:16:52.551978   75464 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1204 21:16:52.552043   75464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:52.563083   75464 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 21:16:52.563165   75464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:52.573409   75464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:52.583614   75464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:16:52.594313   75464 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 21:16:52.604389   75464 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 21:16:52.613326   75464 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 21:16:52.613402   75464 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 21:16:52.627764   75464 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 21:16:52.637330   75464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:16:52.755111   75464 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 21:16:52.844027   75464 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 21:16:52.844093   75464 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 21:16:52.848602   75464 start.go:563] Will wait 60s for crictl version
	I1204 21:16:52.848676   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:52.852127   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 21:16:52.892934   75464 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 21:16:52.893076   75464 ssh_runner.go:195] Run: crio --version
	I1204 21:16:52.925376   75464 ssh_runner.go:195] Run: crio --version
	I1204 21:16:52.954480   75464 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1204 21:16:52.955897   75464 main.go:141] libmachine: (old-k8s-version-082859) Calling .GetIP
	I1204 21:16:52.958964   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:52.959353   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:6e:ae", ip: ""} in network mk-old-k8s-version-082859: {Iface:virbr3 ExpiryTime:2024-12-04 22:16:39 +0000 UTC Type:0 Mac:52:54:00:30:6e:ae Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:old-k8s-version-082859 Clientid:01:52:54:00:30:6e:ae}
	I1204 21:16:52.959404   75464 main.go:141] libmachine: (old-k8s-version-082859) DBG | domain old-k8s-version-082859 has defined IP address 192.168.72.180 and MAC address 52:54:00:30:6e:ae in network mk-old-k8s-version-082859
	I1204 21:16:52.959641   75464 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1204 21:16:52.963601   75464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:16:52.975417   75464 kubeadm.go:883] updating cluster {Name:old-k8s-version-082859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-082859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 21:16:52.975578   75464 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1204 21:16:52.975644   75464 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:16:53.022050   75464 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1204 21:16:53.022128   75464 ssh_runner.go:195] Run: which lz4
	I1204 21:16:53.025986   75464 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1204 21:16:53.029928   75464 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1204 21:16:53.029962   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1204 21:16:54.579699   75464 crio.go:462] duration metric: took 1.553735037s to copy over tarball
	I1204 21:16:54.579783   75464 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1204 21:16:52.406305   75137 pod_ready.go:103] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"False"
	I1204 21:16:54.905969   75137 pod_ready.go:103] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"False"
	I1204 21:16:56.907170   75137 pod_ready.go:103] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"False"
	I1204 21:16:52.907033   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting to get IP...
	I1204 21:16:52.908195   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:52.908629   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:52.908717   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:52.908619   76731 retry.go:31] will retry after 296.289488ms: waiting for machine to come up
	I1204 21:16:53.207388   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:53.207971   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:53.208003   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:53.207935   76731 retry.go:31] will retry after 336.470328ms: waiting for machine to come up
	I1204 21:16:53.546821   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:53.547399   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:53.547439   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:53.547320   76731 retry.go:31] will retry after 368.42782ms: waiting for machine to come up
	I1204 21:16:53.917796   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:53.918528   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:53.918556   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:53.918431   76731 retry.go:31] will retry after 436.479409ms: waiting for machine to come up
	I1204 21:16:54.357126   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:54.357698   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:54.357732   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:54.357643   76731 retry.go:31] will retry after 752.80332ms: waiting for machine to come up
	I1204 21:16:55.112409   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:55.112880   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:55.112907   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:55.112827   76731 retry.go:31] will retry after 649.088241ms: waiting for machine to come up
	I1204 21:16:55.763391   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:55.763912   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:55.763956   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:55.763859   76731 retry.go:31] will retry after 1.037502744s: waiting for machine to come up
	I1204 21:16:56.803681   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:56.804080   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:56.804114   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:56.804035   76731 retry.go:31] will retry after 1.021780396s: waiting for machine to come up
	I1204 21:16:57.410381   75464 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.830568445s)
	I1204 21:16:57.410444   75464 crio.go:469] duration metric: took 2.830692434s to extract the tarball
	I1204 21:16:57.410455   75464 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1204 21:16:57.452008   75464 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:16:57.484771   75464 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1204 21:16:57.484800   75464 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1204 21:16:57.484880   75464 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:16:57.484917   75464 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:57.484929   75464 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:57.484945   75464 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:57.484995   75464 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1204 21:16:57.484922   75464 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:57.485007   75464 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:57.485039   75464 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1204 21:16:57.486618   75464 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1204 21:16:57.486824   75464 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:57.486847   75464 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:57.486892   75464 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:16:57.486905   75464 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:57.486828   75464 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:57.486944   75464 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:57.486829   75464 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1204 21:16:57.655649   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:57.656853   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:57.667236   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:57.689357   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:57.698439   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1204 21:16:57.726269   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1204 21:16:57.727235   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:57.747271   75464 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1204 21:16:57.747329   75464 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:57.747332   75464 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1204 21:16:57.747364   75464 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:57.747500   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.747402   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.757217   75464 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1204 21:16:57.757260   75464 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:57.757319   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.800711   75464 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1204 21:16:57.800752   75464 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:57.800803   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.814692   75464 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1204 21:16:57.814738   75464 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1204 21:16:57.814789   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.829660   75464 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1204 21:16:57.829698   75464 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:57.829706   75464 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1204 21:16:57.829738   75464 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1204 21:16:57.829752   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.829764   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:57.829773   75464 ssh_runner.go:195] Run: which crictl
	I1204 21:16:57.829821   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:57.829877   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:57.829909   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:57.829955   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1204 21:16:57.929510   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:57.929559   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:57.929579   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:57.929618   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1204 21:16:57.940211   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:57.940309   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:57.940359   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1204 21:16:58.051710   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1204 21:16:58.067494   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:58.067504   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1204 21:16:58.067573   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1204 21:16:58.083777   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1204 21:16:58.083833   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1204 21:16:58.083891   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1204 21:16:58.165786   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1204 21:16:58.229739   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1204 21:16:58.229803   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1204 21:16:58.229904   75464 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1204 21:16:58.229951   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1204 21:16:58.230001   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1204 21:16:58.230045   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1204 21:16:58.261333   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1204 21:16:58.271293   75464 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1204 21:16:58.405498   75464 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:16:58.549255   75464 cache_images.go:92] duration metric: took 1.064434163s to LoadCachedImages
	W1204 21:16:58.549354   75464 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I1204 21:16:58.549372   75464 kubeadm.go:934] updating node { 192.168.72.180 8443 v1.20.0 crio true true} ...
	I1204 21:16:58.549512   75464 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-082859 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-082859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 21:16:58.549591   75464 ssh_runner.go:195] Run: crio config
	I1204 21:16:58.610182   75464 cni.go:84] Creating CNI manager for ""
	I1204 21:16:58.610209   75464 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:16:58.610221   75464 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 21:16:58.610246   75464 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.180 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-082859 NodeName:old-k8s-version-082859 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1204 21:16:58.610432   75464 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.180
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-082859"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.180
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.180"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1204 21:16:58.610512   75464 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1204 21:16:58.620337   75464 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 21:16:58.620421   75464 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 21:16:58.629244   75464 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1204 21:16:58.654214   75464 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 21:16:58.671268   75464 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1204 21:16:58.688068   75464 ssh_runner.go:195] Run: grep 192.168.72.180	control-plane.minikube.internal$ /etc/hosts
	I1204 21:16:58.691513   75464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.180	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:16:58.703609   75464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:16:58.831984   75464 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:16:58.850324   75464 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859 for IP: 192.168.72.180
	I1204 21:16:58.850354   75464 certs.go:194] generating shared ca certs ...
	I1204 21:16:58.850382   75464 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:16:58.850592   75464 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 21:16:58.850658   75464 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 21:16:58.850677   75464 certs.go:256] generating profile certs ...
	I1204 21:16:58.850811   75464 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/client.key
	I1204 21:16:58.850892   75464 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/apiserver.key.8d7b2cb2
	I1204 21:16:58.850958   75464 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/proxy-client.key
	I1204 21:16:58.851169   75464 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 21:16:58.851232   75464 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 21:16:58.851249   75464 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 21:16:58.851294   75464 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 21:16:58.851343   75464 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 21:16:58.851420   75464 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 21:16:58.851508   75464 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:16:58.852607   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 21:16:58.880792   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 21:16:58.913556   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 21:16:58.943549   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 21:16:58.981463   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1204 21:16:59.012983   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 21:16:59.042980   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 21:16:59.077664   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/old-k8s-version-082859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 21:16:59.105764   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 21:16:59.129236   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 21:16:59.153845   75464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 21:16:59.177201   75464 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 21:16:59.193861   75464 ssh_runner.go:195] Run: openssl version
	I1204 21:16:59.199898   75464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 21:16:59.211323   75464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:16:59.215867   75464 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:16:59.215922   75464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:16:59.221792   75464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 21:16:59.232621   75464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 21:16:59.243171   75464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 21:16:59.247786   75464 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 21:16:59.247847   75464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 21:16:59.253293   75464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 21:16:59.264011   75464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 21:16:59.274696   75464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 21:16:59.279083   75464 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 21:16:59.279142   75464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 21:16:59.284885   75464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 21:16:59.295857   75464 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 21:16:59.300285   75464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 21:16:59.306222   75464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 21:16:59.312113   75464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 21:16:59.318289   75464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 21:16:59.323933   75464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 21:16:59.329593   75464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1204 21:16:59.336271   75464 kubeadm.go:392] StartCluster: {Name:old-k8s-version-082859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-082859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:16:59.336388   75464 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 21:16:59.336445   75464 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:16:59.377102   75464 cri.go:89] found id: ""
	I1204 21:16:59.377186   75464 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 21:16:59.387322   75464 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1204 21:16:59.387348   75464 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1204 21:16:59.387426   75464 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1204 21:16:59.397012   75464 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1204 21:16:59.398490   75464 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-082859" does not appear in /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:16:59.399594   75464 kubeconfig.go:62] /home/jenkins/minikube-integration/19985-10581/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-082859" cluster setting kubeconfig missing "old-k8s-version-082859" context setting]
	I1204 21:16:59.401105   75464 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/kubeconfig: {Name:mk338cb7deb77a607d0c199d94a556bdfd19bef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:16:59.519931   75464 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1204 21:16:59.529805   75464 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.180
	I1204 21:16:59.529848   75464 kubeadm.go:1160] stopping kube-system containers ...
	I1204 21:16:59.529862   75464 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1204 21:16:59.529917   75464 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:16:59.564385   75464 cri.go:89] found id: ""
	I1204 21:16:59.564455   75464 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1204 21:16:59.580273   75464 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:16:59.590510   75464 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:16:59.590536   75464 kubeadm.go:157] found existing configuration files:
	
	I1204 21:16:59.590591   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:16:59.599597   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:16:59.599665   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:16:59.609075   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:16:59.618209   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:16:59.618281   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:16:59.627558   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:16:59.636062   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:16:59.636117   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:16:59.645337   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:16:59.653985   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:16:59.654027   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
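
The four grep/rm pairs above are the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is deleted otherwise so the kubeadm phases that follow can regenerate it. A minimal stand-alone sketch of that loop (not minikube's actual ssh_runner code; the endpoint and file list are taken straight from the log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint is absent or the file does not
		// exist, which is the "may not be in ... - will remove" case in the log.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("removing stale %s\n", f)
			if rmErr := exec.Command("sudo", "rm", "-f", f).Run(); rmErr != nil {
				fmt.Printf("failed to remove %s: %v\n", f, rmErr)
			}
		}
	}
}
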
	I1204 21:16:59.662796   75464 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:16:59.671564   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:16:59.805252   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:00.525460   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:00.762769   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:00.873276   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
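
Rather than a full `kubeadm init`, the restart path above replays individual init phases against the freshly copied /var/tmp/minikube/kubeadm.yaml. A hedged sketch of that sequence, assuming the v1.20.0 binaries directory shown in the log:

package main

import (
	"log"
	"os/exec"
)

func main() {
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, phase := range phases {
		// Same shape as the commands in the log: run kubeadm from the pinned
		// binaries directory against the rendered kubeadm.yaml.
		cmd := `sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase ` +
			phase + ` --config /var/tmp/minikube/kubeadm.yaml`
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			log.Fatalf("phase %q failed: %v\n%s", phase, err, out)
		}
	}
}
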
	I1204 21:17:00.988761   75464 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:17:00.988887   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:16:58.405630   75137 pod_ready.go:93] pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace has status "Ready":"True"
	I1204 21:16:58.405654   75137 pod_ready.go:82] duration metric: took 8.006745651s for pod "coredns-7c65d6cfc9-ct5xn" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:58.405669   75137 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:58.411605   75137 pod_ready.go:93] pod "etcd-embed-certs-566991" in "kube-system" namespace has status "Ready":"True"
	I1204 21:16:58.411634   75137 pod_ready.go:82] duration metric: took 5.952577ms for pod "etcd-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:58.411646   75137 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:58.421660   75137 pod_ready.go:93] pod "kube-apiserver-embed-certs-566991" in "kube-system" namespace has status "Ready":"True"
	I1204 21:16:58.421691   75137 pod_ready.go:82] duration metric: took 10.035417ms for pod "kube-apiserver-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:58.421708   75137 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:59.044823   75137 pod_ready.go:93] pod "kube-controller-manager-embed-certs-566991" in "kube-system" namespace has status "Ready":"True"
	I1204 21:16:59.044853   75137 pod_ready.go:82] duration metric: took 623.135154ms for pod "kube-controller-manager-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:59.044867   75137 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4fv72" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:59.051742   75137 pod_ready.go:93] pod "kube-proxy-4fv72" in "kube-system" namespace has status "Ready":"True"
	I1204 21:16:59.051768   75137 pod_ready.go:82] duration metric: took 6.892711ms for pod "kube-proxy-4fv72" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:59.051782   75137 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:59.058398   75137 pod_ready.go:93] pod "kube-scheduler-embed-certs-566991" in "kube-system" namespace has status "Ready":"True"
	I1204 21:16:59.058429   75137 pod_ready.go:82] duration metric: took 6.638291ms for pod "kube-scheduler-embed-certs-566991" in "kube-system" namespace to be "Ready" ...
	I1204 21:16:59.058444   75137 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:01.066575   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
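
Process 75137 is the embed-certs-566991 run working through its pod_ready checks: coredns and every control-plane pod report Ready within seconds, while metrics-server-6867b74b74-9vlcd stays not-Ready (it keeps logging status "False" further down). An equivalent manual check, assuming kubectl is on PATH and the kubeconfig points at that cluster:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{
		"etcd-embed-certs-566991",
		"kube-apiserver-embed-certs-566991",
		"kube-controller-manager-embed-certs-566991",
		"kube-scheduler-embed-certs-566991",
		"metrics-server-6867b74b74-9vlcd",
	}
	for _, p := range pods {
		// Mirrors the 6m0s per-pod wait used by pod_ready.go in the log.
		out, err := exec.Command("kubectl", "-n", "kube-system", "wait",
			"--for=condition=Ready", "pod/"+p, "--timeout=6m").CombinedOutput()
		fmt.Printf("%s: %s(err=%v)\n", p, out, err)
	}
}
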
	I1204 21:16:57.826965   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:57.827542   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:57.827566   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:57.827491   76731 retry.go:31] will retry after 1.453756282s: waiting for machine to come up
	I1204 21:16:59.282497   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:16:59.283001   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:16:59.283025   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:16:59.282950   76731 retry.go:31] will retry after 1.921010852s: waiting for machine to come up
	I1204 21:17:01.205877   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:01.206359   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:17:01.206398   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:17:01.206301   76731 retry.go:31] will retry after 2.279555962s: waiting for machine to come up
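
Meanwhile process 75746 (default-k8s-diff-port-439360) is still waiting for its restarted VM to pick up a DHCP lease; note the retry delays growing from roughly 1.5s toward 4s. A small sketch of that kind of growing-backoff poll (illustrative only; the real code asks libvirt for the domain's lease via the kvm2 driver rather than shelling out):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	mac := "52:54:00:ec:46:31" // domain MAC address, from the log
	delay := time.Second
	for attempt := 1; attempt <= 10; attempt++ {
		// Ask libvirt whether this network has handed the MAC a lease yet.
		out, err := exec.Command("virsh", "net-dhcp-leases",
			"mk-default-k8s-diff-port-439360").Output()
		if err == nil && strings.Contains(string(out), mac) {
			fmt.Println("machine has an IP address")
			return
		}
		fmt.Printf("retry %d: waiting %s for machine to come up\n", attempt, delay)
		time.Sleep(delay)
		delay += delay / 2 // grow the wait, roughly like the log's 1.5s -> 4.3s progression
	}
	fmt.Println("gave up waiting for an IP")
}
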
	I1204 21:17:01.489204   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:01.989039   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:02.489053   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:02.988923   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:03.489839   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:03.989130   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:04.489603   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:04.989625   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:05.489951   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:05.989787   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
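
Back on the old-k8s-version-082859 restart (process 75464), kubeadm has started the control-plane static pods and minikube now polls for the apiserver process roughly every 500ms, as the timestamps above show. A stand-alone version of that wait loop (illustrative, not minikube's code; the deadline is an assumption):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(5 * time.Minute) // assumed overall timeout
	for time.Now().Before(deadline) {
		// pgrep exits 0 only once a matching kube-apiserver process exists.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("kube-apiserver process is up")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for kube-apiserver process")
}
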
	I1204 21:17:03.066938   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:05.565106   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:03.488557   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:03.488993   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:17:03.489064   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:17:03.488956   76731 retry.go:31] will retry after 2.80928606s: waiting for machine to come up
	I1204 21:17:06.300625   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:06.301069   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | unable to find current IP address of domain default-k8s-diff-port-439360 in network mk-default-k8s-diff-port-439360
	I1204 21:17:06.301096   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | I1204 21:17:06.301025   76731 retry.go:31] will retry after 4.272897585s: waiting for machine to come up
	I1204 21:17:06.489826   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:06.989767   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:07.489954   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:07.989772   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:08.488905   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:08.989834   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:09.489780   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:09.989021   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:10.489348   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:10.989123   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:08.065690   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:10.566216   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:12.055921   75012 start.go:364] duration metric: took 57.468802465s to acquireMachinesLock for "no-preload-534766"
	I1204 21:17:12.055984   75012 start.go:96] Skipping create...Using existing machine configuration
	I1204 21:17:12.055996   75012 fix.go:54] fixHost starting: 
	I1204 21:17:12.056471   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:17:12.056520   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:17:12.074414   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46455
	I1204 21:17:12.074839   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:17:12.075295   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:17:12.075318   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:17:12.075670   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:17:12.075864   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:17:12.076055   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetState
	I1204 21:17:12.077496   75012 fix.go:112] recreateIfNeeded on no-preload-534766: state=Stopped err=<nil>
	I1204 21:17:12.077518   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	W1204 21:17:12.077683   75012 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 21:17:12.079503   75012 out.go:177] * Restarting existing kvm2 VM for "no-preload-534766" ...
	I1204 21:17:10.578907   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.579430   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Found IP for machine: 192.168.50.171
	I1204 21:17:10.579465   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Reserving static IP address...
	I1204 21:17:10.579482   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has current primary IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.579876   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-439360", mac: "52:54:00:ec:46:31", ip: "192.168.50.171"} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:10.579899   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | skip adding static IP to network mk-default-k8s-diff-port-439360 - found existing host DHCP lease matching {name: "default-k8s-diff-port-439360", mac: "52:54:00:ec:46:31", ip: "192.168.50.171"}
	I1204 21:17:10.579913   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Reserved static IP address: 192.168.50.171
	I1204 21:17:10.579923   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Waiting for SSH to be available...
	I1204 21:17:10.579933   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Getting to WaitForSSH function...
	I1204 21:17:10.582141   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.582536   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:10.582564   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.582763   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Using SSH client type: external
	I1204 21:17:10.582808   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa (-rw-------)
	I1204 21:17:10.582840   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.171 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 21:17:10.582851   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | About to run SSH command:
	I1204 21:17:10.582859   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | exit 0
	I1204 21:17:10.707352   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | SSH cmd err, output: <nil>: 
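
The `exit 0` probe above is libmachine's WaitForSSH: it keeps invoking the external ssh client with a no-op command until the guest's sshd accepts the key. A rough shell-out equivalent using a subset of the options the log prints (the key path and address are specific to this Jenkins run):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa",
		"docker@192.168.50.171",
		"exit 0",
	}
	for i := 0; i < 30; i++ {
		if exec.Command("ssh", args...).Run() == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("SSH never became available")
}
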
	I1204 21:17:10.707801   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetConfigRaw
	I1204 21:17:10.708495   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetIP
	I1204 21:17:10.710799   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.711127   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:10.711159   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.711348   75746 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/config.json ...
	I1204 21:17:10.711562   75746 machine.go:93] provisionDockerMachine start ...
	I1204 21:17:10.711579   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:17:10.711817   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:10.713971   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.714317   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:10.714344   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.714495   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:10.714683   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:10.714811   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:10.714964   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:10.715109   75746 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:10.715298   75746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.171 22 <nil> <nil>}
	I1204 21:17:10.715311   75746 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 21:17:10.823410   75746 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 21:17:10.823443   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetMachineName
	I1204 21:17:10.823718   75746 buildroot.go:166] provisioning hostname "default-k8s-diff-port-439360"
	I1204 21:17:10.823741   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetMachineName
	I1204 21:17:10.823955   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:10.826607   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.826953   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:10.826977   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.827140   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:10.827331   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:10.827533   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:10.827676   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:10.827852   75746 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:10.828068   75746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.171 22 <nil> <nil>}
	I1204 21:17:10.828084   75746 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-439360 && echo "default-k8s-diff-port-439360" | sudo tee /etc/hostname
	I1204 21:17:10.948599   75746 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-439360
	
	I1204 21:17:10.948633   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:10.951336   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.951719   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:10.951765   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:10.951905   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:10.952108   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:10.952276   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:10.952423   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:10.952570   75746 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:10.952753   75746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.171 22 <nil> <nil>}
	I1204 21:17:10.952777   75746 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-439360' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-439360/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-439360' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 21:17:11.072543   75746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 21:17:11.072580   75746 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 21:17:11.072611   75746 buildroot.go:174] setting up certificates
	I1204 21:17:11.072620   75746 provision.go:84] configureAuth start
	I1204 21:17:11.072629   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetMachineName
	I1204 21:17:11.072933   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetIP
	I1204 21:17:11.075443   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.075822   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:11.075868   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.075965   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:11.077957   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.078286   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:11.078319   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.078449   75746 provision.go:143] copyHostCerts
	I1204 21:17:11.078506   75746 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 21:17:11.078517   75746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 21:17:11.078571   75746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 21:17:11.078671   75746 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 21:17:11.078681   75746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 21:17:11.078702   75746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 21:17:11.078752   75746 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 21:17:11.078759   75746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 21:17:11.078776   75746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 21:17:11.078819   75746 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-439360 san=[127.0.0.1 192.168.50.171 default-k8s-diff-port-439360 localhost minikube]
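
provision.go then mints a fresh machine server certificate whose SANs cover loopback, the VM's IP and the usual hostnames. The real flow signs it with the minikube CA (ca.pem/ca-key.pem); the self-signed sketch below only illustrates how those SANs and the profile's CertExpiration end up in the certificate:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-439360"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from this profile's config
		DNSNames:     []string{"default-k8s-diff-port-439360", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.171")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed here for brevity; minikube signs with its CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
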
	I1204 21:17:11.404256   75746 provision.go:177] copyRemoteCerts
	I1204 21:17:11.404320   75746 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 21:17:11.404348   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:11.406963   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.407316   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:11.407343   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.407542   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:11.407706   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:11.407881   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:11.407991   75746 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa Username:docker}
	I1204 21:17:11.493691   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 21:17:11.519867   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1204 21:17:11.542295   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1204 21:17:11.564775   75746 provision.go:87] duration metric: took 492.141737ms to configureAuth
	I1204 21:17:11.564801   75746 buildroot.go:189] setting minikube options for container-runtime
	I1204 21:17:11.564975   75746 config.go:182] Loaded profile config "default-k8s-diff-port-439360": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:17:11.565063   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:11.567990   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.568364   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:11.568394   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.568556   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:11.568780   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:11.568951   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:11.569102   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:11.569277   75746 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:11.569476   75746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.171 22 <nil> <nil>}
	I1204 21:17:11.569494   75746 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 21:17:11.809413   75746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 21:17:11.809462   75746 machine.go:96] duration metric: took 1.097886094s to provisionDockerMachine
	I1204 21:17:11.809482   75746 start.go:293] postStartSetup for "default-k8s-diff-port-439360" (driver="kvm2")
	I1204 21:17:11.809493   75746 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 21:17:11.809510   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:17:11.809913   75746 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 21:17:11.809954   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:11.812724   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.813137   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:11.813183   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.813276   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:11.813481   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:11.813659   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:11.813807   75746 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa Username:docker}
	I1204 21:17:11.901984   75746 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 21:17:11.906206   75746 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 21:17:11.906243   75746 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 21:17:11.906323   75746 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 21:17:11.906421   75746 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 21:17:11.906550   75746 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 21:17:11.915692   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:17:11.938378   75746 start.go:296] duration metric: took 128.880842ms for postStartSetup
	I1204 21:17:11.938425   75746 fix.go:56] duration metric: took 20.350760099s for fixHost
	I1204 21:17:11.938449   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:11.941283   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.941662   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:11.941683   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:11.941814   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:11.942015   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:11.942207   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:11.942314   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:11.942446   75746 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:11.942630   75746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.171 22 <nil> <nil>}
	I1204 21:17:11.942643   75746 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 21:17:12.055721   75746 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733347032.018698016
	
	I1204 21:17:12.055741   75746 fix.go:216] guest clock: 1733347032.018698016
	I1204 21:17:12.055761   75746 fix.go:229] Guest: 2024-12-04 21:17:12.018698016 +0000 UTC Remote: 2024-12-04 21:17:11.938429419 +0000 UTC m=+249.319395751 (delta=80.268597ms)
	I1204 21:17:12.055787   75746 fix.go:200] guest clock delta is within tolerance: 80.268597ms
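
fix.go compares the guest clock (read via `date +%s.%N` over SSH) against the host-side wall clock and only resyncs when the delta exceeds a tolerance; here the ~80ms skew is accepted. A local illustration of the same comparison (the guest read is stubbed with a plain `date` call, and the tolerance is an assumption for illustration):

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Stand-in for running `date +%s.%N` on the guest over SSH.
	out, err := exec.Command("date", "+%s.%N").Output()
	if err != nil {
		panic(err)
	}
	guestSec, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(guestSec*float64(time.Second)))
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	if delta < 2*time.Second { // assumed tolerance
		fmt.Printf("guest clock delta %s is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %s too large, would resync\n", delta)
	}
}
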
	I1204 21:17:12.055794   75746 start.go:83] releasing machines lock for "default-k8s-diff-port-439360", held for 20.468177017s
	I1204 21:17:12.055827   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:17:12.056125   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetIP
	I1204 21:17:12.058787   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:12.059284   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:12.059312   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:12.059488   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:17:12.060013   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:17:12.060202   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:17:12.060290   75746 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 21:17:12.060342   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:12.060462   75746 ssh_runner.go:195] Run: cat /version.json
	I1204 21:17:12.060489   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:17:12.063286   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:12.063423   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:12.063682   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:12.063746   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:12.063837   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:12.063938   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:12.064005   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:12.064065   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:12.064231   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:17:12.064305   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:12.064403   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:17:12.064563   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:17:12.064588   75746 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa Username:docker}
	I1204 21:17:12.064695   75746 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa Username:docker}
	I1204 21:17:12.144087   75746 ssh_runner.go:195] Run: systemctl --version
	I1204 21:17:12.168976   75746 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 21:17:12.317913   75746 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 21:17:12.324234   75746 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 21:17:12.324327   75746 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 21:17:12.344571   75746 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 21:17:12.344601   75746 start.go:495] detecting cgroup driver to use...
	I1204 21:17:12.344674   75746 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 21:17:12.361232   75746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 21:17:12.375069   75746 docker.go:217] disabling cri-docker service (if available) ...
	I1204 21:17:12.375139   75746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 21:17:12.388561   75746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 21:17:12.404338   75746 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 21:17:12.527885   75746 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 21:17:12.716924   75746 docker.go:233] disabling docker service ...
	I1204 21:17:12.717011   75746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 21:17:12.735556   75746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 21:17:12.751951   75746 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 21:17:12.872456   75746 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 21:17:12.997321   75746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 21:17:13.012576   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 21:17:13.032524   75746 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 21:17:13.032590   75746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:13.042551   75746 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 21:17:13.042612   75746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:13.052819   75746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:13.063234   75746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:13.074023   75746 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 21:17:13.084457   75746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:13.094614   75746 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:13.112649   75746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:13.122898   75746 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 21:17:13.132312   75746 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 21:17:13.132357   75746 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 21:17:13.145174   75746 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 21:17:13.154748   75746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:17:13.280272   75746 ssh_runner.go:195] Run: sudo systemctl restart crio
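
The sed calls above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the cgroupfs cgroup manager and move conmon into the pod cgroup, after which the daemon is reloaded and crio restarted. Condensed into one sketch with the same paths and values as the log (the default_sysctls/ip_unprivileged_port_start edits and CNI cleanup are omitted for brevity; this would run on the node, not the host):

package main

import (
	"log"
	"os/exec"
)

func main() {
	steps := []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart crio`,
	}
	for _, s := range steps {
		if out, err := exec.Command("/bin/bash", "-c", s).CombinedOutput(); err != nil {
			log.Fatalf("%q failed: %v\n%s", s, err, out)
		}
	}
}
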
	I1204 21:17:13.375481   75746 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 21:17:13.375579   75746 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 21:17:13.380388   75746 start.go:563] Will wait 60s for crictl version
	I1204 21:17:13.380450   75746 ssh_runner.go:195] Run: which crictl
	I1204 21:17:13.384263   75746 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 21:17:13.426552   75746 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 21:17:13.426644   75746 ssh_runner.go:195] Run: crio --version
	I1204 21:17:13.464906   75746 ssh_runner.go:195] Run: crio --version
	I1204 21:17:13.493254   75746 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 21:17:11.488961   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:11.989692   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:12.489695   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:12.989533   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:13.489139   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:13.989580   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:14.488981   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:14.989089   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:15.489662   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:15.989301   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:13.069008   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:15.565897   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:12.080766   75012 main.go:141] libmachine: (no-preload-534766) Calling .Start
	I1204 21:17:12.080951   75012 main.go:141] libmachine: (no-preload-534766) Ensuring networks are active...
	I1204 21:17:12.081751   75012 main.go:141] libmachine: (no-preload-534766) Ensuring network default is active
	I1204 21:17:12.082112   75012 main.go:141] libmachine: (no-preload-534766) Ensuring network mk-no-preload-534766 is active
	I1204 21:17:12.082532   75012 main.go:141] libmachine: (no-preload-534766) Getting domain xml...
	I1204 21:17:12.083134   75012 main.go:141] libmachine: (no-preload-534766) Creating domain...
	I1204 21:17:13.416717   75012 main.go:141] libmachine: (no-preload-534766) Waiting to get IP...
	I1204 21:17:13.417831   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:13.418295   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:13.418381   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:13.418275   76934 retry.go:31] will retry after 213.310094ms: waiting for machine to come up
	I1204 21:17:13.632755   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:13.633250   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:13.633283   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:13.633181   76934 retry.go:31] will retry after 325.003683ms: waiting for machine to come up
	I1204 21:17:13.959863   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:13.960467   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:13.960503   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:13.960377   76934 retry.go:31] will retry after 392.851447ms: waiting for machine to come up
	I1204 21:17:14.355246   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:14.355720   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:14.355748   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:14.355681   76934 retry.go:31] will retry after 378.518603ms: waiting for machine to come up
	I1204 21:17:14.736283   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:14.737039   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:14.737105   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:14.737017   76934 retry.go:31] will retry after 536.132786ms: waiting for machine to come up
	I1204 21:17:15.274405   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:15.274929   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:15.274962   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:15.274891   76934 retry.go:31] will retry after 606.890197ms: waiting for machine to come up
	I1204 21:17:15.884088   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:15.884700   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:15.884745   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:15.884632   76934 retry.go:31] will retry after 1.088992333s: waiting for machine to come up
	I1204 21:17:16.975049   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:16.975514   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:16.975545   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:16.975458   76934 retry.go:31] will retry after 925.830658ms: waiting for machine to come up
	I1204 21:17:13.494527   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetIP
	I1204 21:17:13.498111   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:13.498524   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:17:13.498560   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:17:13.498792   75746 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1204 21:17:13.503083   75746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:17:13.518900   75746 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-439360 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-439360 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.171 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 21:17:13.519043   75746 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 21:17:13.519134   75746 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:17:13.562529   75746 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1204 21:17:13.562643   75746 ssh_runner.go:195] Run: which lz4
	I1204 21:17:13.566970   75746 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1204 21:17:13.571398   75746 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1204 21:17:13.571447   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1204 21:17:14.863136   75746 crio.go:462] duration metric: took 1.296192361s to copy over tarball
	I1204 21:17:14.863225   75746 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1204 21:17:17.017949   75746 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.154693143s)
	I1204 21:17:17.017978   75746 crio.go:469] duration metric: took 2.154810491s to extract the tarball
	I1204 21:17:17.017988   75746 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1204 21:17:17.053935   75746 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:17:17.099773   75746 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 21:17:17.099800   75746 cache_images.go:84] Images are preloaded, skipping loading
	I1204 21:17:17.099809   75746 kubeadm.go:934] updating node { 192.168.50.171 8444 v1.31.2 crio true true} ...
	I1204 21:17:17.099909   75746 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-439360 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.171
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-439360 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 21:17:17.099973   75746 ssh_runner.go:195] Run: crio config
	I1204 21:17:17.145449   75746 cni.go:84] Creating CNI manager for ""
	I1204 21:17:17.145481   75746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:17:17.145493   75746 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 21:17:17.145525   75746 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.171 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-439360 NodeName:default-k8s-diff-port-439360 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.171"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.171 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 21:17:17.145689   75746 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.171
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-439360"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.171"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.171"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1204 21:17:17.145761   75746 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 21:17:17.156960   75746 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 21:17:17.157034   75746 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 21:17:17.169101   75746 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1204 21:17:17.186548   75746 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 21:17:17.203582   75746 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I1204 21:17:17.220406   75746 ssh_runner.go:195] Run: grep 192.168.50.171	control-plane.minikube.internal$ /etc/hosts
	I1204 21:17:17.224281   75746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.171	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:17:17.237759   75746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:17:17.368925   75746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:17:17.389017   75746 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360 for IP: 192.168.50.171
	I1204 21:17:17.389042   75746 certs.go:194] generating shared ca certs ...
	I1204 21:17:17.389062   75746 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:17:17.389231   75746 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 21:17:17.389302   75746 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 21:17:17.389314   75746 certs.go:256] generating profile certs ...
	I1204 21:17:17.389411   75746 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/client.key
	I1204 21:17:17.389507   75746 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/apiserver.key.b9e485ac
	I1204 21:17:17.389583   75746 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/proxy-client.key
	I1204 21:17:17.389747   75746 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 21:17:17.389784   75746 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 21:17:17.389793   75746 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 21:17:17.389820   75746 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 21:17:17.389842   75746 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 21:17:17.389862   75746 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 21:17:17.389899   75746 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:17:17.390549   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 21:17:17.427087   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 21:17:17.456331   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 21:17:17.481876   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 21:17:17.511173   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1204 21:17:17.535825   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1204 21:17:17.559475   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 21:17:17.585825   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/default-k8s-diff-port-439360/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 21:17:17.611495   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 21:17:17.634425   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 21:17:16.489912   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:16.989712   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:17.489508   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:17.989874   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:18.489589   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:18.989133   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:19.489001   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:19.989088   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:20.489170   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:20.989135   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:17.566756   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:20.064248   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:17.903583   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:17.904083   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:17.904130   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:17.904041   76934 retry.go:31] will retry after 1.281115457s: waiting for machine to come up
	I1204 21:17:19.187069   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:19.187625   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:19.187648   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:19.187594   76934 retry.go:31] will retry after 2.116897616s: waiting for machine to come up
	I1204 21:17:21.307136   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:21.307702   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:21.307738   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:21.307639   76934 retry.go:31] will retry after 1.769079667s: waiting for machine to come up
	I1204 21:17:17.658253   75746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 21:17:17.680554   75746 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 21:17:17.696563   75746 ssh_runner.go:195] Run: openssl version
	I1204 21:17:17.701997   75746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 21:17:17.711909   75746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 21:17:17.716111   75746 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 21:17:17.716163   75746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 21:17:17.721829   75746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 21:17:17.732808   75746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 21:17:17.742766   75746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:17:17.746881   75746 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:17:17.746939   75746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:17:17.752221   75746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 21:17:17.761915   75746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 21:17:17.771473   75746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 21:17:17.775476   75746 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 21:17:17.775527   75746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 21:17:17.780671   75746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 21:17:17.790179   75746 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 21:17:17.794246   75746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 21:17:17.799753   75746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 21:17:17.805228   75746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 21:17:17.810634   75746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 21:17:17.815912   75746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 21:17:17.821125   75746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1204 21:17:17.826717   75746 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-439360 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-439360 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.171 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:17:17.826802   75746 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 21:17:17.826852   75746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:17:17.863070   75746 cri.go:89] found id: ""
	I1204 21:17:17.863157   75746 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 21:17:17.872649   75746 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1204 21:17:17.872668   75746 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1204 21:17:17.872706   75746 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1204 21:17:17.881981   75746 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1204 21:17:17.883029   75746 kubeconfig.go:125] found "default-k8s-diff-port-439360" server: "https://192.168.50.171:8444"
	I1204 21:17:17.885369   75746 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1204 21:17:17.894730   75746 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.171
	I1204 21:17:17.894765   75746 kubeadm.go:1160] stopping kube-system containers ...
	I1204 21:17:17.894780   75746 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1204 21:17:17.894845   75746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:17:17.942493   75746 cri.go:89] found id: ""
	I1204 21:17:17.942588   75746 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1204 21:17:17.959606   75746 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:17:17.968768   75746 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:17:17.968793   75746 kubeadm.go:157] found existing configuration files:
	
	I1204 21:17:17.968850   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1204 21:17:17.977375   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:17:17.977437   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:17:17.986188   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1204 21:17:17.995409   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:17:17.995464   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:17:18.004396   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1204 21:17:18.012964   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:17:18.013033   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:17:18.021927   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1204 21:17:18.030158   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:17:18.030212   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:17:18.038704   75746 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:17:18.047518   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:18.157472   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:18.779212   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:18.992111   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:19.080195   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:19.185206   75746 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:17:19.185296   75746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:19.686192   75746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:20.186010   75746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:20.685422   75746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:21.185548   75746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:21.221082   75746 api_server.go:72] duration metric: took 2.035875276s to wait for apiserver process to appear ...
	I1204 21:17:21.221111   75746 api_server.go:88] waiting for apiserver healthz status ...
	I1204 21:17:21.221130   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:17:21.221582   75746 api_server.go:269] stopped: https://192.168.50.171:8444/healthz: Get "https://192.168.50.171:8444/healthz": dial tcp 192.168.50.171:8444: connect: connection refused
	I1204 21:17:21.722031   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:17:24.428658   75746 api_server.go:279] https://192.168.50.171:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1204 21:17:24.428710   75746 api_server.go:103] status: https://192.168.50.171:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1204 21:17:24.428730   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:17:24.469367   75746 api_server.go:279] https://192.168.50.171:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1204 21:17:24.469398   75746 api_server.go:103] status: https://192.168.50.171:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1204 21:17:24.721854   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:17:24.728276   75746 api_server.go:279] https://192.168.50.171:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:17:24.728306   75746 api_server.go:103] status: https://192.168.50.171:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:17:25.221658   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:17:25.226223   75746 api_server.go:279] https://192.168.50.171:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:17:25.226274   75746 api_server.go:103] status: https://192.168.50.171:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:17:25.722014   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:17:25.727726   75746 api_server.go:279] https://192.168.50.171:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:17:25.727764   75746 api_server.go:103] status: https://192.168.50.171:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:17:26.221331   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:17:26.226659   75746 api_server.go:279] https://192.168.50.171:8444/healthz returned 200:
	ok
	I1204 21:17:26.234549   75746 api_server.go:141] control plane version: v1.31.2
	I1204 21:17:26.234585   75746 api_server.go:131] duration metric: took 5.013466041s to wait for apiserver health ...
	I1204 21:17:26.234596   75746 cni.go:84] Creating CNI manager for ""
	I1204 21:17:26.234605   75746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:17:26.236522   75746 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 21:17:21.489414   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:21.989078   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:22.488990   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:22.989053   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:23.489867   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:23.989164   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:24.489512   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:24.989912   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:25.489849   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:25.988925   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:22.066101   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:24.067073   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:26.565954   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:23.077909   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:23.078294   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:23.078332   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:23.078234   76934 retry.go:31] will retry after 2.199950593s: waiting for machine to come up
	I1204 21:17:25.280397   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:25.280766   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:25.280794   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:25.280713   76934 retry.go:31] will retry after 3.443879968s: waiting for machine to come up
	I1204 21:17:26.237773   75746 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 21:17:26.260416   75746 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1204 21:17:26.287032   75746 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 21:17:26.301607   75746 system_pods.go:59] 8 kube-system pods found
	I1204 21:17:26.301658   75746 system_pods.go:61] "coredns-7c65d6cfc9-8bn89" [ff71708b-97a0-44fd-8cc4-26a36e93919a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1204 21:17:26.301671   75746 system_pods.go:61] "etcd-default-k8s-diff-port-439360" [38ae5f77-f57b-4024-a2ba-1e83e08c303b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1204 21:17:26.301682   75746 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-439360" [47616d96-a85b-47d8-a944-1da01cf7bef6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1204 21:17:26.301693   75746 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-439360" [766c13c3-3bcb-4775-80cf-608e9b207a10] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1204 21:17:26.301703   75746 system_pods.go:61] "kube-proxy-tn2xl" [8485df8b-b984-45c1-8efc-3e910028071a] Running
	I1204 21:17:26.301713   75746 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-439360" [654e74eb-878c-4680-8b68-13bb788a781e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1204 21:17:26.301725   75746 system_pods.go:61] "metrics-server-6867b74b74-lbx5p" [ca850081-0045-4637-b4ac-262ad00ba6d2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:17:26.301731   75746 system_pods.go:61] "storage-provisioner" [b2c9285c-35f2-43b4-8468-17ecef9fe8fc] Running
	I1204 21:17:26.301742   75746 system_pods.go:74] duration metric: took 14.680372ms to wait for pod list to return data ...
	I1204 21:17:26.301756   75746 node_conditions.go:102] verifying NodePressure condition ...
	I1204 21:17:26.305647   75746 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 21:17:26.305680   75746 node_conditions.go:123] node cpu capacity is 2
	I1204 21:17:26.305695   75746 node_conditions.go:105] duration metric: took 3.930691ms to run NodePressure ...
	I1204 21:17:26.305716   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:26.563972   75746 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1204 21:17:26.573253   75746 kubeadm.go:739] kubelet initialised
	I1204 21:17:26.573273   75746 kubeadm.go:740] duration metric: took 9.267719ms waiting for restarted kubelet to initialise ...
	I1204 21:17:26.573281   75746 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:17:26.577507   75746 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-8bn89" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:26.489765   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:26.989037   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:27.489507   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:27.989848   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:28.489237   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:28.989067   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:29.488963   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:29.989855   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:30.489905   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:30.989109   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:29.065212   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:31.065889   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:28.726031   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:28.726400   75012 main.go:141] libmachine: (no-preload-534766) DBG | unable to find current IP address of domain no-preload-534766 in network mk-no-preload-534766
	I1204 21:17:28.726452   75012 main.go:141] libmachine: (no-preload-534766) DBG | I1204 21:17:28.726364   76934 retry.go:31] will retry after 3.566067517s: waiting for machine to come up
	I1204 21:17:28.585182   75746 pod_ready.go:103] pod "coredns-7c65d6cfc9-8bn89" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:31.084886   75746 pod_ready.go:103] pod "coredns-7c65d6cfc9-8bn89" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:32.294584   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.295040   75012 main.go:141] libmachine: (no-preload-534766) Found IP for machine: 192.168.61.174
	I1204 21:17:32.295074   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has current primary IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.295086   75012 main.go:141] libmachine: (no-preload-534766) Reserving static IP address...
	I1204 21:17:32.295538   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "no-preload-534766", mac: "52:54:00:85:f1:d6", ip: "192.168.61.174"} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.295572   75012 main.go:141] libmachine: (no-preload-534766) Reserved static IP address: 192.168.61.174
	I1204 21:17:32.295590   75012 main.go:141] libmachine: (no-preload-534766) DBG | skip adding static IP to network mk-no-preload-534766 - found existing host DHCP lease matching {name: "no-preload-534766", mac: "52:54:00:85:f1:d6", ip: "192.168.61.174"}
	I1204 21:17:32.295607   75012 main.go:141] libmachine: (no-preload-534766) DBG | Getting to WaitForSSH function...
	I1204 21:17:32.295621   75012 main.go:141] libmachine: (no-preload-534766) Waiting for SSH to be available...
	I1204 21:17:32.297607   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.298000   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.298039   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.298174   75012 main.go:141] libmachine: (no-preload-534766) DBG | Using SSH client type: external
	I1204 21:17:32.298220   75012 main.go:141] libmachine: (no-preload-534766) DBG | Using SSH private key: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa (-rw-------)
	I1204 21:17:32.298259   75012 main.go:141] libmachine: (no-preload-534766) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 21:17:32.298278   75012 main.go:141] libmachine: (no-preload-534766) DBG | About to run SSH command:
	I1204 21:17:32.298286   75012 main.go:141] libmachine: (no-preload-534766) DBG | exit 0
	I1204 21:17:32.423157   75012 main.go:141] libmachine: (no-preload-534766) DBG | SSH cmd err, output: <nil>: 
	I1204 21:17:32.423564   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetConfigRaw
	I1204 21:17:32.424162   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetIP
	I1204 21:17:32.426685   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.427056   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.427078   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.427325   75012 profile.go:143] Saving config to /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/config.json ...
	I1204 21:17:32.427589   75012 machine.go:93] provisionDockerMachine start ...
	I1204 21:17:32.427610   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:17:32.427837   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:32.430261   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.430551   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.430580   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.430724   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:32.430893   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:32.431039   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:32.431148   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:32.431327   75012 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:32.431548   75012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I1204 21:17:32.431564   75012 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 21:17:32.539672   75012 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 21:17:32.539721   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetMachineName
	I1204 21:17:32.539983   75012 buildroot.go:166] provisioning hostname "no-preload-534766"
	I1204 21:17:32.540014   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetMachineName
	I1204 21:17:32.540234   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:32.543046   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.543438   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.543488   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.543664   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:32.543853   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:32.544035   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:32.544158   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:32.544331   75012 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:32.544547   75012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I1204 21:17:32.544567   75012 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-534766 && echo "no-preload-534766" | sudo tee /etc/hostname
	I1204 21:17:32.665569   75012 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-534766
	
	I1204 21:17:32.665609   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:32.668482   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.668881   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.668908   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.669081   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:32.669297   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:32.669479   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:32.669634   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:32.669788   75012 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:32.669945   75012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I1204 21:17:32.669961   75012 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-534766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-534766/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-534766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 21:17:32.789462   75012 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 21:17:32.789510   75012 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19985-10581/.minikube CaCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19985-10581/.minikube}
	I1204 21:17:32.789535   75012 buildroot.go:174] setting up certificates
	I1204 21:17:32.789551   75012 provision.go:84] configureAuth start
	I1204 21:17:32.789568   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetMachineName
	I1204 21:17:32.789878   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetIP
	I1204 21:17:32.792564   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.792886   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.792919   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.793108   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:32.795197   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.795534   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.795569   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.795751   75012 provision.go:143] copyHostCerts
	I1204 21:17:32.795821   75012 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem, removing ...
	I1204 21:17:32.795835   75012 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem
	I1204 21:17:32.795931   75012 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/key.pem (1679 bytes)
	I1204 21:17:32.796102   75012 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem, removing ...
	I1204 21:17:32.796118   75012 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem
	I1204 21:17:32.796182   75012 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/ca.pem (1078 bytes)
	I1204 21:17:32.796269   75012 exec_runner.go:144] found /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem, removing ...
	I1204 21:17:32.796278   75012 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem
	I1204 21:17:32.796300   75012 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19985-10581/.minikube/cert.pem (1123 bytes)
	I1204 21:17:32.796361   75012 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem org=jenkins.no-preload-534766 san=[127.0.0.1 192.168.61.174 localhost minikube no-preload-534766]
	I1204 21:17:32.933050   75012 provision.go:177] copyRemoteCerts
	I1204 21:17:32.933117   75012 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 21:17:32.933146   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:32.936027   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.936384   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:32.936415   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:32.936604   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:32.936796   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:32.936952   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:32.937127   75012 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:17:33.022226   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 21:17:33.045693   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1204 21:17:33.069396   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 21:17:33.094926   75012 provision.go:87] duration metric: took 305.358907ms to configureAuth
	I1204 21:17:33.094960   75012 buildroot.go:189] setting minikube options for container-runtime
	I1204 21:17:33.095150   75012 config.go:182] Loaded profile config "no-preload-534766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:17:33.095239   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:33.098446   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.098990   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:33.099019   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.099254   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:33.099504   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:33.099655   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:33.099789   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:33.099921   75012 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:33.100074   75012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I1204 21:17:33.100091   75012 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 21:17:33.323107   75012 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 21:17:33.323144   75012 machine.go:96] duration metric: took 895.535234ms to provisionDockerMachine
	I1204 21:17:33.323159   75012 start.go:293] postStartSetup for "no-preload-534766" (driver="kvm2")
	I1204 21:17:33.323169   75012 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 21:17:33.323185   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:17:33.323531   75012 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 21:17:33.323564   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:33.326678   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.327086   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:33.327119   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.327429   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:33.327661   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:33.327827   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:33.327994   75012 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:17:33.411005   75012 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 21:17:33.415701   75012 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 21:17:33.415730   75012 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/addons for local assets ...
	I1204 21:17:33.415806   75012 filesync.go:126] Scanning /home/jenkins/minikube-integration/19985-10581/.minikube/files for local assets ...
	I1204 21:17:33.415879   75012 filesync.go:149] local asset: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem -> 177432.pem in /etc/ssl/certs
	I1204 21:17:33.415968   75012 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 21:17:33.425560   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:17:33.450288   75012 start.go:296] duration metric: took 127.116826ms for postStartSetup
	I1204 21:17:33.450330   75012 fix.go:56] duration metric: took 21.394334199s for fixHost
	I1204 21:17:33.450351   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:33.453067   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.453416   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:33.453457   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.453641   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:33.453860   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:33.454049   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:33.454228   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:33.454423   75012 main.go:141] libmachine: Using SSH client type: native
	I1204 21:17:33.454621   75012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I1204 21:17:33.454634   75012 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 21:17:33.568277   75012 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733347053.524303417
	
	I1204 21:17:33.568303   75012 fix.go:216] guest clock: 1733347053.524303417
	I1204 21:17:33.568314   75012 fix.go:229] Guest: 2024-12-04 21:17:33.524303417 +0000 UTC Remote: 2024-12-04 21:17:33.450335419 +0000 UTC m=+361.455227272 (delta=73.967998ms)
	I1204 21:17:33.568360   75012 fix.go:200] guest clock delta is within tolerance: 73.967998ms
	I1204 21:17:33.568372   75012 start.go:83] releasing machines lock for "no-preload-534766", held for 21.512415434s
	I1204 21:17:33.568406   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:17:33.568691   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetIP
	I1204 21:17:33.571152   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.571565   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:33.571594   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.571744   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:17:33.572271   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:17:33.572456   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:17:33.572549   75012 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 21:17:33.572593   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:33.572689   75012 ssh_runner.go:195] Run: cat /version.json
	I1204 21:17:33.572717   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:17:33.575346   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.575691   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.575743   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:33.575773   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.575888   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:33.576065   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:33.576144   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:33.576173   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:33.576219   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:33.576323   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:17:33.576391   75012 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:17:33.576501   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:17:33.576650   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:17:33.576791   75012 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:17:33.683451   75012 ssh_runner.go:195] Run: systemctl --version
	I1204 21:17:33.689041   75012 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 21:17:33.833862   75012 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 21:17:33.839637   75012 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 21:17:33.839717   75012 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 21:17:33.858207   75012 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 21:17:33.858232   75012 start.go:495] detecting cgroup driver to use...
	I1204 21:17:33.858306   75012 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 21:17:33.876794   75012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 21:17:33.891207   75012 docker.go:217] disabling cri-docker service (if available) ...
	I1204 21:17:33.891280   75012 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 21:17:33.906769   75012 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 21:17:33.926433   75012 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 21:17:34.050681   75012 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 21:17:34.229329   75012 docker.go:233] disabling docker service ...
	I1204 21:17:34.229403   75012 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 21:17:34.243833   75012 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 21:17:34.256619   75012 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 21:17:34.387148   75012 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 21:17:34.522221   75012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 21:17:34.535505   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 21:17:34.553348   75012 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 21:17:34.553423   75012 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:34.564532   75012 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 21:17:34.564595   75012 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:34.574752   75012 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:34.584434   75012 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:34.594161   75012 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 21:17:34.604306   75012 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:34.615504   75012 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:34.633185   75012 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 21:17:34.643936   75012 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 21:17:34.653047   75012 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 21:17:34.653122   75012 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 21:17:34.666172   75012 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 21:17:34.675093   75012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:17:34.805178   75012 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 21:17:34.889962   75012 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 21:17:34.890037   75012 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 21:17:34.894648   75012 start.go:563] Will wait 60s for crictl version
	I1204 21:17:34.894699   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:34.898103   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 21:17:34.937886   75012 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 21:17:34.937962   75012 ssh_runner.go:195] Run: crio --version
	I1204 21:17:34.964363   75012 ssh_runner.go:195] Run: crio --version
	I1204 21:17:34.993490   75012 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 21:17:31.489534   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:31.989033   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:32.489372   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:32.989005   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:33.489869   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:33.989236   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:34.489170   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:34.989059   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:35.489909   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:35.989870   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:33.066070   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:35.066291   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:34.994846   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetIP
	I1204 21:17:34.998235   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:34.998720   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:17:34.998753   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:17:34.999035   75012 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1204 21:17:35.003082   75012 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:17:35.015163   75012 kubeadm.go:883] updating cluster {Name:no-preload-534766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-534766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.174 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 21:17:35.015286   75012 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 21:17:35.015331   75012 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 21:17:35.049054   75012 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1204 21:17:35.049081   75012 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1204 21:17:35.049156   75012 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:17:35.049214   75012 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1204 21:17:35.049239   75012 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1204 21:17:35.049291   75012 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:17:35.049172   75012 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:17:35.049217   75012 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:17:35.049159   75012 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:17:35.049220   75012 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:17:35.050579   75012 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:17:35.050648   75012 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1204 21:17:35.050659   75012 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:17:35.050667   75012 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:17:35.050676   75012 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1204 21:17:35.050741   75012 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:17:35.050757   75012 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:17:35.050874   75012 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:17:35.203766   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:17:35.211645   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1204 21:17:35.220184   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:17:35.223055   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:17:35.227332   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:17:35.232234   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1204 21:17:35.242447   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:17:35.298624   75012 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1204 21:17:35.298688   75012 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:17:35.298744   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:35.319397   75012 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1204 21:17:35.319447   75012 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1204 21:17:35.319501   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:35.390893   75012 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1204 21:17:35.390915   75012 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1204 21:17:35.390947   75012 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:17:35.390948   75012 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:17:35.390956   75012 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1204 21:17:35.390979   75012 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:17:35.390999   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:35.391022   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:35.390999   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:35.484125   75012 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1204 21:17:35.484169   75012 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:17:35.484201   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:17:35.484217   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:35.484271   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1204 21:17:35.484305   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:17:35.484330   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:17:35.484396   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:17:35.591277   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:17:35.591397   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:17:35.591450   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:17:35.595733   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1204 21:17:35.595762   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:17:35.595916   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:17:35.723710   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 21:17:35.723734   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1204 21:17:35.723780   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1204 21:17:35.723829   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1204 21:17:35.723876   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:17:35.726724   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1204 21:17:35.825238   75012 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1204 21:17:35.825353   75012 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1204 21:17:35.852024   75012 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1204 21:17:35.852035   75012 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1204 21:17:35.852146   75012 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1204 21:17:35.852173   75012 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1204 21:17:35.853696   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1204 21:17:35.853769   75012 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1204 21:17:35.853821   75012 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1204 21:17:35.853832   75012 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1204 21:17:35.853856   75012 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1204 21:17:35.853865   75012 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1204 21:17:35.853776   75012 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1204 21:17:35.853945   75012 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1204 21:17:35.857231   75012 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1204 21:17:35.858662   75012 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1204 21:17:36.032100   75012 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:17:33.087169   75746 pod_ready.go:93] pod "coredns-7c65d6cfc9-8bn89" in "kube-system" namespace has status "Ready":"True"
	I1204 21:17:33.087197   75746 pod_ready.go:82] duration metric: took 6.509664084s for pod "coredns-7c65d6cfc9-8bn89" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:33.087211   75746 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:33.093283   75746 pod_ready.go:93] pod "etcd-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:17:33.093303   75746 pod_ready.go:82] duration metric: took 6.085079ms for pod "etcd-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:33.093312   75746 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:33.600666   75746 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:17:33.600693   75746 pod_ready.go:82] duration metric: took 507.373672ms for pod "kube-apiserver-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:33.600709   75746 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:35.607575   75746 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:37.608228   75746 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:36.489267   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:36.988973   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:37.489585   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:37.989309   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:38.489371   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:38.989360   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:39.489789   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:39.988900   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:40.489286   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:40.989034   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:37.564796   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:39.566599   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:38.344308   75012 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.490341001s)
	I1204 21:17:38.344349   75012 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1204 21:17:38.344365   75012 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.490487312s)
	I1204 21:17:38.344390   75012 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1204 21:17:38.344412   75012 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1204 21:17:38.344420   75012 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.490542246s)
	I1204 21:17:38.344448   75012 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1204 21:17:38.344455   75012 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1204 21:17:38.344374   75012 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2: (2.490653029s)
	I1204 21:17:38.344496   75012 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1204 21:17:38.344525   75012 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.312392686s)
	I1204 21:17:38.344565   75012 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1204 21:17:38.344602   75012 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:17:38.344638   75012 ssh_runner.go:195] Run: which crictl
	I1204 21:17:38.344575   75012 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1204 21:17:38.350960   75012 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1204 21:17:40.219155   75012 ssh_runner.go:235] Completed: which crictl: (1.874490212s)
	I1204 21:17:40.219189   75012 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.874713743s)
	I1204 21:17:40.219214   75012 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1204 21:17:40.219246   75012 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1204 21:17:40.219318   75012 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1204 21:17:40.219273   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:17:40.254321   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:17:41.684466   75012 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.465119385s)
	I1204 21:17:41.684505   75012 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1204 21:17:41.684528   75012 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1204 21:17:41.684528   75012 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.430174579s)
	I1204 21:17:41.684583   75012 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1204 21:17:41.684591   75012 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:17:41.722891   75012 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1204 21:17:41.723015   75012 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1204 21:17:39.608290   75746 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:40.107708   75746 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:17:40.107734   75746 pod_ready.go:82] duration metric: took 6.507016831s for pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:40.107748   75746 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-tn2xl" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:40.112808   75746 pod_ready.go:93] pod "kube-proxy-tn2xl" in "kube-system" namespace has status "Ready":"True"
	I1204 21:17:40.112828   75746 pod_ready.go:82] duration metric: took 5.070603ms for pod "kube-proxy-tn2xl" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:40.112839   75746 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:40.117288   75746 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:17:40.117310   75746 pod_ready.go:82] duration metric: took 4.462772ms for pod "kube-scheduler-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:40.117322   75746 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:42.124203   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:41.489491   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:41.989889   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:42.489098   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:42.988954   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:43.489592   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:43.989849   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:44.489924   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:44.989734   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:45.489097   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:45.988947   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:42.065722   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:44.564691   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:46.565747   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:45.306832   75012 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.583796373s)
	I1204 21:17:45.306872   75012 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1204 21:17:45.306945   75012 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.622338759s)
	I1204 21:17:45.306971   75012 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1204 21:17:45.307000   75012 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1204 21:17:45.307064   75012 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1204 21:17:44.624419   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:47.123760   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:46.489924   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:46.989100   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:47.489931   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:47.988925   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:48.489244   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:48.989937   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:49.489048   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:49.989699   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:50.489518   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:50.989032   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:49.065268   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:51.565541   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:47.163771   75012 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.856684542s)
	I1204 21:17:47.163798   75012 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1204 21:17:47.163823   75012 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1204 21:17:47.163885   75012 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1204 21:17:49.222699   75012 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.058784634s)
	I1204 21:17:49.222741   75012 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1204 21:17:49.222773   75012 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1204 21:17:49.222826   75012 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1204 21:17:49.870242   75012 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19985-10581/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1204 21:17:49.870292   75012 cache_images.go:123] Successfully loaded all cached images
	I1204 21:17:49.870302   75012 cache_images.go:92] duration metric: took 14.821207564s to LoadCachedImages
	I1204 21:17:49.870320   75012 kubeadm.go:934] updating node { 192.168.61.174 8443 v1.31.2 crio true true} ...
	I1204 21:17:49.870483   75012 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-534766 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-534766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 21:17:49.870571   75012 ssh_runner.go:195] Run: crio config
	I1204 21:17:49.925276   75012 cni.go:84] Creating CNI manager for ""
	I1204 21:17:49.925298   75012 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:17:49.925308   75012 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 21:17:49.925326   75012 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.174 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-534766 NodeName:no-preload-534766 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 21:17:49.925440   75012 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-534766"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.174"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.174"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1204 21:17:49.925505   75012 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 21:17:49.934691   75012 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 21:17:49.934766   75012 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 21:17:49.942998   75012 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1204 21:17:49.958605   75012 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 21:17:49.973770   75012 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1204 21:17:49.989037   75012 ssh_runner.go:195] Run: grep 192.168.61.174	control-plane.minikube.internal$ /etc/hosts
	I1204 21:17:49.992788   75012 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 21:17:50.004011   75012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:17:50.118056   75012 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:17:50.136689   75012 certs.go:68] Setting up /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766 for IP: 192.168.61.174
	I1204 21:17:50.136717   75012 certs.go:194] generating shared ca certs ...
	I1204 21:17:50.136739   75012 certs.go:226] acquiring lock for ca certs: {Name:mkbcef564e8ef570ece773b833ebf1b4ab4c1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:17:50.136937   75012 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key
	I1204 21:17:50.136992   75012 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key
	I1204 21:17:50.137007   75012 certs.go:256] generating profile certs ...
	I1204 21:17:50.137129   75012 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/client.key
	I1204 21:17:50.137230   75012 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/apiserver.key.dbe51058
	I1204 21:17:50.137275   75012 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/proxy-client.key
	I1204 21:17:50.137393   75012 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem (1338 bytes)
	W1204 21:17:50.137422   75012 certs.go:480] ignoring /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743_empty.pem, impossibly tiny 0 bytes
	I1204 21:17:50.137433   75012 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 21:17:50.137463   75012 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/ca.pem (1078 bytes)
	I1204 21:17:50.137484   75012 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/cert.pem (1123 bytes)
	I1204 21:17:50.137505   75012 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/certs/key.pem (1679 bytes)
	I1204 21:17:50.137548   75012 certs.go:484] found cert: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem (1708 bytes)
	I1204 21:17:50.138146   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 21:17:50.168457   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 21:17:50.203050   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 21:17:50.227957   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 21:17:50.255463   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1204 21:17:50.283905   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1204 21:17:50.306300   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 21:17:50.328965   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/no-preload-534766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 21:17:50.352366   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 21:17:50.373857   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/certs/17743.pem --> /usr/share/ca-certificates/17743.pem (1338 bytes)
	I1204 21:17:50.396406   75012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/ssl/certs/177432.pem --> /usr/share/ca-certificates/177432.pem (1708 bytes)
	I1204 21:17:50.417969   75012 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 21:17:50.433588   75012 ssh_runner.go:195] Run: openssl version
	I1204 21:17:50.438874   75012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177432.pem && ln -fs /usr/share/ca-certificates/177432.pem /etc/ssl/certs/177432.pem"
	I1204 21:17:50.448896   75012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177432.pem
	I1204 21:17:50.453227   75012 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:04 /usr/share/ca-certificates/177432.pem
	I1204 21:17:50.453301   75012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177432.pem
	I1204 21:17:50.458793   75012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177432.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 21:17:50.468569   75012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 21:17:50.478055   75012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:17:50.482258   75012 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:17:50.482310   75012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 21:17:50.487402   75012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 21:17:50.500597   75012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17743.pem && ln -fs /usr/share/ca-certificates/17743.pem /etc/ssl/certs/17743.pem"
	I1204 21:17:50.511367   75012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17743.pem
	I1204 21:17:50.516355   75012 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:04 /usr/share/ca-certificates/17743.pem
	I1204 21:17:50.516415   75012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17743.pem
	I1204 21:17:50.522233   75012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17743.pem /etc/ssl/certs/51391683.0"
	I1204 21:17:50.532163   75012 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 21:17:50.536644   75012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 21:17:50.542343   75012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 21:17:50.547915   75012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 21:17:50.553464   75012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 21:17:50.559223   75012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 21:17:50.566119   75012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1204 21:17:50.571988   75012 kubeadm.go:392] StartCluster: {Name:no-preload-534766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-534766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.174 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 21:17:50.572068   75012 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 21:17:50.572135   75012 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:17:50.608793   75012 cri.go:89] found id: ""
	I1204 21:17:50.608879   75012 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 21:17:50.620108   75012 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1204 21:17:50.620133   75012 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1204 21:17:50.620210   75012 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1204 21:17:50.629506   75012 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1204 21:17:50.630887   75012 kubeconfig.go:125] found "no-preload-534766" server: "https://192.168.61.174:8443"
	I1204 21:17:50.633122   75012 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1204 21:17:50.642414   75012 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.174
	I1204 21:17:50.642453   75012 kubeadm.go:1160] stopping kube-system containers ...
	I1204 21:17:50.642468   75012 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1204 21:17:50.642533   75012 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 21:17:50.681325   75012 cri.go:89] found id: ""
	I1204 21:17:50.681393   75012 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1204 21:17:50.699577   75012 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:17:50.709090   75012 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:17:50.709108   75012 kubeadm.go:157] found existing configuration files:
	
	I1204 21:17:50.709152   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:17:50.717901   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:17:50.717983   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:17:50.727175   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:17:50.735929   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:17:50.736002   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:17:50.744954   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:17:50.753257   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:17:50.753306   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:17:50.762163   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:17:50.770113   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:17:50.770163   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:17:50.778937   75012 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:17:50.787853   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:50.902775   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:51.481273   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:51.689126   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:51.770117   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:51.859903   75012 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:17:51.859993   75012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:49.623769   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:51.624431   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:51.489287   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:51.989952   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:52.489428   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:52.988991   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:53.489424   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:53.989785   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:54.488957   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:54.989777   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:55.489738   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:55.989144   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:52.360655   75012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:52.860583   75012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:52.877280   75012 api_server.go:72] duration metric: took 1.017376864s to wait for apiserver process to appear ...
	I1204 21:17:52.877337   75012 api_server.go:88] waiting for apiserver healthz status ...
	I1204 21:17:52.877365   75012 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I1204 21:17:55.649083   75012 api_server.go:279] https://192.168.61.174:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:17:55.649115   75012 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:17:55.649144   75012 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I1204 21:17:55.655316   75012 api_server.go:279] https://192.168.61.174:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:17:55.655347   75012 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:17:55.877569   75012 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I1204 21:17:55.882206   75012 api_server.go:279] https://192.168.61.174:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:17:55.882235   75012 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:17:56.377778   75012 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I1204 21:17:56.385077   75012 api_server.go:279] https://192.168.61.174:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 21:17:56.385106   75012 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 21:17:56.877526   75012 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I1204 21:17:56.882072   75012 api_server.go:279] https://192.168.61.174:8443/healthz returned 200:
	ok
	I1204 21:17:56.890468   75012 api_server.go:141] control plane version: v1.31.2
	I1204 21:17:56.890494   75012 api_server.go:131] duration metric: took 4.013149625s to wait for apiserver health ...
	I1204 21:17:56.890503   75012 cni.go:84] Creating CNI manager for ""
	I1204 21:17:56.890509   75012 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:17:56.892501   75012 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 21:17:53.565824   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:56.064759   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:56.893859   75012 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 21:17:56.903947   75012 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1204 21:17:56.946638   75012 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 21:17:56.965137   75012 system_pods.go:59] 8 kube-system pods found
	I1204 21:17:56.965182   75012 system_pods.go:61] "coredns-7c65d6cfc9-kz2h6" [cf1cadfd-b230-48e0-8b3a-e082fed911a8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1204 21:17:56.965192   75012 system_pods.go:61] "etcd-no-preload-534766" [4150ee73-7ae8-40c0-a259-87375d6e809c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1204 21:17:56.965206   75012 system_pods.go:61] "kube-apiserver-no-preload-534766" [28c85f04-e634-48d2-a996-a1cb3ffb18cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1204 21:17:56.965215   75012 system_pods.go:61] "kube-controller-manager-no-preload-534766" [237872b9-1c2a-4c3e-b26a-d2581d08c936] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1204 21:17:56.965223   75012 system_pods.go:61] "kube-proxy-zb946" [871adaff-d1f6-4f8a-a7db-ec3f861bd9e3] Running
	I1204 21:17:56.965232   75012 system_pods.go:61] "kube-scheduler-no-preload-534766" [b00444c4-8f8e-4c76-a74f-9a57c91cb10d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1204 21:17:56.965240   75012 system_pods.go:61] "metrics-server-6867b74b74-wl8gw" [d7942614-93b1-4707-b471-a0dd38c96c54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:17:56.965246   75012 system_pods.go:61] "storage-provisioner" [062f6e56-6b2d-4ac4-acfd-881ff5171396] Running
	I1204 21:17:56.965254   75012 system_pods.go:74] duration metric: took 18.584748ms to wait for pod list to return data ...
	I1204 21:17:56.965269   75012 node_conditions.go:102] verifying NodePressure condition ...
	I1204 21:17:56.969187   75012 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 21:17:56.969221   75012 node_conditions.go:123] node cpu capacity is 2
	I1204 21:17:56.969232   75012 node_conditions.go:105] duration metric: took 3.958803ms to run NodePressure ...
	I1204 21:17:56.969248   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 21:17:53.625414   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:56.123857   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:56.489461   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:56.988952   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:57.489626   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:57.989474   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:58.489775   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:58.989218   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:59.489030   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:17:59.989163   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:00.489738   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:00.989048   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:00.989130   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:01.025049   75464 cri.go:89] found id: ""
	I1204 21:18:01.025100   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.025112   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:01.025124   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:01.025188   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:01.056420   75464 cri.go:89] found id: ""
	I1204 21:18:01.056444   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.056451   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:01.056456   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:01.056512   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:01.090847   75464 cri.go:89] found id: ""
	I1204 21:18:01.090872   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.090882   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:01.090889   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:01.090948   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:01.125984   75464 cri.go:89] found id: ""
	I1204 21:18:01.126013   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.126022   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:01.126030   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:01.126088   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:01.160828   75464 cri.go:89] found id: ""
	I1204 21:18:01.160856   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.160866   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:01.160873   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:01.160930   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:01.192601   75464 cri.go:89] found id: ""
	I1204 21:18:01.192629   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.192641   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:01.192649   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:01.192712   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:01.223093   75464 cri.go:89] found id: ""
	I1204 21:18:01.223119   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.223129   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:01.223136   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:01.223199   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:01.252668   75464 cri.go:89] found id: ""
	I1204 21:18:01.252692   75464 logs.go:282] 0 containers: []
	W1204 21:18:01.252702   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:01.252713   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:01.252733   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 21:17:58.064895   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:00.065648   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:57.242821   75012 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1204 21:17:57.246805   75012 kubeadm.go:739] kubelet initialised
	I1204 21:17:57.246823   75012 kubeadm.go:740] duration metric: took 3.979496ms waiting for restarted kubelet to initialise ...
	I1204 21:17:57.246831   75012 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:17:57.250966   75012 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:57.254870   75012 pod_ready.go:98] node "no-preload-534766" hosting pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.254889   75012 pod_ready.go:82] duration metric: took 3.903445ms for pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace to be "Ready" ...
	E1204 21:17:57.254897   75012 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-534766" hosting pod "coredns-7c65d6cfc9-kz2h6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.254903   75012 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:57.258465   75012 pod_ready.go:98] node "no-preload-534766" hosting pod "etcd-no-preload-534766" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.258484   75012 pod_ready.go:82] duration metric: took 3.574981ms for pod "etcd-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	E1204 21:17:57.258497   75012 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-534766" hosting pod "etcd-no-preload-534766" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.258503   75012 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:57.261881   75012 pod_ready.go:98] node "no-preload-534766" hosting pod "kube-apiserver-no-preload-534766" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.261896   75012 pod_ready.go:82] duration metric: took 3.388572ms for pod "kube-apiserver-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	E1204 21:17:57.261903   75012 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-534766" hosting pod "kube-apiserver-no-preload-534766" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.261908   75012 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:57.349579   75012 pod_ready.go:98] node "no-preload-534766" hosting pod "kube-controller-manager-no-preload-534766" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.349603   75012 pod_ready.go:82] duration metric: took 87.687706ms for pod "kube-controller-manager-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	E1204 21:17:57.349611   75012 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-534766" hosting pod "kube-controller-manager-no-preload-534766" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-534766" has status "Ready":"False"
	I1204 21:17:57.349617   75012 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-zb946" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:57.751064   75012 pod_ready.go:93] pod "kube-proxy-zb946" in "kube-system" namespace has status "Ready":"True"
	I1204 21:17:57.751088   75012 pod_ready.go:82] duration metric: took 401.46314ms for pod "kube-proxy-zb946" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:57.751099   75012 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:17:59.756578   75012 pod_ready.go:103] pod "kube-scheduler-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:01.759056   75012 pod_ready.go:103] pod "kube-scheduler-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:17:58.125703   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:00.622314   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:02.624045   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	W1204 21:18:01.365301   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:01.365334   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:01.365348   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:01.440474   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:01.440503   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:01.475783   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:01.475815   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:01.525762   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:01.525791   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:04.038867   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:04.050789   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:04.050856   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:04.083319   75464 cri.go:89] found id: ""
	I1204 21:18:04.083345   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.083354   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:04.083360   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:04.083442   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:04.119555   75464 cri.go:89] found id: ""
	I1204 21:18:04.119584   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.119595   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:04.119602   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:04.119661   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:04.152499   75464 cri.go:89] found id: ""
	I1204 21:18:04.152529   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.152538   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:04.152544   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:04.152592   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:04.184678   75464 cri.go:89] found id: ""
	I1204 21:18:04.184705   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.184716   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:04.184724   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:04.184784   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:04.220006   75464 cri.go:89] found id: ""
	I1204 21:18:04.220038   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.220050   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:04.220058   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:04.220121   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:04.254841   75464 cri.go:89] found id: ""
	I1204 21:18:04.254871   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.254880   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:04.254887   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:04.254954   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:04.289126   75464 cri.go:89] found id: ""
	I1204 21:18:04.289163   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.289175   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:04.289189   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:04.289255   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:04.323036   75464 cri.go:89] found id: ""
	I1204 21:18:04.323067   75464 logs.go:282] 0 containers: []
	W1204 21:18:04.323077   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:04.323089   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:04.323103   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:04.371548   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:04.371585   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:04.384651   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:04.384681   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:04.452247   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:04.452273   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:04.452288   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:04.527924   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:04.527965   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:02.564676   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:04.566721   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:04.260269   75012 pod_ready.go:103] pod "kube-scheduler-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:06.757334   75012 pod_ready.go:103] pod "kube-scheduler-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:05.123833   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:07.124130   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:07.100780   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:07.113549   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:07.113617   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:07.150930   75464 cri.go:89] found id: ""
	I1204 21:18:07.150964   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.150976   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:07.150984   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:07.151046   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:07.185223   75464 cri.go:89] found id: ""
	I1204 21:18:07.185254   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.185264   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:07.185271   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:07.185332   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:07.222423   75464 cri.go:89] found id: ""
	I1204 21:18:07.222449   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.222458   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:07.222463   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:07.222526   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:07.258926   75464 cri.go:89] found id: ""
	I1204 21:18:07.258952   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.258960   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:07.258966   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:07.259022   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:07.292424   75464 cri.go:89] found id: ""
	I1204 21:18:07.292467   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.292478   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:07.292505   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:07.292566   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:07.323354   75464 cri.go:89] found id: ""
	I1204 21:18:07.323397   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.323409   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:07.323416   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:07.323462   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:07.352085   75464 cri.go:89] found id: ""
	I1204 21:18:07.352106   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.352114   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:07.352121   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:07.352177   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:07.383335   75464 cri.go:89] found id: ""
	I1204 21:18:07.383364   75464 logs.go:282] 0 containers: []
	W1204 21:18:07.383386   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:07.383397   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:07.383410   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:07.469409   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:07.469440   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:07.508442   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:07.508468   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:07.555103   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:07.555133   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:07.568938   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:07.568965   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:07.632515   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:10.133153   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:10.146482   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:10.146542   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:10.178660   75464 cri.go:89] found id: ""
	I1204 21:18:10.178694   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.178706   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:10.178714   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:10.178768   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:10.207815   75464 cri.go:89] found id: ""
	I1204 21:18:10.207836   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.207843   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:10.207849   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:10.207893   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:10.246253   75464 cri.go:89] found id: ""
	I1204 21:18:10.246283   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.246300   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:10.246307   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:10.246371   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:10.296820   75464 cri.go:89] found id: ""
	I1204 21:18:10.296862   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.296873   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:10.296881   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:10.296941   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:10.341855   75464 cri.go:89] found id: ""
	I1204 21:18:10.341885   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.341896   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:10.341904   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:10.341977   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:10.370283   75464 cri.go:89] found id: ""
	I1204 21:18:10.370311   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.370319   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:10.370324   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:10.370382   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:10.401149   75464 cri.go:89] found id: ""
	I1204 21:18:10.401177   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.401187   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:10.401195   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:10.401249   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:10.436026   75464 cri.go:89] found id: ""
	I1204 21:18:10.436058   75464 logs.go:282] 0 containers: []
	W1204 21:18:10.436068   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:10.436082   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:10.436096   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:10.488499   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:10.488534   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:10.502316   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:10.502345   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:10.577694   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:10.577727   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:10.577754   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:10.657801   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:10.657835   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:07.064613   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:09.564473   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:09.257032   75012 pod_ready.go:103] pod "kube-scheduler-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:11.758214   75012 pod_ready.go:93] pod "kube-scheduler-no-preload-534766" in "kube-system" namespace has status "Ready":"True"
	I1204 21:18:11.758241   75012 pod_ready.go:82] duration metric: took 14.007134999s for pod "kube-scheduler-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:18:11.758255   75012 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace to be "Ready" ...
	I1204 21:18:09.623451   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:11.624433   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:13.195044   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:13.208486   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:13.208540   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:13.250608   75464 cri.go:89] found id: ""
	I1204 21:18:13.250632   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.250643   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:13.250650   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:13.250710   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:13.280897   75464 cri.go:89] found id: ""
	I1204 21:18:13.280922   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.280933   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:13.280940   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:13.281047   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:13.311664   75464 cri.go:89] found id: ""
	I1204 21:18:13.311686   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.311696   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:13.311702   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:13.311759   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:13.341158   75464 cri.go:89] found id: ""
	I1204 21:18:13.341187   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.341199   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:13.341206   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:13.341261   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:13.371887   75464 cri.go:89] found id: ""
	I1204 21:18:13.371908   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.371915   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:13.371922   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:13.371968   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:13.403036   75464 cri.go:89] found id: ""
	I1204 21:18:13.403064   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.403072   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:13.403077   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:13.403123   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:13.440657   75464 cri.go:89] found id: ""
	I1204 21:18:13.440682   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.440689   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:13.440694   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:13.440738   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:13.478384   75464 cri.go:89] found id: ""
	I1204 21:18:13.478413   75464 logs.go:282] 0 containers: []
	W1204 21:18:13.478421   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:13.478430   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:13.478442   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:13.533364   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:13.533405   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:13.546299   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:13.546338   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:13.617067   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:13.617092   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:13.617108   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:13.697323   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:13.697355   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:16.235494   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:16.248551   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:16.248615   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:16.286875   75464 cri.go:89] found id: ""
	I1204 21:18:16.286904   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.286915   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:16.286922   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:16.286986   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:12.064198   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:14.565965   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:13.764062   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:15.764749   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:14.122381   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:16.123985   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:16.325441   75464 cri.go:89] found id: ""
	I1204 21:18:16.325469   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.325481   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:16.325486   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:16.325544   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:16.361896   75464 cri.go:89] found id: ""
	I1204 21:18:16.361919   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.361926   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:16.361932   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:16.361994   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:16.394290   75464 cri.go:89] found id: ""
	I1204 21:18:16.394315   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.394322   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:16.394328   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:16.394377   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:16.429685   75464 cri.go:89] found id: ""
	I1204 21:18:16.429713   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.429724   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:16.429731   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:16.429807   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:16.459942   75464 cri.go:89] found id: ""
	I1204 21:18:16.459982   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.459993   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:16.460000   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:16.460065   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:16.488957   75464 cri.go:89] found id: ""
	I1204 21:18:16.488982   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.488992   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:16.489005   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:16.489060   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:16.518311   75464 cri.go:89] found id: ""
	I1204 21:18:16.518346   75464 logs.go:282] 0 containers: []
	W1204 21:18:16.518357   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:16.518369   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:16.518382   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:16.569753   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:16.569784   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:16.583689   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:16.583721   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:16.650086   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:16.650107   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:16.650120   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:16.732000   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:16.732046   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:19.270288   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:19.283231   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:19.283322   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:19.320680   75464 cri.go:89] found id: ""
	I1204 21:18:19.320712   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.320724   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:19.320732   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:19.320799   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:19.358318   75464 cri.go:89] found id: ""
	I1204 21:18:19.358352   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.358363   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:19.358370   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:19.358431   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:19.391181   75464 cri.go:89] found id: ""
	I1204 21:18:19.391208   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.391218   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:19.391224   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:19.391285   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:19.422319   75464 cri.go:89] found id: ""
	I1204 21:18:19.422345   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.422355   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:19.422362   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:19.422422   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:19.452909   75464 cri.go:89] found id: ""
	I1204 21:18:19.452941   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.452952   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:19.452960   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:19.453017   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:19.483548   75464 cri.go:89] found id: ""
	I1204 21:18:19.483582   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.483592   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:19.483600   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:19.483666   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:19.518776   75464 cri.go:89] found id: ""
	I1204 21:18:19.518810   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.518821   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:19.518828   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:19.518889   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:19.552455   75464 cri.go:89] found id: ""
	I1204 21:18:19.552487   75464 logs.go:282] 0 containers: []
	W1204 21:18:19.552500   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:19.552513   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:19.552527   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:19.567348   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:19.567397   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:19.640782   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:19.640803   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:19.640815   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:19.721369   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:19.721400   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:19.765558   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:19.765590   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:17.065011   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:19.065236   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:21.565950   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:17.764887   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:19.766264   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:18.125223   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:20.623183   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:22.623901   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:22.315311   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:22.327974   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:22.328053   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:22.361960   75464 cri.go:89] found id: ""
	I1204 21:18:22.361984   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.361995   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:22.362002   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:22.362056   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:22.393481   75464 cri.go:89] found id: ""
	I1204 21:18:22.393506   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.393514   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:22.393520   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:22.393570   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:22.424233   75464 cri.go:89] found id: ""
	I1204 21:18:22.424261   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.424273   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:22.424280   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:22.424335   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:22.454307   75464 cri.go:89] found id: ""
	I1204 21:18:22.454335   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.454346   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:22.454354   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:22.454405   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:22.485880   75464 cri.go:89] found id: ""
	I1204 21:18:22.485905   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.485913   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:22.485918   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:22.485971   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:22.522382   75464 cri.go:89] found id: ""
	I1204 21:18:22.522408   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.522416   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:22.522421   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:22.522475   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:22.555179   75464 cri.go:89] found id: ""
	I1204 21:18:22.555202   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.555210   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:22.555215   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:22.555266   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:22.588587   75464 cri.go:89] found id: ""
	I1204 21:18:22.588608   75464 logs.go:282] 0 containers: []
	W1204 21:18:22.588615   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:22.588622   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:22.588632   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:22.640369   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:22.640393   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:22.652322   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:22.652342   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:22.716150   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:22.716175   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:22.716195   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:22.792723   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:22.792749   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:25.329963   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:25.342514   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:25.342563   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:25.374518   75464 cri.go:89] found id: ""
	I1204 21:18:25.374543   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.374555   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:25.374562   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:25.374620   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:25.405479   75464 cri.go:89] found id: ""
	I1204 21:18:25.405520   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.405531   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:25.405538   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:25.405601   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:25.436844   75464 cri.go:89] found id: ""
	I1204 21:18:25.436867   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.436877   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:25.436884   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:25.436943   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:25.468887   75464 cri.go:89] found id: ""
	I1204 21:18:25.468910   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.468917   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:25.468923   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:25.468977   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:25.504326   75464 cri.go:89] found id: ""
	I1204 21:18:25.504348   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.504355   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:25.504361   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:25.504410   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:25.542531   75464 cri.go:89] found id: ""
	I1204 21:18:25.542552   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.542560   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:25.542566   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:25.542626   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:25.576293   75464 cri.go:89] found id: ""
	I1204 21:18:25.576316   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.576330   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:25.576338   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:25.576389   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:25.609662   75464 cri.go:89] found id: ""
	I1204 21:18:25.609692   75464 logs.go:282] 0 containers: []
	W1204 21:18:25.609700   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:25.609708   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:25.609724   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:25.665411   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:25.665446   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:25.680149   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:25.680183   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:25.751100   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:25.751123   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:25.751140   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:25.838913   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:25.838952   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:24.065487   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:26.565568   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:22.264581   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:24.268000   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:26.764294   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:25.123981   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:27.125094   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:28.379209   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:28.392708   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:28.392771   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:28.426519   75464 cri.go:89] found id: ""
	I1204 21:18:28.426547   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.426555   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:28.426561   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:28.426608   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:28.459648   75464 cri.go:89] found id: ""
	I1204 21:18:28.459678   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.459689   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:28.459696   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:28.459757   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:28.489982   75464 cri.go:89] found id: ""
	I1204 21:18:28.490010   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.490021   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:28.490029   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:28.490101   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:28.525203   75464 cri.go:89] found id: ""
	I1204 21:18:28.525228   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.525235   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:28.525240   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:28.525285   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:28.554808   75464 cri.go:89] found id: ""
	I1204 21:18:28.554836   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.554845   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:28.554850   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:28.554911   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:28.586406   75464 cri.go:89] found id: ""
	I1204 21:18:28.586427   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.586434   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:28.586441   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:28.586484   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:28.622419   75464 cri.go:89] found id: ""
	I1204 21:18:28.622444   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.622455   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:28.622462   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:28.622520   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:28.651604   75464 cri.go:89] found id: ""
	I1204 21:18:28.651625   75464 logs.go:282] 0 containers: []
	W1204 21:18:28.651632   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:28.651639   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:28.651654   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:28.714430   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:28.714458   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:28.714473   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:28.791444   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:28.791472   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:28.827808   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:28.827831   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:28.875308   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:28.875336   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:28.566277   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:30.566465   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:28.765108   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:30.765282   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:29.624139   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:31.624944   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:31.388578   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:31.401539   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:31.401598   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:31.443462   75464 cri.go:89] found id: ""
	I1204 21:18:31.443496   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.443504   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:31.443509   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:31.443557   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:31.482522   75464 cri.go:89] found id: ""
	I1204 21:18:31.482548   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.482559   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:31.482568   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:31.482623   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:31.520579   75464 cri.go:89] found id: ""
	I1204 21:18:31.520609   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.520618   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:31.520624   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:31.520684   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:31.559637   75464 cri.go:89] found id: ""
	I1204 21:18:31.559683   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.559692   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:31.559699   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:31.559761   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:31.592633   75464 cri.go:89] found id: ""
	I1204 21:18:31.592665   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.592677   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:31.592685   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:31.592748   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:31.627002   75464 cri.go:89] found id: ""
	I1204 21:18:31.627022   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.627029   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:31.627035   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:31.627083   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:31.663333   75464 cri.go:89] found id: ""
	I1204 21:18:31.663380   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.663392   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:31.663400   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:31.663465   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:31.697813   75464 cri.go:89] found id: ""
	I1204 21:18:31.697848   75464 logs.go:282] 0 containers: []
	W1204 21:18:31.697860   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:31.697869   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:31.697882   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:31.747666   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:31.747701   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:31.761371   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:31.761402   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:31.831098   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:31.831123   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:31.831143   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:31.912161   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:31.912199   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:34.450322   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:34.463442   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:34.463503   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:34.497333   75464 cri.go:89] found id: ""
	I1204 21:18:34.497363   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.497371   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:34.497377   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:34.497449   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:34.531057   75464 cri.go:89] found id: ""
	I1204 21:18:34.531093   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.531105   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:34.531113   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:34.531180   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:34.566899   75464 cri.go:89] found id: ""
	I1204 21:18:34.566926   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.566934   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:34.566940   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:34.566989   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:34.600393   75464 cri.go:89] found id: ""
	I1204 21:18:34.600422   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.600430   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:34.600436   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:34.600503   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:34.636027   75464 cri.go:89] found id: ""
	I1204 21:18:34.636060   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.636072   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:34.636082   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:34.636159   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:34.670624   75464 cri.go:89] found id: ""
	I1204 21:18:34.670650   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.670658   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:34.670666   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:34.670727   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:34.702209   75464 cri.go:89] found id: ""
	I1204 21:18:34.702241   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.702253   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:34.702261   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:34.702330   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:34.733135   75464 cri.go:89] found id: ""
	I1204 21:18:34.733156   75464 logs.go:282] 0 containers: []
	W1204 21:18:34.733174   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:34.733191   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:34.733207   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:34.768969   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:34.768993   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:34.816493   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:34.816531   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:34.829450   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:34.829476   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:34.897968   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:34.898000   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:34.898018   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:32.566614   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:35.064944   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:33.264871   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:35.265285   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:33.625223   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:36.123006   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:37.477937   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:37.491778   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:37.491856   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:37.529962   75464 cri.go:89] found id: ""
	I1204 21:18:37.529995   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.530005   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:37.530013   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:37.530081   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:37.564769   75464 cri.go:89] found id: ""
	I1204 21:18:37.564794   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.564805   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:37.564813   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:37.564879   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:37.601680   75464 cri.go:89] found id: ""
	I1204 21:18:37.601708   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.601720   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:37.601726   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:37.601796   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:37.637221   75464 cri.go:89] found id: ""
	I1204 21:18:37.637247   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.637255   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:37.637261   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:37.637326   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:37.673103   75464 cri.go:89] found id: ""
	I1204 21:18:37.673127   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.673135   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:37.673140   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:37.673200   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:37.710108   75464 cri.go:89] found id: ""
	I1204 21:18:37.710134   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.710147   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:37.710154   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:37.710216   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:37.741506   75464 cri.go:89] found id: ""
	I1204 21:18:37.741530   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.741538   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:37.741544   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:37.741596   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:37.775320   75464 cri.go:89] found id: ""
	I1204 21:18:37.775343   75464 logs.go:282] 0 containers: []
	W1204 21:18:37.775350   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:37.775358   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:37.775389   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:37.839591   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:37.839610   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:37.839633   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:37.915174   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:37.915216   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:37.958900   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:37.958930   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:38.010383   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:38.010418   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:40.525306   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:40.537648   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:40.537706   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:40.573932   75464 cri.go:89] found id: ""
	I1204 21:18:40.573962   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.573973   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:40.573980   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:40.574041   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:40.603917   75464 cri.go:89] found id: ""
	I1204 21:18:40.603943   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.603952   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:40.603961   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:40.604018   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:40.636601   75464 cri.go:89] found id: ""
	I1204 21:18:40.636630   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.636641   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:40.636649   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:40.636710   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:40.673040   75464 cri.go:89] found id: ""
	I1204 21:18:40.673073   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.673085   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:40.673093   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:40.673158   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:40.705330   75464 cri.go:89] found id: ""
	I1204 21:18:40.705357   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.705364   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:40.705371   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:40.705434   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:40.738099   75464 cri.go:89] found id: ""
	I1204 21:18:40.738123   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.738130   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:40.738137   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:40.738184   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:40.770558   75464 cri.go:89] found id: ""
	I1204 21:18:40.770583   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.770590   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:40.770596   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:40.770656   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:40.803461   75464 cri.go:89] found id: ""
	I1204 21:18:40.803489   75464 logs.go:282] 0 containers: []
	W1204 21:18:40.803501   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:40.803512   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:40.803529   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:40.852684   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:40.852726   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:40.865768   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:40.865795   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:40.932542   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:40.932569   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:40.932587   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:41.013378   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:41.013419   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:37.065100   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:39.565212   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:41.566163   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:37.765520   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:39.768005   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:38.623095   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:40.623359   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:43.552845   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:43.567081   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:43.567149   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:43.600562   75464 cri.go:89] found id: ""
	I1204 21:18:43.600595   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.600605   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:43.600618   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:43.600683   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:43.638922   75464 cri.go:89] found id: ""
	I1204 21:18:43.638955   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.638965   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:43.638972   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:43.639037   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:43.674473   75464 cri.go:89] found id: ""
	I1204 21:18:43.674501   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.674509   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:43.674516   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:43.674569   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:43.721312   75464 cri.go:89] found id: ""
	I1204 21:18:43.721339   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.721350   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:43.721357   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:43.721420   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:43.760113   75464 cri.go:89] found id: ""
	I1204 21:18:43.760150   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.760161   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:43.760169   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:43.760233   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:43.794383   75464 cri.go:89] found id: ""
	I1204 21:18:43.794410   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.794418   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:43.794423   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:43.794475   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:43.826611   75464 cri.go:89] found id: ""
	I1204 21:18:43.826646   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.826657   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:43.826666   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:43.826728   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:43.859459   75464 cri.go:89] found id: ""
	I1204 21:18:43.859489   75464 logs.go:282] 0 containers: []
	W1204 21:18:43.859496   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:43.859505   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:43.859518   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:43.871740   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:43.871762   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:43.940838   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:43.940862   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:43.940874   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:44.018931   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:44.018967   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:44.054754   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:44.054786   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:44.066258   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:46.565764   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:42.264400   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:44.765338   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:43.124128   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:45.624394   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:46.614407   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:46.627953   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:46.628009   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:46.662223   75464 cri.go:89] found id: ""
	I1204 21:18:46.662254   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.662263   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:46.662268   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:46.662333   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:46.695931   75464 cri.go:89] found id: ""
	I1204 21:18:46.695955   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.695963   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:46.695969   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:46.696014   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:46.728731   75464 cri.go:89] found id: ""
	I1204 21:18:46.728761   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.728773   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:46.728780   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:46.728841   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:46.762466   75464 cri.go:89] found id: ""
	I1204 21:18:46.762491   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.762499   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:46.762544   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:46.762613   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:46.797253   75464 cri.go:89] found id: ""
	I1204 21:18:46.797279   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.797288   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:46.797295   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:46.797357   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:46.833757   75464 cri.go:89] found id: ""
	I1204 21:18:46.833783   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.833790   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:46.833797   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:46.833845   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:46.865105   75464 cri.go:89] found id: ""
	I1204 21:18:46.865135   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.865147   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:46.865154   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:46.865212   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:46.896358   75464 cri.go:89] found id: ""
	I1204 21:18:46.896385   75464 logs.go:282] 0 containers: []
	W1204 21:18:46.896397   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:46.896408   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:46.896426   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:46.932507   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:46.932536   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:46.985490   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:46.985517   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:46.999509   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:46.999538   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:47.075096   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:47.075119   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:47.075133   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:49.654450   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:49.667708   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:49.667761   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:49.699864   75464 cri.go:89] found id: ""
	I1204 21:18:49.699885   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.699894   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:49.699902   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:49.699954   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:49.732972   75464 cri.go:89] found id: ""
	I1204 21:18:49.732996   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.733004   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:49.733009   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:49.733055   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:49.765103   75464 cri.go:89] found id: ""
	I1204 21:18:49.765124   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.765135   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:49.765142   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:49.765208   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:49.796309   75464 cri.go:89] found id: ""
	I1204 21:18:49.796330   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.796337   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:49.796343   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:49.796401   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:49.826818   75464 cri.go:89] found id: ""
	I1204 21:18:49.826844   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.826855   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:49.826863   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:49.826921   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:49.879437   75464 cri.go:89] found id: ""
	I1204 21:18:49.879463   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.879471   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:49.879477   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:49.879525   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:49.910837   75464 cri.go:89] found id: ""
	I1204 21:18:49.910862   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.910872   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:49.910878   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:49.910937   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:49.941894   75464 cri.go:89] found id: ""
	I1204 21:18:49.941918   75464 logs.go:282] 0 containers: []
	W1204 21:18:49.941927   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:49.941937   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:49.941950   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:49.994300   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:49.994339   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:50.008171   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:50.008207   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:50.083770   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:50.083799   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:50.083815   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:50.161338   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:50.161371   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:49.064407   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:51.066565   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:47.264889   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:49.764731   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:48.123660   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:50.125339   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:52.624437   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:52.699023   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:52.711524   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:52.711599   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:52.744668   75464 cri.go:89] found id: ""
	I1204 21:18:52.744703   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.744715   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:52.744724   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:52.744794   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:52.780504   75464 cri.go:89] found id: ""
	I1204 21:18:52.780529   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.780537   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:52.780546   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:52.780596   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:52.811678   75464 cri.go:89] found id: ""
	I1204 21:18:52.811704   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.811721   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:52.811749   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:52.811815   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:52.849178   75464 cri.go:89] found id: ""
	I1204 21:18:52.849205   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.849216   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:52.849223   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:52.849285   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:52.881715   75464 cri.go:89] found id: ""
	I1204 21:18:52.881740   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.881748   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:52.881753   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:52.881801   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:52.912463   75464 cri.go:89] found id: ""
	I1204 21:18:52.912484   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.912493   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:52.912498   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:52.912541   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:52.941846   75464 cri.go:89] found id: ""
	I1204 21:18:52.941867   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.941874   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:52.941879   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:52.941933   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:52.972043   75464 cri.go:89] found id: ""
	I1204 21:18:52.972067   75464 logs.go:282] 0 containers: []
	W1204 21:18:52.972075   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:52.972083   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:52.972092   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:53.022049   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:53.022078   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:53.034971   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:53.034998   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:53.105058   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:53.105080   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:53.105092   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:53.185050   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:53.185086   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:55.724189   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:55.737378   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:55.737439   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:55.772286   75464 cri.go:89] found id: ""
	I1204 21:18:55.772311   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.772319   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:55.772324   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:55.772375   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:55.805040   75464 cri.go:89] found id: ""
	I1204 21:18:55.805061   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.805070   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:55.805075   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:55.805124   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:55.836500   75464 cri.go:89] found id: ""
	I1204 21:18:55.836528   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.836539   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:55.836553   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:55.836624   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:55.869715   75464 cri.go:89] found id: ""
	I1204 21:18:55.869740   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.869749   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:55.869754   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:55.869810   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:55.901596   75464 cri.go:89] found id: ""
	I1204 21:18:55.901623   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.901634   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:55.901641   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:55.901705   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:55.931865   75464 cri.go:89] found id: ""
	I1204 21:18:55.931890   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.931900   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:55.931907   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:55.931971   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:55.962990   75464 cri.go:89] found id: ""
	I1204 21:18:55.963016   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.963025   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:55.963030   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:55.963081   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:55.992110   75464 cri.go:89] found id: ""
	I1204 21:18:55.992132   75464 logs.go:282] 0 containers: []
	W1204 21:18:55.992141   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:55.992149   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:55.992159   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:56.027234   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:56.027271   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:56.080250   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:56.080300   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:56.095943   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:56.095972   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:56.166704   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:56.166732   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:56.166744   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:53.565002   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:55.565734   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:52.264986   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:54.764517   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:54.624734   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:57.123337   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:58.745119   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:18:58.758304   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:18:58.758365   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:18:58.797221   75464 cri.go:89] found id: ""
	I1204 21:18:58.797245   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.797256   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:18:58.797264   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:18:58.797325   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:18:58.833333   75464 cri.go:89] found id: ""
	I1204 21:18:58.833358   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.833368   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:18:58.833374   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:18:58.833431   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:18:58.867765   75464 cri.go:89] found id: ""
	I1204 21:18:58.867790   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.867802   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:18:58.867810   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:18:58.867874   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:18:58.900290   75464 cri.go:89] found id: ""
	I1204 21:18:58.900326   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.900335   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:18:58.900386   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:18:58.900441   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:18:58.934627   75464 cri.go:89] found id: ""
	I1204 21:18:58.934660   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.934672   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:18:58.934679   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:18:58.934743   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:18:58.967410   75464 cri.go:89] found id: ""
	I1204 21:18:58.967442   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.967455   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:18:58.967463   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:18:58.967534   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:18:58.997635   75464 cri.go:89] found id: ""
	I1204 21:18:58.997665   75464 logs.go:282] 0 containers: []
	W1204 21:18:58.997678   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:18:58.997685   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:18:58.997742   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:18:59.032135   75464 cri.go:89] found id: ""
	I1204 21:18:59.032162   75464 logs.go:282] 0 containers: []
	W1204 21:18:59.032181   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:18:59.032190   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:18:59.032214   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:18:59.101453   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:18:59.101477   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:18:59.101490   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:18:59.182218   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:18:59.182266   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:18:59.218062   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:18:59.218088   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:18:59.269536   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:18:59.269567   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:18:58.063715   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:00.565067   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:57.264306   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:59.266030   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:01.765163   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:18:59.124120   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:01.623069   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:01.784237   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:01.797810   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:01.797888   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:01.833235   75464 cri.go:89] found id: ""
	I1204 21:19:01.833267   75464 logs.go:282] 0 containers: []
	W1204 21:19:01.833279   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:01.833287   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:01.833345   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:01.866869   75464 cri.go:89] found id: ""
	I1204 21:19:01.866898   75464 logs.go:282] 0 containers: []
	W1204 21:19:01.866906   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:01.866912   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:01.866962   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:01.905512   75464 cri.go:89] found id: ""
	I1204 21:19:01.905539   75464 logs.go:282] 0 containers: []
	W1204 21:19:01.905547   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:01.905552   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:01.905608   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:01.940519   75464 cri.go:89] found id: ""
	I1204 21:19:01.940540   75464 logs.go:282] 0 containers: []
	W1204 21:19:01.940548   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:01.940554   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:01.940599   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:01.968900   75464 cri.go:89] found id: ""
	I1204 21:19:01.968922   75464 logs.go:282] 0 containers: []
	W1204 21:19:01.968931   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:01.968938   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:01.968986   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:02.011007   75464 cri.go:89] found id: ""
	I1204 21:19:02.011032   75464 logs.go:282] 0 containers: []
	W1204 21:19:02.011039   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:02.011045   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:02.011097   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:02.069395   75464 cri.go:89] found id: ""
	I1204 21:19:02.069422   75464 logs.go:282] 0 containers: []
	W1204 21:19:02.069432   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:02.069438   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:02.069483   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:02.116103   75464 cri.go:89] found id: ""
	I1204 21:19:02.116129   75464 logs.go:282] 0 containers: []
	W1204 21:19:02.116141   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:02.116151   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:02.116162   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:02.152582   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:02.152617   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:02.207765   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:02.207796   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:02.221923   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:02.221946   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:02.286568   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:02.286593   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:02.286608   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:04.861905   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:04.875045   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:04.875106   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:04.907565   75464 cri.go:89] found id: ""
	I1204 21:19:04.907591   75464 logs.go:282] 0 containers: []
	W1204 21:19:04.907601   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:04.907609   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:04.907667   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:04.937783   75464 cri.go:89] found id: ""
	I1204 21:19:04.937801   75464 logs.go:282] 0 containers: []
	W1204 21:19:04.937808   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:04.937813   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:04.937855   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:04.974668   75464 cri.go:89] found id: ""
	I1204 21:19:04.974695   75464 logs.go:282] 0 containers: []
	W1204 21:19:04.974703   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:04.974708   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:04.974764   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:05.008970   75464 cri.go:89] found id: ""
	I1204 21:19:05.008996   75464 logs.go:282] 0 containers: []
	W1204 21:19:05.009008   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:05.009016   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:05.009078   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:05.044719   75464 cri.go:89] found id: ""
	I1204 21:19:05.044748   75464 logs.go:282] 0 containers: []
	W1204 21:19:05.044757   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:05.044765   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:05.044834   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:05.082492   75464 cri.go:89] found id: ""
	I1204 21:19:05.082518   75464 logs.go:282] 0 containers: []
	W1204 21:19:05.082527   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:05.082533   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:05.082594   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:05.115540   75464 cri.go:89] found id: ""
	I1204 21:19:05.115569   75464 logs.go:282] 0 containers: []
	W1204 21:19:05.115578   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:05.115584   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:05.115643   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:05.150064   75464 cri.go:89] found id: ""
	I1204 21:19:05.150088   75464 logs.go:282] 0 containers: []
	W1204 21:19:05.150096   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:05.150104   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:05.150116   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:05.220591   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:05.220619   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:05.220635   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:05.298237   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:05.298269   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:05.337286   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:05.337312   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:05.394282   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:05.394313   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:03.064580   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:05.065897   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:04.263946   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:06.264605   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:03.624413   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:06.124113   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:07.907153   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:07.923906   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:07.923967   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:07.969672   75464 cri.go:89] found id: ""
	I1204 21:19:07.969698   75464 logs.go:282] 0 containers: []
	W1204 21:19:07.969706   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:07.969712   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:07.969761   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:08.019452   75464 cri.go:89] found id: ""
	I1204 21:19:08.019488   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.019496   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:08.019502   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:08.019551   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:08.064730   75464 cri.go:89] found id: ""
	I1204 21:19:08.064757   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.064766   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:08.064771   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:08.064822   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:08.097390   75464 cri.go:89] found id: ""
	I1204 21:19:08.097415   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.097424   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:08.097430   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:08.097481   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:08.134612   75464 cri.go:89] found id: ""
	I1204 21:19:08.134640   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.134649   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:08.134655   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:08.134706   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:08.167328   75464 cri.go:89] found id: ""
	I1204 21:19:08.167355   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.167363   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:08.167380   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:08.167447   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:08.196379   75464 cri.go:89] found id: ""
	I1204 21:19:08.196401   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.196411   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:08.196419   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:08.196475   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:08.227953   75464 cri.go:89] found id: ""
	I1204 21:19:08.227983   75464 logs.go:282] 0 containers: []
	W1204 21:19:08.227994   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:08.228007   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:08.228021   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:08.304644   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:08.304672   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:08.340803   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:08.340835   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:08.392000   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:08.392034   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:08.405498   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:08.405533   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:08.472505   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:10.972755   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:10.986250   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:10.986316   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:11.020562   75464 cri.go:89] found id: ""
	I1204 21:19:11.020590   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.020601   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:11.020609   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:11.020671   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:11.052966   75464 cri.go:89] found id: ""
	I1204 21:19:11.052989   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.052999   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:11.053006   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:11.053062   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:11.085999   75464 cri.go:89] found id: ""
	I1204 21:19:11.086025   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.086032   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:11.086038   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:11.086085   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:11.125104   75464 cri.go:89] found id: ""
	I1204 21:19:11.125134   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.125145   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:11.125152   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:11.125207   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:11.161373   75464 cri.go:89] found id: ""
	I1204 21:19:11.161406   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.161418   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:11.161426   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:11.161487   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:11.192514   75464 cri.go:89] found id: ""
	I1204 21:19:11.192541   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.192552   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:11.192559   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:11.192617   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:11.225497   75464 cri.go:89] found id: ""
	I1204 21:19:11.225514   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.225522   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:11.225528   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:11.225573   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:11.258695   75464 cri.go:89] found id: ""
	I1204 21:19:11.258718   75464 logs.go:282] 0 containers: []
	W1204 21:19:11.258730   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:11.258740   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:11.258753   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:11.292427   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:11.292456   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:07.565769   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:10.064738   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:08.264914   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:10.765337   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:08.125281   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:10.623449   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:11.346115   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:11.346143   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:11.360086   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:11.360110   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:11.430194   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:11.430216   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:11.430228   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:14.011320   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:14.024214   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:14.024281   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:14.060155   75464 cri.go:89] found id: ""
	I1204 21:19:14.060184   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.060196   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:14.060204   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:14.060269   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:14.095483   75464 cri.go:89] found id: ""
	I1204 21:19:14.095524   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.095536   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:14.095544   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:14.095621   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:14.130533   75464 cri.go:89] found id: ""
	I1204 21:19:14.130565   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.130573   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:14.130579   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:14.130650   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:14.167349   75464 cri.go:89] found id: ""
	I1204 21:19:14.167386   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.167397   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:14.167405   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:14.167477   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:14.200197   75464 cri.go:89] found id: ""
	I1204 21:19:14.200229   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.200240   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:14.200247   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:14.200315   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:14.233664   75464 cri.go:89] found id: ""
	I1204 21:19:14.233696   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.233707   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:14.233715   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:14.233779   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:14.268193   75464 cri.go:89] found id: ""
	I1204 21:19:14.268232   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.268243   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:14.268250   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:14.268311   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:14.305771   75464 cri.go:89] found id: ""
	I1204 21:19:14.305804   75464 logs.go:282] 0 containers: []
	W1204 21:19:14.305813   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:14.305822   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:14.305834   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:14.361227   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:14.361274   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:14.375013   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:14.375046   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:14.444904   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:14.444945   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:14.444958   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:14.523934   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:14.523969   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:12.565614   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:14.565696   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:13.265412   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:15.763989   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:13.122823   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:15.124232   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:17.622977   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:17.063306   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:17.076624   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:17.076675   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:17.110681   75464 cri.go:89] found id: ""
	I1204 21:19:17.110721   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.110744   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:17.110756   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:17.110816   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:17.150695   75464 cri.go:89] found id: ""
	I1204 21:19:17.150716   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.150724   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:17.150730   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:17.150777   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:17.187712   75464 cri.go:89] found id: ""
	I1204 21:19:17.187745   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.187757   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:17.187765   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:17.187826   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:17.220349   75464 cri.go:89] found id: ""
	I1204 21:19:17.220377   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.220388   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:17.220396   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:17.220463   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:17.254691   75464 cri.go:89] found id: ""
	I1204 21:19:17.254724   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.254736   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:17.254746   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:17.254869   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:17.287163   75464 cri.go:89] found id: ""
	I1204 21:19:17.287191   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.287200   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:17.287206   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:17.287264   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:17.318924   75464 cri.go:89] found id: ""
	I1204 21:19:17.318949   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.318957   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:17.318963   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:17.319011   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:17.351074   75464 cri.go:89] found id: ""
	I1204 21:19:17.351106   75464 logs.go:282] 0 containers: []
	W1204 21:19:17.351119   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:17.351128   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:17.351143   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:17.404999   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:17.405037   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:17.419781   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:17.419814   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:17.485638   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:17.485659   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:17.485670   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:17.568851   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:17.568885   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:20.107005   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:20.120184   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:20.120257   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:20.153375   75464 cri.go:89] found id: ""
	I1204 21:19:20.153404   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.153413   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:20.153419   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:20.153475   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:20.192102   75464 cri.go:89] found id: ""
	I1204 21:19:20.192129   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.192141   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:20.192148   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:20.192213   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:20.235702   75464 cri.go:89] found id: ""
	I1204 21:19:20.235730   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.235740   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:20.235747   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:20.235823   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:20.272357   75464 cri.go:89] found id: ""
	I1204 21:19:20.272385   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.272397   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:20.272406   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:20.272477   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:20.307784   75464 cri.go:89] found id: ""
	I1204 21:19:20.307809   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.307820   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:20.307827   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:20.307889   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:20.339469   75464 cri.go:89] found id: ""
	I1204 21:19:20.339504   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.339514   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:20.339522   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:20.339586   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:20.369973   75464 cri.go:89] found id: ""
	I1204 21:19:20.369996   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.370003   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:20.370010   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:20.370081   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:20.400569   75464 cri.go:89] found id: ""
	I1204 21:19:20.400589   75464 logs.go:282] 0 containers: []
	W1204 21:19:20.400596   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:20.400604   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:20.400618   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:20.449274   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:20.449316   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:20.463556   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:20.463589   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:20.534760   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:20.534779   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:20.534791   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:20.613205   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:20.613234   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:17.064355   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:19.566643   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:17.764939   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:20.265576   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:19.624775   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:22.124297   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:23.149411   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:23.163040   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:23.163104   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:23.198689   75464 cri.go:89] found id: ""
	I1204 21:19:23.198721   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.198730   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:23.198736   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:23.198789   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:23.229754   75464 cri.go:89] found id: ""
	I1204 21:19:23.229783   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.229792   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:23.229797   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:23.229867   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:23.263366   75464 cri.go:89] found id: ""
	I1204 21:19:23.263406   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.263418   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:23.263425   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:23.263523   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:23.308773   75464 cri.go:89] found id: ""
	I1204 21:19:23.308797   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.308805   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:23.308811   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:23.308858   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:23.344573   75464 cri.go:89] found id: ""
	I1204 21:19:23.344600   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.344613   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:23.344620   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:23.344689   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:23.375218   75464 cri.go:89] found id: ""
	I1204 21:19:23.375244   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.375253   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:23.375259   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:23.375321   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:23.405878   75464 cri.go:89] found id: ""
	I1204 21:19:23.405913   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.405923   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:23.405929   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:23.405979   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:23.442547   75464 cri.go:89] found id: ""
	I1204 21:19:23.442572   75464 logs.go:282] 0 containers: []
	W1204 21:19:23.442580   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:23.442588   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:23.442599   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:23.457476   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:23.457503   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:23.526060   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:23.526088   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:23.526153   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:23.606683   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:23.606729   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:23.648224   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:23.648266   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:26.203216   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:26.215838   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:26.215886   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:26.248425   75464 cri.go:89] found id: ""
	I1204 21:19:26.248461   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.248474   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:26.248490   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:26.248558   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:26.282982   75464 cri.go:89] found id: ""
	I1204 21:19:26.283011   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.283022   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:26.283030   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:26.283094   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:22.064831   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:24.565123   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:22.763526   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:24.764364   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:26.764973   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:24.624174   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:26.624220   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:26.316656   75464 cri.go:89] found id: ""
	I1204 21:19:26.316690   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.316702   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:26.316710   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:26.316778   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:26.352730   75464 cri.go:89] found id: ""
	I1204 21:19:26.352758   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.352766   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:26.352772   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:26.352819   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:26.385955   75464 cri.go:89] found id: ""
	I1204 21:19:26.385981   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.385991   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:26.386000   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:26.386065   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:26.418814   75464 cri.go:89] found id: ""
	I1204 21:19:26.418838   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.418846   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:26.418852   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:26.418900   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:26.455442   75464 cri.go:89] found id: ""
	I1204 21:19:26.455471   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.455483   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:26.455491   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:26.455561   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:26.498287   75464 cri.go:89] found id: ""
	I1204 21:19:26.498314   75464 logs.go:282] 0 containers: []
	W1204 21:19:26.498322   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:26.498331   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:26.498345   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:26.512282   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:26.512312   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:26.576340   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:26.576366   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:26.576383   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:26.656234   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:26.656272   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:26.692676   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:26.692705   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:29.246548   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:29.261241   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:29.261310   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:29.297940   75464 cri.go:89] found id: ""
	I1204 21:19:29.297975   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.297987   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:29.297995   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:29.298060   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:29.330887   75464 cri.go:89] found id: ""
	I1204 21:19:29.330918   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.330930   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:29.330937   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:29.331001   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:29.364114   75464 cri.go:89] found id: ""
	I1204 21:19:29.364145   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.364152   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:29.364158   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:29.364214   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:29.397320   75464 cri.go:89] found id: ""
	I1204 21:19:29.397349   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.397357   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:29.397363   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:29.397410   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:29.430850   75464 cri.go:89] found id: ""
	I1204 21:19:29.430880   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.430892   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:29.430900   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:29.430965   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:29.464447   75464 cri.go:89] found id: ""
	I1204 21:19:29.464475   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.464484   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:29.464498   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:29.464564   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:29.497112   75464 cri.go:89] found id: ""
	I1204 21:19:29.497146   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.497158   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:29.497166   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:29.497229   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:29.533048   75464 cri.go:89] found id: ""
	I1204 21:19:29.533071   75464 logs.go:282] 0 containers: []
	W1204 21:19:29.533080   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:29.533088   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:29.533099   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:29.584390   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:29.584424   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:29.598341   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:29.598369   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:29.663240   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:29.663264   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:29.663278   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:29.744146   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:29.744184   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:27.064827   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:29.065174   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:31.565105   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:28.765480   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:31.265234   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:29.123831   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:31.623570   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:32.282931   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:32.296622   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:32.296683   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:32.330253   75464 cri.go:89] found id: ""
	I1204 21:19:32.330285   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.330297   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:32.330305   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:32.330370   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:32.363547   75464 cri.go:89] found id: ""
	I1204 21:19:32.363575   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.363588   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:32.363596   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:32.363661   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:32.396745   75464 cri.go:89] found id: ""
	I1204 21:19:32.396770   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.396781   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:32.396790   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:32.396851   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:32.432533   75464 cri.go:89] found id: ""
	I1204 21:19:32.432559   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.432569   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:32.432577   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:32.432640   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:32.470292   75464 cri.go:89] found id: ""
	I1204 21:19:32.470317   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.470327   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:32.470335   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:32.470401   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:32.502791   75464 cri.go:89] found id: ""
	I1204 21:19:32.502817   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.502824   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:32.502835   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:32.502900   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:32.536220   75464 cri.go:89] found id: ""
	I1204 21:19:32.536246   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.536254   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:32.536286   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:32.536344   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:32.570072   75464 cri.go:89] found id: ""
	I1204 21:19:32.570094   75464 logs.go:282] 0 containers: []
	W1204 21:19:32.570102   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:32.570110   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:32.570127   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:32.624916   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:32.624964   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:32.638299   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:32.638328   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:32.704827   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:32.704855   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:32.704873   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:32.782324   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:32.782356   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:35.324136   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:35.337071   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:35.337132   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:35.368651   75464 cri.go:89] found id: ""
	I1204 21:19:35.368672   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.368679   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:35.368685   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:35.368731   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:35.402069   75464 cri.go:89] found id: ""
	I1204 21:19:35.402088   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.402099   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:35.402105   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:35.402156   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:35.432328   75464 cri.go:89] found id: ""
	I1204 21:19:35.432356   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.432367   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:35.432380   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:35.432440   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:35.465334   75464 cri.go:89] found id: ""
	I1204 21:19:35.465356   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.465363   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:35.465369   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:35.465440   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:35.497416   75464 cri.go:89] found id: ""
	I1204 21:19:35.497449   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.497462   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:35.497474   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:35.497535   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:35.533106   75464 cri.go:89] found id: ""
	I1204 21:19:35.533134   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.533145   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:35.533154   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:35.533216   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:35.570519   75464 cri.go:89] found id: ""
	I1204 21:19:35.570546   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.570555   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:35.570562   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:35.570628   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:35.601380   75464 cri.go:89] found id: ""
	I1204 21:19:35.601413   75464 logs.go:282] 0 containers: []
	W1204 21:19:35.601424   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:35.601434   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:35.601455   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:35.656383   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:35.656420   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:35.671667   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:35.671696   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:35.737690   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:35.737716   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:35.737733   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:35.818129   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:35.818165   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:34.063889   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:36.064864   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:33.765136   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:35.765598   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:33.624840   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:35.624972   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:38.356596   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:38.369177   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:38.369235   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:38.401263   75464 cri.go:89] found id: ""
	I1204 21:19:38.401289   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.401301   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:38.401308   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:38.401379   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:38.432751   75464 cri.go:89] found id: ""
	I1204 21:19:38.432777   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.432786   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:38.432792   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:38.432853   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:38.465866   75464 cri.go:89] found id: ""
	I1204 21:19:38.465889   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.465898   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:38.465904   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:38.465954   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:38.508720   75464 cri.go:89] found id: ""
	I1204 21:19:38.508752   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.508763   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:38.508771   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:38.508827   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:38.543609   75464 cri.go:89] found id: ""
	I1204 21:19:38.543640   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.543649   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:38.543654   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:38.543728   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:38.579205   75464 cri.go:89] found id: ""
	I1204 21:19:38.579225   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.579233   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:38.579239   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:38.579286   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:38.616446   75464 cri.go:89] found id: ""
	I1204 21:19:38.616480   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.616492   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:38.616500   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:38.616563   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:38.651847   75464 cri.go:89] found id: ""
	I1204 21:19:38.651879   75464 logs.go:282] 0 containers: []
	W1204 21:19:38.651893   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:38.651905   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:38.651920   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:38.730904   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:38.730940   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:38.768958   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:38.768987   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:38.818879   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:38.818917   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:38.832139   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:38.832168   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:38.904761   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:38.065085   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:40.066022   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:38.264497   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:40.264905   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:38.123324   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:40.123499   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:42.623457   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:41.405046   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:41.417497   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:41.417578   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:41.450609   75464 cri.go:89] found id: ""
	I1204 21:19:41.450638   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.450649   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:41.450657   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:41.450725   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:41.486098   75464 cri.go:89] found id: ""
	I1204 21:19:41.486127   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.486135   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:41.486146   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:41.486218   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:41.520182   75464 cri.go:89] found id: ""
	I1204 21:19:41.520212   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.520225   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:41.520233   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:41.520305   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:41.551840   75464 cri.go:89] found id: ""
	I1204 21:19:41.551862   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.551870   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:41.551876   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:41.551928   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:41.584411   75464 cri.go:89] found id: ""
	I1204 21:19:41.584441   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.584448   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:41.584453   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:41.584500   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:41.614161   75464 cri.go:89] found id: ""
	I1204 21:19:41.614184   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.614199   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:41.614208   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:41.614263   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:41.645608   75464 cri.go:89] found id: ""
	I1204 21:19:41.645630   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.645637   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:41.645642   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:41.645688   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:41.676521   75464 cri.go:89] found id: ""
	I1204 21:19:41.676544   75464 logs.go:282] 0 containers: []
	W1204 21:19:41.676552   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:41.676559   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:41.676570   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:41.726608   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:41.726633   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:41.739110   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:41.739134   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:41.810706   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:41.810727   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:41.810742   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:41.895725   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:41.895757   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:44.435032   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:44.449155   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:44.449223   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:44.479366   75464 cri.go:89] found id: ""
	I1204 21:19:44.479415   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.479424   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:44.479430   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:44.479480   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:44.520338   75464 cri.go:89] found id: ""
	I1204 21:19:44.520365   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.520374   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:44.520379   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:44.520443   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:44.554736   75464 cri.go:89] found id: ""
	I1204 21:19:44.554765   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.554773   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:44.554779   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:44.554829   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:44.592957   75464 cri.go:89] found id: ""
	I1204 21:19:44.592980   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.592987   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:44.592993   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:44.593041   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:44.626514   75464 cri.go:89] found id: ""
	I1204 21:19:44.626542   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.626551   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:44.626558   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:44.626624   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:44.667868   75464 cri.go:89] found id: ""
	I1204 21:19:44.667901   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.667913   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:44.667919   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:44.667968   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:44.703653   75464 cri.go:89] found id: ""
	I1204 21:19:44.703688   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.703699   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:44.703706   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:44.703766   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:44.737474   75464 cri.go:89] found id: ""
	I1204 21:19:44.737511   75464 logs.go:282] 0 containers: []
	W1204 21:19:44.737523   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:44.737534   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:44.737549   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:44.787115   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:44.787146   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:44.799735   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:44.799765   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:44.861160   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:44.861179   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:44.861200   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:44.937758   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:44.937792   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:42.564575   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:44.565307   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:42.269222   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:44.764730   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:44.624230   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:47.124252   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:47.474604   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:47.486621   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:47.486680   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:47.522827   75464 cri.go:89] found id: ""
	I1204 21:19:47.522856   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.522870   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:47.522877   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:47.522938   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:47.553741   75464 cri.go:89] found id: ""
	I1204 21:19:47.553763   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.553771   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:47.553777   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:47.553837   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:47.610696   75464 cri.go:89] found id: ""
	I1204 21:19:47.610719   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.610730   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:47.610737   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:47.610803   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:47.645330   75464 cri.go:89] found id: ""
	I1204 21:19:47.645357   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.645367   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:47.645374   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:47.645431   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:47.680410   75464 cri.go:89] found id: ""
	I1204 21:19:47.680436   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.680444   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:47.680450   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:47.680499   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:47.712333   75464 cri.go:89] found id: ""
	I1204 21:19:47.712365   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.712376   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:47.712384   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:47.712442   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:47.749995   75464 cri.go:89] found id: ""
	I1204 21:19:47.750027   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.750039   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:47.750047   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:47.750110   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:47.786953   75464 cri.go:89] found id: ""
	I1204 21:19:47.786978   75464 logs.go:282] 0 containers: []
	W1204 21:19:47.786988   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:47.786996   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:47.787008   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:47.853534   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:47.853561   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:47.853576   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:47.934237   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:47.934273   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:47.976010   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:47.976046   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:48.027502   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:48.027537   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:50.541987   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:50.555163   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:50.555246   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:50.588513   75464 cri.go:89] found id: ""
	I1204 21:19:50.588545   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.588555   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:50.588563   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:50.588618   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:50.623124   75464 cri.go:89] found id: ""
	I1204 21:19:50.623155   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.623165   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:50.623175   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:50.623240   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:50.656302   75464 cri.go:89] found id: ""
	I1204 21:19:50.656334   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.656347   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:50.656353   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:50.656421   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:50.688580   75464 cri.go:89] found id: ""
	I1204 21:19:50.688609   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.688621   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:50.688629   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:50.688700   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:50.721955   75464 cri.go:89] found id: ""
	I1204 21:19:50.721979   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.721987   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:50.721993   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:50.722047   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:50.755531   75464 cri.go:89] found id: ""
	I1204 21:19:50.755560   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.755571   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:50.755579   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:50.755637   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:50.789773   75464 cri.go:89] found id: ""
	I1204 21:19:50.789805   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.789816   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:50.789823   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:50.789890   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:50.821168   75464 cri.go:89] found id: ""
	I1204 21:19:50.821196   75464 logs.go:282] 0 containers: []
	W1204 21:19:50.821207   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:50.821216   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:50.821230   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:50.871378   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:50.871406   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:50.883349   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:50.883387   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:50.953103   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:50.953129   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:50.953143   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:51.032209   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:51.032240   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:47.065199   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:49.065498   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:51.565332   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:47.264727   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:49.765618   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:51.765674   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:49.623785   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:52.124390   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:53.569126   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:53.582100   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:53.582167   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:53.613919   75464 cri.go:89] found id: ""
	I1204 21:19:53.613947   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.613958   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:53.613965   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:53.614031   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:53.649057   75464 cri.go:89] found id: ""
	I1204 21:19:53.649083   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.649090   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:53.649096   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:53.649153   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:53.685867   75464 cri.go:89] found id: ""
	I1204 21:19:53.685903   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.685915   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:53.685924   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:53.685983   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:53.723661   75464 cri.go:89] found id: ""
	I1204 21:19:53.723690   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.723702   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:53.723710   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:53.723774   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:53.768252   75464 cri.go:89] found id: ""
	I1204 21:19:53.768274   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.768281   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:53.768286   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:53.768334   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:53.806460   75464 cri.go:89] found id: ""
	I1204 21:19:53.806503   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.806512   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:53.806522   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:53.806577   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:53.839334   75464 cri.go:89] found id: ""
	I1204 21:19:53.839362   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.839382   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:53.839391   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:53.839452   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:53.873985   75464 cri.go:89] found id: ""
	I1204 21:19:53.874013   75464 logs.go:282] 0 containers: []
	W1204 21:19:53.874021   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:53.874029   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:53.874046   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:53.929061   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:53.929101   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:53.943156   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:53.943183   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:54.023885   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:54.023914   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:54.023927   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:54.126662   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:54.126691   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:53.566343   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:56.064417   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:54.263908   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:56.265412   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:54.623051   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:56.623438   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:56.664579   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:56.676785   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:56.676835   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:56.715929   75464 cri.go:89] found id: ""
	I1204 21:19:56.715953   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.715964   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:56.715971   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:56.716026   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:56.747118   75464 cri.go:89] found id: ""
	I1204 21:19:56.747139   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.747146   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:56.747175   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:56.747225   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:56.777600   75464 cri.go:89] found id: ""
	I1204 21:19:56.777622   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.777628   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:56.777634   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:56.777684   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:56.808759   75464 cri.go:89] found id: ""
	I1204 21:19:56.808780   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.808787   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:56.808792   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:56.808849   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:56.838236   75464 cri.go:89] found id: ""
	I1204 21:19:56.838263   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.838274   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:56.838280   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:56.838336   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:56.866838   75464 cri.go:89] found id: ""
	I1204 21:19:56.866865   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.866875   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:56.866883   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:56.866938   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:56.897474   75464 cri.go:89] found id: ""
	I1204 21:19:56.897496   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.897504   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:56.897509   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:56.897566   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:56.929263   75464 cri.go:89] found id: ""
	I1204 21:19:56.929286   75464 logs.go:282] 0 containers: []
	W1204 21:19:56.929294   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:56.929302   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:19:56.929311   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:19:56.980231   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:19:56.980256   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:19:56.991901   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:19:56.991928   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:19:57.068154   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:57.068172   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:57.068183   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:19:57.147865   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:19:57.147903   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:19:59.686011   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:19:59.699101   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:19:59.699156   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:19:59.742522   75464 cri.go:89] found id: ""
	I1204 21:19:59.742554   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.742565   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:19:59.742573   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:19:59.742637   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:19:59.785313   75464 cri.go:89] found id: ""
	I1204 21:19:59.785345   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.785357   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:19:59.785364   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:19:59.785423   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:19:59.821473   75464 cri.go:89] found id: ""
	I1204 21:19:59.821508   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.821520   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:19:59.821527   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:19:59.821585   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:19:59.857990   75464 cri.go:89] found id: ""
	I1204 21:19:59.858012   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.858020   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:19:59.858025   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:19:59.858077   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:19:59.895434   75464 cri.go:89] found id: ""
	I1204 21:19:59.895465   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.895478   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:19:59.895486   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:19:59.895546   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:19:59.929076   75464 cri.go:89] found id: ""
	I1204 21:19:59.929099   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.929110   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:19:59.929118   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:19:59.929180   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:19:59.962121   75464 cri.go:89] found id: ""
	I1204 21:19:59.962161   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.962173   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:19:59.962181   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:19:59.962244   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:19:59.999074   75464 cri.go:89] found id: ""
	I1204 21:19:59.999103   75464 logs.go:282] 0 containers: []
	W1204 21:19:59.999115   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:19:59.999126   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:19:59.999138   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:00.081841   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:00.081888   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:00.120537   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:00.120576   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:00.171472   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:00.171506   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:00.184739   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:00.184770   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:00.256589   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:19:58.563943   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:00.564520   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:58.764786   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:00.765286   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:19:59.122868   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:01.624133   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:02.757225   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:02.771088   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:02.771156   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:02.808742   75464 cri.go:89] found id: ""
	I1204 21:20:02.808770   75464 logs.go:282] 0 containers: []
	W1204 21:20:02.808781   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:02.808788   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:02.808851   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:02.846517   75464 cri.go:89] found id: ""
	I1204 21:20:02.846539   75464 logs.go:282] 0 containers: []
	W1204 21:20:02.846548   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:02.846553   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:02.846600   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:02.879903   75464 cri.go:89] found id: ""
	I1204 21:20:02.879934   75464 logs.go:282] 0 containers: []
	W1204 21:20:02.879943   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:02.879948   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:02.879995   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:02.910040   75464 cri.go:89] found id: ""
	I1204 21:20:02.910072   75464 logs.go:282] 0 containers: []
	W1204 21:20:02.910083   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:02.910091   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:02.910153   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:02.941525   75464 cri.go:89] found id: ""
	I1204 21:20:02.941552   75464 logs.go:282] 0 containers: []
	W1204 21:20:02.941562   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:02.941570   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:02.941637   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:02.977450   75464 cri.go:89] found id: ""
	I1204 21:20:02.977476   75464 logs.go:282] 0 containers: []
	W1204 21:20:02.977484   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:02.977490   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:02.977547   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:03.007386   75464 cri.go:89] found id: ""
	I1204 21:20:03.007422   75464 logs.go:282] 0 containers: []
	W1204 21:20:03.007433   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:03.007448   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:03.007508   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:03.040015   75464 cri.go:89] found id: ""
	I1204 21:20:03.040038   75464 logs.go:282] 0 containers: []
	W1204 21:20:03.040049   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:03.040058   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:03.040068   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:03.092371   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:03.092397   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:03.104747   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:03.104765   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:03.167760   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:03.167784   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:03.167799   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:03.242972   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:03.243010   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:05.783874   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:05.796340   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:05.796401   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:05.829068   75464 cri.go:89] found id: ""
	I1204 21:20:05.829094   75464 logs.go:282] 0 containers: []
	W1204 21:20:05.829105   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:05.829112   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:05.829169   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:05.863998   75464 cri.go:89] found id: ""
	I1204 21:20:05.864027   75464 logs.go:282] 0 containers: []
	W1204 21:20:05.864036   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:05.864042   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:05.864096   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:05.899645   75464 cri.go:89] found id: ""
	I1204 21:20:05.899669   75464 logs.go:282] 0 containers: []
	W1204 21:20:05.899677   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:05.899682   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:05.899727   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:05.935815   75464 cri.go:89] found id: ""
	I1204 21:20:05.935840   75464 logs.go:282] 0 containers: []
	W1204 21:20:05.935848   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:05.935854   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:05.935901   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:05.972284   75464 cri.go:89] found id: ""
	I1204 21:20:05.972308   75464 logs.go:282] 0 containers: []
	W1204 21:20:05.972321   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:05.972326   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:05.972372   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:06.007217   75464 cri.go:89] found id: ""
	I1204 21:20:06.007261   75464 logs.go:282] 0 containers: []
	W1204 21:20:06.007273   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:06.007280   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:06.007338   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:06.042158   75464 cri.go:89] found id: ""
	I1204 21:20:06.042190   75464 logs.go:282] 0 containers: []
	W1204 21:20:06.042201   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:06.042208   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:06.042280   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:06.075199   75464 cri.go:89] found id: ""
	I1204 21:20:06.075223   75464 logs.go:282] 0 containers: []
	W1204 21:20:06.075230   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:06.075237   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:06.075248   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:06.148255   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:06.148286   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:06.191454   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:06.191478   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:06.243952   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:06.243979   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:06.256355   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:06.256381   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 21:20:02.565050   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:05.064733   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:02.765643   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:05.263861   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:04.123109   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:06.123349   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	W1204 21:20:06.323958   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:08.824582   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:08.836724   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:08.836793   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:08.868526   75464 cri.go:89] found id: ""
	I1204 21:20:08.868596   75464 logs.go:282] 0 containers: []
	W1204 21:20:08.868611   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:08.868619   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:08.868679   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:08.899088   75464 cri.go:89] found id: ""
	I1204 21:20:08.899114   75464 logs.go:282] 0 containers: []
	W1204 21:20:08.899123   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:08.899128   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:08.899181   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:08.929116   75464 cri.go:89] found id: ""
	I1204 21:20:08.929145   75464 logs.go:282] 0 containers: []
	W1204 21:20:08.929156   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:08.929164   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:08.929229   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:08.970502   75464 cri.go:89] found id: ""
	I1204 21:20:08.970528   75464 logs.go:282] 0 containers: []
	W1204 21:20:08.970539   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:08.970547   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:08.970610   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:09.000619   75464 cri.go:89] found id: ""
	I1204 21:20:09.000644   75464 logs.go:282] 0 containers: []
	W1204 21:20:09.000652   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:09.000658   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:09.000715   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:09.031597   75464 cri.go:89] found id: ""
	I1204 21:20:09.031624   75464 logs.go:282] 0 containers: []
	W1204 21:20:09.031634   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:09.031641   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:09.031700   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:09.063615   75464 cri.go:89] found id: ""
	I1204 21:20:09.063639   75464 logs.go:282] 0 containers: []
	W1204 21:20:09.063646   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:09.063651   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:09.063708   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:09.096291   75464 cri.go:89] found id: ""
	I1204 21:20:09.096322   75464 logs.go:282] 0 containers: []
	W1204 21:20:09.096333   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:09.096343   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:09.096357   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:09.169976   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:09.170009   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:09.206514   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:09.206537   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:09.257587   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:09.257614   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:09.269939   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:09.269962   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:09.334350   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:07.563758   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:09.564014   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:11.564441   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:07.264169   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:09.265385   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:11.265607   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:08.622813   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:10.624747   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:11.835270   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:11.848192   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:11.848249   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:11.880377   75464 cri.go:89] found id: ""
	I1204 21:20:11.880409   75464 logs.go:282] 0 containers: []
	W1204 21:20:11.880422   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:11.880429   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:11.880495   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:11.914800   75464 cri.go:89] found id: ""
	I1204 21:20:11.914832   75464 logs.go:282] 0 containers: []
	W1204 21:20:11.914844   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:11.914852   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:11.914918   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:11.950520   75464 cri.go:89] found id: ""
	I1204 21:20:11.950545   75464 logs.go:282] 0 containers: []
	W1204 21:20:11.950553   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:11.950559   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:11.950611   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:11.983909   75464 cri.go:89] found id: ""
	I1204 21:20:11.983934   75464 logs.go:282] 0 containers: []
	W1204 21:20:11.983944   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:11.983953   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:11.984017   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:12.020457   75464 cri.go:89] found id: ""
	I1204 21:20:12.020488   75464 logs.go:282] 0 containers: []
	W1204 21:20:12.020505   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:12.020513   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:12.020581   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:12.054630   75464 cri.go:89] found id: ""
	I1204 21:20:12.054663   75464 logs.go:282] 0 containers: []
	W1204 21:20:12.054674   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:12.054682   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:12.054747   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:12.089172   75464 cri.go:89] found id: ""
	I1204 21:20:12.089195   75464 logs.go:282] 0 containers: []
	W1204 21:20:12.089202   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:12.089208   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:12.089267   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:12.123979   75464 cri.go:89] found id: ""
	I1204 21:20:12.124009   75464 logs.go:282] 0 containers: []
	W1204 21:20:12.124020   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:12.124039   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:12.124054   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:12.191368   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:12.191414   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:12.191432   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:12.272985   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:12.273029   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:12.310427   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:12.310459   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:12.363183   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:12.363225   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:14.876599   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:14.889708   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:14.889784   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:14.922789   75464 cri.go:89] found id: ""
	I1204 21:20:14.922819   75464 logs.go:282] 0 containers: []
	W1204 21:20:14.922829   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:14.922835   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:14.922882   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:14.953998   75464 cri.go:89] found id: ""
	I1204 21:20:14.954026   75464 logs.go:282] 0 containers: []
	W1204 21:20:14.954038   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:14.954044   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:14.954108   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:14.983608   75464 cri.go:89] found id: ""
	I1204 21:20:14.983635   75464 logs.go:282] 0 containers: []
	W1204 21:20:14.983646   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:14.983653   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:14.983707   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:15.016982   75464 cri.go:89] found id: ""
	I1204 21:20:15.017007   75464 logs.go:282] 0 containers: []
	W1204 21:20:15.017015   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:15.017020   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:15.017070   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:15.051642   75464 cri.go:89] found id: ""
	I1204 21:20:15.051672   75464 logs.go:282] 0 containers: []
	W1204 21:20:15.051683   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:15.051690   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:15.051792   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:15.084250   75464 cri.go:89] found id: ""
	I1204 21:20:15.084279   75464 logs.go:282] 0 containers: []
	W1204 21:20:15.084289   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:15.084297   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:15.084364   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:15.119910   75464 cri.go:89] found id: ""
	I1204 21:20:15.119943   75464 logs.go:282] 0 containers: []
	W1204 21:20:15.119953   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:15.119965   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:15.120025   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:15.154270   75464 cri.go:89] found id: ""
	I1204 21:20:15.154301   75464 logs.go:282] 0 containers: []
	W1204 21:20:15.154312   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:15.154322   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:15.154336   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:15.205075   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:15.205109   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:15.218104   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:15.218130   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:15.285162   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:15.285187   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:15.285209   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:15.367003   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:15.367040   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:13.566393   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:16.069318   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:13.266167   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:15.763670   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:13.122812   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:15.125830   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:17.623065   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:17.909835   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:17.921899   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:17.921954   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:17.954678   75464 cri.go:89] found id: ""
	I1204 21:20:17.954708   75464 logs.go:282] 0 containers: []
	W1204 21:20:17.954717   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:17.954723   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:17.954776   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:17.984522   75464 cri.go:89] found id: ""
	I1204 21:20:17.984545   75464 logs.go:282] 0 containers: []
	W1204 21:20:17.984555   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:17.984560   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:17.984607   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:18.016731   75464 cri.go:89] found id: ""
	I1204 21:20:18.016754   75464 logs.go:282] 0 containers: []
	W1204 21:20:18.016763   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:18.016768   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:18.016820   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:18.050104   75464 cri.go:89] found id: ""
	I1204 21:20:18.050136   75464 logs.go:282] 0 containers: []
	W1204 21:20:18.050147   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:18.050155   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:18.050221   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:18.083944   75464 cri.go:89] found id: ""
	I1204 21:20:18.083984   75464 logs.go:282] 0 containers: []
	W1204 21:20:18.084006   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:18.084015   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:18.084084   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:18.116170   75464 cri.go:89] found id: ""
	I1204 21:20:18.116203   75464 logs.go:282] 0 containers: []
	W1204 21:20:18.116215   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:18.116223   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:18.116292   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:18.147348   75464 cri.go:89] found id: ""
	I1204 21:20:18.147395   75464 logs.go:282] 0 containers: []
	W1204 21:20:18.147407   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:18.147415   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:18.147473   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:18.177782   75464 cri.go:89] found id: ""
	I1204 21:20:18.177805   75464 logs.go:282] 0 containers: []
	W1204 21:20:18.177816   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:18.177827   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:18.177840   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:18.227464   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:18.227494   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:18.239741   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:18.239772   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:18.310732   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:18.310752   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:18.310763   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:18.389626   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:18.389659   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:20.926749   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:20.939710   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:20.939797   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:20.972464   75464 cri.go:89] found id: ""
	I1204 21:20:20.972488   75464 logs.go:282] 0 containers: []
	W1204 21:20:20.972497   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:20.972506   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:20.972568   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:21.010568   75464 cri.go:89] found id: ""
	I1204 21:20:21.010597   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.010610   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:21.010618   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:21.010678   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:21.046145   75464 cri.go:89] found id: ""
	I1204 21:20:21.046172   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.046183   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:21.046191   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:21.046263   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:21.078460   75464 cri.go:89] found id: ""
	I1204 21:20:21.078488   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.078496   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:21.078502   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:21.078569   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:21.117274   75464 cri.go:89] found id: ""
	I1204 21:20:21.117303   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.117314   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:21.117320   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:21.117366   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:21.152375   75464 cri.go:89] found id: ""
	I1204 21:20:21.152408   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.152419   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:21.152427   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:21.152496   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:21.185933   75464 cri.go:89] found id: ""
	I1204 21:20:21.185966   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.185975   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:21.185981   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:21.186042   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:21.219289   75464 cri.go:89] found id: ""
	I1204 21:20:21.219325   75464 logs.go:282] 0 containers: []
	W1204 21:20:21.219338   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:21.219350   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:21.219363   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:21.232385   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:21.232415   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:21.298766   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:21.298793   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:21.298808   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:18.565873   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:21.065819   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:17.763871   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:19.765846   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:19.623518   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:21.624117   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:21.376741   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:21.376777   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:21.414649   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:21.414682   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:23.963472   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:23.976644   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:23.976709   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:24.010598   75464 cri.go:89] found id: ""
	I1204 21:20:24.010626   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.010637   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:24.010645   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:24.010703   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:24.045479   75464 cri.go:89] found id: ""
	I1204 21:20:24.045509   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.045529   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:24.045537   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:24.045599   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:24.081181   75464 cri.go:89] found id: ""
	I1204 21:20:24.081215   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.081235   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:24.081243   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:24.081309   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:24.113823   75464 cri.go:89] found id: ""
	I1204 21:20:24.113847   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.113857   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:24.113864   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:24.113927   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:24.149178   75464 cri.go:89] found id: ""
	I1204 21:20:24.149205   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.149216   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:24.149224   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:24.149289   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:24.183304   75464 cri.go:89] found id: ""
	I1204 21:20:24.183339   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.183350   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:24.183359   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:24.183448   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:24.214999   75464 cri.go:89] found id: ""
	I1204 21:20:24.215023   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.215034   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:24.215042   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:24.215107   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:24.247278   75464 cri.go:89] found id: ""
	I1204 21:20:24.247312   75464 logs.go:282] 0 containers: []
	W1204 21:20:24.247323   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:24.247354   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:24.247387   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:24.302879   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:24.302913   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:24.315674   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:24.315697   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:24.382394   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:24.382422   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:24.382436   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:24.462763   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:24.462796   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:23.564202   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:25.564917   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:22.265442   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:24.764901   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:24.124035   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:26.124661   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:27.002577   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:27.015256   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:27.015324   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:27.049626   75464 cri.go:89] found id: ""
	I1204 21:20:27.049657   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.049669   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:27.049677   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:27.049733   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:27.085312   75464 cri.go:89] found id: ""
	I1204 21:20:27.085341   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.085354   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:27.085362   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:27.085417   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:27.119898   75464 cri.go:89] found id: ""
	I1204 21:20:27.119928   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.119939   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:27.119947   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:27.120010   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:27.153605   75464 cri.go:89] found id: ""
	I1204 21:20:27.153642   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.153651   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:27.153657   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:27.153724   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:27.191002   75464 cri.go:89] found id: ""
	I1204 21:20:27.191027   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.191038   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:27.191045   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:27.191107   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:27.226469   75464 cri.go:89] found id: ""
	I1204 21:20:27.226495   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.226506   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:27.226515   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:27.226579   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:27.258586   75464 cri.go:89] found id: ""
	I1204 21:20:27.258613   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.258623   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:27.258630   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:27.258694   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:27.293119   75464 cri.go:89] found id: ""
	I1204 21:20:27.293156   75464 logs.go:282] 0 containers: []
	W1204 21:20:27.293165   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:27.293174   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:27.293187   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:27.346870   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:27.346903   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:27.360448   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:27.360487   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:27.431571   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:27.431597   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:27.431613   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:27.509664   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:27.509698   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:30.049120   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:30.063294   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:30.063360   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:30.097334   75464 cri.go:89] found id: ""
	I1204 21:20:30.097364   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.097376   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:30.097383   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:30.097457   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:30.132734   75464 cri.go:89] found id: ""
	I1204 21:20:30.132757   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.132765   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:30.132771   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:30.132820   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:30.166539   75464 cri.go:89] found id: ""
	I1204 21:20:30.166565   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.166573   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:30.166579   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:30.166637   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:30.201953   75464 cri.go:89] found id: ""
	I1204 21:20:30.201993   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.202007   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:30.202016   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:30.202089   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:30.239062   75464 cri.go:89] found id: ""
	I1204 21:20:30.239102   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.239116   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:30.239132   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:30.239200   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:30.282344   75464 cri.go:89] found id: ""
	I1204 21:20:30.282374   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.282383   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:30.282389   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:30.282439   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:30.316615   75464 cri.go:89] found id: ""
	I1204 21:20:30.316642   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.316653   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:30.316661   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:30.316764   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:30.352333   75464 cri.go:89] found id: ""
	I1204 21:20:30.352358   75464 logs.go:282] 0 containers: []
	W1204 21:20:30.352368   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:30.352380   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:30.352393   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:30.406022   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:30.406058   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:30.419790   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:30.419819   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:30.485693   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:30.485717   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:30.485738   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:30.569313   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:30.569357   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:27.565367   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:30.064552   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:27.266699   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:29.765109   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:28.623821   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:30.628815   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:33.107542   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:33.121934   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:33.122007   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:33.154672   75464 cri.go:89] found id: ""
	I1204 21:20:33.154698   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.154709   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:33.154717   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:33.154784   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:33.189186   75464 cri.go:89] found id: ""
	I1204 21:20:33.189218   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.189229   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:33.189236   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:33.189291   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:33.217618   75464 cri.go:89] found id: ""
	I1204 21:20:33.217637   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.217651   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:33.217657   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:33.217704   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:33.246895   75464 cri.go:89] found id: ""
	I1204 21:20:33.246916   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.246923   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:33.246928   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:33.246970   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:33.278698   75464 cri.go:89] found id: ""
	I1204 21:20:33.278718   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.278725   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:33.278731   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:33.278771   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:33.307671   75464 cri.go:89] found id: ""
	I1204 21:20:33.307703   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.307721   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:33.307729   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:33.307791   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:33.342929   75464 cri.go:89] found id: ""
	I1204 21:20:33.342950   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.342958   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:33.342963   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:33.343009   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:33.374686   75464 cri.go:89] found id: ""
	I1204 21:20:33.374718   75464 logs.go:282] 0 containers: []
	W1204 21:20:33.374730   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:33.374741   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:33.374758   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:33.424117   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:33.424153   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:33.437691   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:33.437724   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:33.517172   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:33.517196   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:33.517209   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:33.597299   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:33.597341   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:36.137849   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:36.152485   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:36.152544   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:36.186867   75464 cri.go:89] found id: ""
	I1204 21:20:36.186895   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.186906   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:36.186920   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:36.186983   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:36.220628   75464 cri.go:89] found id: ""
	I1204 21:20:36.220658   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.220671   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:36.220679   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:36.220735   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:36.254264   75464 cri.go:89] found id: ""
	I1204 21:20:36.254298   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.254310   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:36.254318   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:36.254384   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:36.290929   75464 cri.go:89] found id: ""
	I1204 21:20:36.290956   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.290964   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:36.290970   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:36.291016   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:32.566714   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:35.064488   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:32.266257   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:34.764171   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:36.764331   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:33.123727   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:35.623512   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:37.623921   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:36.326967   75464 cri.go:89] found id: ""
	I1204 21:20:36.326991   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.326999   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:36.327004   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:36.327072   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:36.366892   75464 cri.go:89] found id: ""
	I1204 21:20:36.366916   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.366924   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:36.366930   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:36.366990   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:36.405671   75464 cri.go:89] found id: ""
	I1204 21:20:36.405696   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.405703   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:36.405709   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:36.405762   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:36.439591   75464 cri.go:89] found id: ""
	I1204 21:20:36.439621   75464 logs.go:282] 0 containers: []
	W1204 21:20:36.439628   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:36.439637   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:36.439650   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:36.505710   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:36.505737   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:36.505751   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:36.586111   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:36.586155   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:36.628086   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:36.628121   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:36.680152   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:36.680183   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:39.194223   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:39.207153   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:39.207230   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:39.240867   75464 cri.go:89] found id: ""
	I1204 21:20:39.240895   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.240903   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:39.240908   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:39.240959   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:39.274704   75464 cri.go:89] found id: ""
	I1204 21:20:39.274735   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.274742   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:39.274748   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:39.274800   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:39.307559   75464 cri.go:89] found id: ""
	I1204 21:20:39.307591   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.307601   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:39.307609   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:39.307671   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:39.355489   75464 cri.go:89] found id: ""
	I1204 21:20:39.355524   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.355536   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:39.355543   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:39.355610   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:39.395885   75464 cri.go:89] found id: ""
	I1204 21:20:39.395909   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.395917   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:39.395923   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:39.395976   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:39.428817   75464 cri.go:89] found id: ""
	I1204 21:20:39.428848   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.428858   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:39.428864   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:39.428929   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:39.463827   75464 cri.go:89] found id: ""
	I1204 21:20:39.463857   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.463870   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:39.463877   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:39.463926   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:39.496677   75464 cri.go:89] found id: ""
	I1204 21:20:39.496710   75464 logs.go:282] 0 containers: []
	W1204 21:20:39.496721   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:39.496732   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:39.496755   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:39.533759   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:39.533787   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:39.586373   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:39.586409   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:39.599533   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:39.599568   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:39.670139   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:39.670164   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:39.670176   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:37.065197   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:39.065863   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:41.566053   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:38.765226   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:40.765268   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:39.624452   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:42.123452   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:42.245896   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:42.260604   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:42.260676   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:42.294051   75464 cri.go:89] found id: ""
	I1204 21:20:42.294078   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.294085   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:42.294094   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:42.294160   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:42.327361   75464 cri.go:89] found id: ""
	I1204 21:20:42.327408   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.327421   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:42.327428   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:42.327482   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:42.358701   75464 cri.go:89] found id: ""
	I1204 21:20:42.358731   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.358740   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:42.358746   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:42.358795   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:42.389837   75464 cri.go:89] found id: ""
	I1204 21:20:42.389863   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.389871   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:42.389877   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:42.389926   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:42.430495   75464 cri.go:89] found id: ""
	I1204 21:20:42.430522   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.430534   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:42.430541   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:42.430590   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:42.462918   75464 cri.go:89] found id: ""
	I1204 21:20:42.462949   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.462958   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:42.462963   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:42.463031   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:42.500726   75464 cri.go:89] found id: ""
	I1204 21:20:42.500754   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.500769   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:42.500776   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:42.500842   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:42.538601   75464 cri.go:89] found id: ""
	I1204 21:20:42.538628   75464 logs.go:282] 0 containers: []
	W1204 21:20:42.538635   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:42.538644   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:42.538655   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:42.591308   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:42.591344   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:42.604221   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:42.604244   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:42.679954   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:42.679982   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:42.679999   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:42.768383   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:42.768422   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:45.312054   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:45.325206   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:45.325304   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:45.358781   75464 cri.go:89] found id: ""
	I1204 21:20:45.358809   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.358817   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:45.358824   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:45.358874   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:45.391920   75464 cri.go:89] found id: ""
	I1204 21:20:45.391945   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.391957   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:45.391964   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:45.392030   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:45.426546   75464 cri.go:89] found id: ""
	I1204 21:20:45.426570   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.426578   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:45.426583   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:45.426633   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:45.459432   75464 cri.go:89] found id: ""
	I1204 21:20:45.459462   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.459472   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:45.459479   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:45.459547   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:45.494217   75464 cri.go:89] found id: ""
	I1204 21:20:45.494256   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.494268   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:45.494276   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:45.494352   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:45.531417   75464 cri.go:89] found id: ""
	I1204 21:20:45.531446   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.531458   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:45.531473   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:45.531547   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:45.564973   75464 cri.go:89] found id: ""
	I1204 21:20:45.565005   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.565016   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:45.565024   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:45.565088   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:45.601285   75464 cri.go:89] found id: ""
	I1204 21:20:45.601315   75464 logs.go:282] 0 containers: []
	W1204 21:20:45.601324   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:45.601333   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:45.601344   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:45.656229   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:45.656267   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:45.669851   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:45.669876   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:45.740674   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:45.740704   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:45.740720   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:45.845612   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:45.845657   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:44.065401   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:46.565091   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:42.765303   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:44.765539   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:44.123533   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:46.123595   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:48.389508   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:48.401989   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:48.402052   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:48.438477   75464 cri.go:89] found id: ""
	I1204 21:20:48.438502   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.438514   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:48.438521   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:48.438579   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:48.476096   75464 cri.go:89] found id: ""
	I1204 21:20:48.476129   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.476142   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:48.476151   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:48.476219   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:48.514085   75464 cri.go:89] found id: ""
	I1204 21:20:48.514112   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.514124   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:48.514132   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:48.514208   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:48.551360   75464 cri.go:89] found id: ""
	I1204 21:20:48.551409   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.551420   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:48.551428   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:48.551500   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:48.588424   75464 cri.go:89] found id: ""
	I1204 21:20:48.588463   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.588475   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:48.588483   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:48.588552   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:48.622842   75464 cri.go:89] found id: ""
	I1204 21:20:48.622868   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.622876   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:48.622881   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:48.622942   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:48.665525   75464 cri.go:89] found id: ""
	I1204 21:20:48.665575   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.665585   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:48.665592   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:48.665659   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:48.706554   75464 cri.go:89] found id: ""
	I1204 21:20:48.706581   75464 logs.go:282] 0 containers: []
	W1204 21:20:48.706591   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:48.706602   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:48.706617   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:48.757835   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:48.757870   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:48.771967   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:48.772003   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:48.843093   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:48.843123   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:48.843140   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:48.919637   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:48.919681   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:49.064435   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:51.565505   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:47.265612   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:49.764186   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:51.766867   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:48.637538   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:51.123581   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:51.457865   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:51.472751   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:51.472827   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:51.514777   75464 cri.go:89] found id: ""
	I1204 21:20:51.514814   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.514827   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:51.514835   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:51.514904   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:51.563932   75464 cri.go:89] found id: ""
	I1204 21:20:51.563957   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.563968   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:51.563976   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:51.564042   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:51.606714   75464 cri.go:89] found id: ""
	I1204 21:20:51.606752   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.606765   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:51.606773   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:51.606837   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:51.641391   75464 cri.go:89] found id: ""
	I1204 21:20:51.641427   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.641438   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:51.641446   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:51.641502   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:51.674971   75464 cri.go:89] found id: ""
	I1204 21:20:51.675000   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.675011   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:51.675019   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:51.675082   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:51.709211   75464 cri.go:89] found id: ""
	I1204 21:20:51.709242   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.709250   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:51.709257   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:51.709306   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:51.742425   75464 cri.go:89] found id: ""
	I1204 21:20:51.742460   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.742472   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:51.742480   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:51.742534   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:51.782292   75464 cri.go:89] found id: ""
	I1204 21:20:51.782339   75464 logs.go:282] 0 containers: []
	W1204 21:20:51.782351   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:51.782361   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:51.782380   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:51.833009   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:51.833040   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:51.846862   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:51.846905   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:51.911100   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:51.911129   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:51.911147   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:51.987841   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:51.987879   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:54.527097   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:54.541248   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:54.541344   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:54.582747   75464 cri.go:89] found id: ""
	I1204 21:20:54.582772   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.582780   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:54.582785   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:54.582844   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:54.615891   75464 cri.go:89] found id: ""
	I1204 21:20:54.615914   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.615922   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:54.615927   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:54.615983   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:54.648994   75464 cri.go:89] found id: ""
	I1204 21:20:54.649021   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.649031   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:54.649037   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:54.649095   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:54.683000   75464 cri.go:89] found id: ""
	I1204 21:20:54.683026   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.683034   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:54.683040   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:54.683100   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:54.715182   75464 cri.go:89] found id: ""
	I1204 21:20:54.715211   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.715221   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:54.715228   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:54.715290   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:54.752620   75464 cri.go:89] found id: ""
	I1204 21:20:54.752655   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.752667   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:54.752674   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:54.752740   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:54.790879   75464 cri.go:89] found id: ""
	I1204 21:20:54.790907   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.790919   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:54.790926   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:54.790994   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:54.824340   75464 cri.go:89] found id: ""
	I1204 21:20:54.824380   75464 logs.go:282] 0 containers: []
	W1204 21:20:54.824393   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:54.824405   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:54.824428   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:54.874330   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:54.874365   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:20:54.887537   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:54.887565   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:54.958675   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:54.958697   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:54.958709   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:55.036909   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:55.036946   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:54.064786   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:56.066189   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:54.264177   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:56.264283   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:53.622703   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:55.623495   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:57.625197   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:57.576603   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:20:57.590013   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:57.590080   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:57.624654   75464 cri.go:89] found id: ""
	I1204 21:20:57.624690   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.624701   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:20:57.624710   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:57.624774   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:57.660404   75464 cri.go:89] found id: ""
	I1204 21:20:57.660445   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.660457   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:20:57.660464   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:57.660528   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:57.693444   75464 cri.go:89] found id: ""
	I1204 21:20:57.693472   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.693483   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:20:57.693491   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:57.693558   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:57.729361   75464 cri.go:89] found id: ""
	I1204 21:20:57.729387   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.729397   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:20:57.729403   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:57.729454   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:57.760508   75464 cri.go:89] found id: ""
	I1204 21:20:57.760535   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.760546   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:20:57.760554   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:57.760608   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:57.794110   75464 cri.go:89] found id: ""
	I1204 21:20:57.794133   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.794142   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:20:57.794151   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:57.794214   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:57.827907   75464 cri.go:89] found id: ""
	I1204 21:20:57.827936   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.827947   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:57.827954   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:20:57.828014   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:20:57.860714   75464 cri.go:89] found id: ""
	I1204 21:20:57.860742   75464 logs.go:282] 0 containers: []
	W1204 21:20:57.860753   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:20:57.860763   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:20:57.860778   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:20:57.926898   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:20:57.926926   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:57.926943   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:20:58.000298   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:20:58.000328   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:20:58.035675   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:20:58.035708   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:58.086663   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:20:58.086698   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:21:00.600646   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:21:00.613485   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:21:00.613550   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:21:00.646324   75464 cri.go:89] found id: ""
	I1204 21:21:00.646349   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.646357   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:21:00.646362   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:21:00.646417   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:21:00.675779   75464 cri.go:89] found id: ""
	I1204 21:21:00.675802   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.675814   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:21:00.675821   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:21:00.675874   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:21:00.706244   75464 cri.go:89] found id: ""
	I1204 21:21:00.706264   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.706272   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:21:00.706278   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:21:00.706334   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:21:00.738086   75464 cri.go:89] found id: ""
	I1204 21:21:00.738114   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.738126   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:21:00.738134   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:21:00.738195   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:21:00.768646   75464 cri.go:89] found id: ""
	I1204 21:21:00.768671   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.768682   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:21:00.768690   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:21:00.768750   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:21:00.797939   75464 cri.go:89] found id: ""
	I1204 21:21:00.797960   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.797968   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:21:00.797973   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:21:00.798016   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:21:00.831928   75464 cri.go:89] found id: ""
	I1204 21:21:00.831959   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.831969   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:21:00.831977   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:21:00.832042   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:21:00.868462   75464 cri.go:89] found id: ""
	I1204 21:21:00.868489   75464 logs.go:282] 0 containers: []
	W1204 21:21:00.868498   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1204 21:21:00.868506   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:21:00.868518   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:21:00.881721   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:21:00.881745   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:21:00.949263   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:21:00.949290   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:21:00.949307   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:21:01.031940   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:21:01.031990   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:21:01.070545   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:21:01.070577   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:20:58.565420   75137 pod_ready.go:103] pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace has status "Ready":"False"
	I1204 21:20:59.064856   75137 pod_ready.go:82] duration metric: took 4m0.006397932s for pod "metrics-server-6867b74b74-9vlcd" in "kube-system" namespace to be "Ready" ...
	E1204 21:20:59.064881   75137 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1204 21:20:59.064889   75137 pod_ready.go:39] duration metric: took 4m8.671233417s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:20:59.064904   75137 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:20:59.064929   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:20:59.064974   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:20:59.119318   75137 cri.go:89] found id: "8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78"
	I1204 21:20:59.119340   75137 cri.go:89] found id: ""
	I1204 21:20:59.119347   75137 logs.go:282] 1 containers: [8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78]
	I1204 21:20:59.119421   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.125106   75137 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:20:59.125184   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:20:59.159498   75137 cri.go:89] found id: "e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98"
	I1204 21:20:59.159519   75137 cri.go:89] found id: ""
	I1204 21:20:59.159526   75137 logs.go:282] 1 containers: [e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98]
	I1204 21:20:59.159572   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.163228   75137 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:20:59.163302   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:20:59.198005   75137 cri.go:89] found id: "58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78"
	I1204 21:20:59.198031   75137 cri.go:89] found id: ""
	I1204 21:20:59.198039   75137 logs.go:282] 1 containers: [58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78]
	I1204 21:20:59.198083   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.202213   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:20:59.202280   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:20:59.236775   75137 cri.go:89] found id: "e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df"
	I1204 21:20:59.236796   75137 cri.go:89] found id: ""
	I1204 21:20:59.236803   75137 logs.go:282] 1 containers: [e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df]
	I1204 21:20:59.236852   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.241518   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:20:59.241600   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:20:59.279894   75137 cri.go:89] found id: "a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5"
	I1204 21:20:59.279924   75137 cri.go:89] found id: ""
	I1204 21:20:59.279934   75137 logs.go:282] 1 containers: [a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5]
	I1204 21:20:59.279990   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.284325   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:20:59.284394   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:20:59.328082   75137 cri.go:89] found id: "982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9"
	I1204 21:20:59.328107   75137 cri.go:89] found id: ""
	I1204 21:20:59.328117   75137 logs.go:282] 1 containers: [982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9]
	I1204 21:20:59.328178   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.332337   75137 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:20:59.332415   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:20:59.368110   75137 cri.go:89] found id: ""
	I1204 21:20:59.368135   75137 logs.go:282] 0 containers: []
	W1204 21:20:59.368144   75137 logs.go:284] No container was found matching "kindnet"
	I1204 21:20:59.368149   75137 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1204 21:20:59.368193   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1204 21:20:59.404941   75137 cri.go:89] found id: "07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317"
	I1204 21:20:59.404966   75137 cri.go:89] found id: "05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4"
	I1204 21:20:59.404972   75137 cri.go:89] found id: ""
	I1204 21:20:59.404980   75137 logs.go:282] 2 containers: [07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317 05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4]
	I1204 21:20:59.405041   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.409016   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:20:59.412752   75137 logs.go:123] Gathering logs for etcd [e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98] ...
	I1204 21:20:59.412783   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98"
	I1204 21:20:59.463143   75137 logs.go:123] Gathering logs for kube-scheduler [e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df] ...
	I1204 21:20:59.463178   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df"
	I1204 21:20:59.498782   75137 logs.go:123] Gathering logs for kube-controller-manager [982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9] ...
	I1204 21:20:59.498812   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9"
	I1204 21:20:59.555339   75137 logs.go:123] Gathering logs for storage-provisioner [07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317] ...
	I1204 21:20:59.555393   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317"
	I1204 21:20:59.591238   75137 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:20:59.591267   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:21:00.084121   75137 logs.go:123] Gathering logs for kubelet ...
	I1204 21:21:00.084161   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:21:00.154228   75137 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:21:00.154265   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 21:21:00.284768   75137 logs.go:123] Gathering logs for kube-apiserver [8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78] ...
	I1204 21:21:00.284802   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78"
	I1204 21:21:00.328421   75137 logs.go:123] Gathering logs for storage-provisioner [05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4] ...
	I1204 21:21:00.328452   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4"
	I1204 21:21:00.363327   75137 logs.go:123] Gathering logs for container status ...
	I1204 21:21:00.363352   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:21:00.402072   75137 logs.go:123] Gathering logs for dmesg ...
	I1204 21:21:00.402101   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:21:00.414448   75137 logs.go:123] Gathering logs for coredns [58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78] ...
	I1204 21:21:00.414471   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78"
	I1204 21:21:00.446721   75137 logs.go:123] Gathering logs for kube-proxy [a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5] ...
	I1204 21:21:00.446747   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5"
	I1204 21:20:58.265181   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:00.266303   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:00.124482   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:02.623096   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:03.620358   75464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:21:03.634415   75464 kubeadm.go:597] duration metric: took 4m4.247057397s to restartPrimaryControlPlane
	W1204 21:21:03.634499   75464 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1204 21:21:03.634530   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1204 21:21:02.985608   75137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:21:03.002352   75137 api_server.go:72] duration metric: took 4m20.333935611s to wait for apiserver process to appear ...
	I1204 21:21:03.002379   75137 api_server.go:88] waiting for apiserver healthz status ...
	I1204 21:21:03.002420   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:21:03.002475   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:21:03.043343   75137 cri.go:89] found id: "8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78"
	I1204 21:21:03.043387   75137 cri.go:89] found id: ""
	I1204 21:21:03.043398   75137 logs.go:282] 1 containers: [8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78]
	I1204 21:21:03.043451   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.047523   75137 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:21:03.047591   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:21:03.085843   75137 cri.go:89] found id: "e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98"
	I1204 21:21:03.085868   75137 cri.go:89] found id: ""
	I1204 21:21:03.085878   75137 logs.go:282] 1 containers: [e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98]
	I1204 21:21:03.085936   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.089957   75137 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:21:03.090008   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:21:03.124571   75137 cri.go:89] found id: "58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78"
	I1204 21:21:03.124590   75137 cri.go:89] found id: ""
	I1204 21:21:03.124597   75137 logs.go:282] 1 containers: [58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78]
	I1204 21:21:03.124633   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.128183   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:21:03.128241   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:21:03.159912   75137 cri.go:89] found id: "e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df"
	I1204 21:21:03.159935   75137 cri.go:89] found id: ""
	I1204 21:21:03.159942   75137 logs.go:282] 1 containers: [e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df]
	I1204 21:21:03.159991   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.163882   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:21:03.163934   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:21:03.202966   75137 cri.go:89] found id: "a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5"
	I1204 21:21:03.202983   75137 cri.go:89] found id: ""
	I1204 21:21:03.202990   75137 logs.go:282] 1 containers: [a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5]
	I1204 21:21:03.203028   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.206601   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:21:03.206656   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:21:03.239436   75137 cri.go:89] found id: "982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9"
	I1204 21:21:03.239461   75137 cri.go:89] found id: ""
	I1204 21:21:03.239471   75137 logs.go:282] 1 containers: [982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9]
	I1204 21:21:03.239522   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.243345   75137 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:21:03.243409   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:21:03.284225   75137 cri.go:89] found id: ""
	I1204 21:21:03.284260   75137 logs.go:282] 0 containers: []
	W1204 21:21:03.284269   75137 logs.go:284] No container was found matching "kindnet"
	I1204 21:21:03.284275   75137 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1204 21:21:03.284329   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1204 21:21:03.320487   75137 cri.go:89] found id: "07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317"
	I1204 21:21:03.320510   75137 cri.go:89] found id: "05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4"
	I1204 21:21:03.320514   75137 cri.go:89] found id: ""
	I1204 21:21:03.320520   75137 logs.go:282] 2 containers: [07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317 05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4]
	I1204 21:21:03.320572   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.324553   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:03.328284   75137 logs.go:123] Gathering logs for kubelet ...
	I1204 21:21:03.328307   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:21:03.398873   75137 logs.go:123] Gathering logs for kube-apiserver [8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78] ...
	I1204 21:21:03.398914   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78"
	I1204 21:21:03.452146   75137 logs.go:123] Gathering logs for kube-proxy [a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5] ...
	I1204 21:21:03.452175   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5"
	I1204 21:21:03.489830   75137 logs.go:123] Gathering logs for storage-provisioner [05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4] ...
	I1204 21:21:03.489860   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4"
	I1204 21:21:03.525086   75137 logs.go:123] Gathering logs for container status ...
	I1204 21:21:03.525115   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:21:03.569090   75137 logs.go:123] Gathering logs for kube-controller-manager [982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9] ...
	I1204 21:21:03.569123   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9"
	I1204 21:21:03.634685   75137 logs.go:123] Gathering logs for storage-provisioner [07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317] ...
	I1204 21:21:03.634714   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317"
	I1204 21:21:03.670229   75137 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:21:03.670258   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:21:04.127440   75137 logs.go:123] Gathering logs for dmesg ...
	I1204 21:21:04.127483   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:21:04.143058   75137 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:21:04.143102   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 21:21:04.254811   75137 logs.go:123] Gathering logs for etcd [e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98] ...
	I1204 21:21:04.254847   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98"
	I1204 21:21:04.310269   75137 logs.go:123] Gathering logs for coredns [58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78] ...
	I1204 21:21:04.310303   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78"
	I1204 21:21:04.344331   75137 logs.go:123] Gathering logs for kube-scheduler [e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df] ...
	I1204 21:21:04.344365   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df"
	I1204 21:21:06.883632   75137 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I1204 21:21:06.887845   75137 api_server.go:279] https://192.168.39.82:8443/healthz returned 200:
	ok
	I1204 21:21:06.888685   75137 api_server.go:141] control plane version: v1.31.2
	I1204 21:21:06.888701   75137 api_server.go:131] duration metric: took 3.886315455s to wait for apiserver health ...
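	(The healthz check recorded just above is an HTTPS GET against the apiserver's /healthz path that is expected to return 200 with the body "ok". A self-contained sketch of such a probe is below; the URL is copied from the log and the decision to skip certificate verification is an assumption made to keep the example short, not minikube's actual api_server.go behaviour.)

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint as it appears in the log above.
	url := "https://192.168.39.82:8443/healthz"

	// The apiserver serves a cluster-CA-signed certificate; a quick manual
	// probe often skips verification (assumption for brevity).
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
}
```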
	I1204 21:21:06.888708   75137 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 21:21:06.888730   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:21:06.888774   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:21:06.930295   75137 cri.go:89] found id: "8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78"
	I1204 21:21:06.930316   75137 cri.go:89] found id: ""
	I1204 21:21:06.930324   75137 logs.go:282] 1 containers: [8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78]
	I1204 21:21:06.930372   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:06.934529   75137 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:21:06.934620   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:21:06.970613   75137 cri.go:89] found id: "e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98"
	I1204 21:21:06.970641   75137 cri.go:89] found id: ""
	I1204 21:21:06.970651   75137 logs.go:282] 1 containers: [e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98]
	I1204 21:21:06.970696   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:06.974756   75137 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:21:06.974824   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:21:07.010285   75137 cri.go:89] found id: "58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78"
	I1204 21:21:07.010310   75137 cri.go:89] found id: ""
	I1204 21:21:07.010319   75137 logs.go:282] 1 containers: [58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78]
	I1204 21:21:07.010362   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:02.764114   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:04.764230   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:06.764928   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:04.623324   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:06.624331   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:08.140159   75464 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.505600399s)
	I1204 21:21:08.140254   75464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:21:08.159450   75464 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:21:08.169756   75464 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:21:08.179705   75464 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:21:08.179729   75464 kubeadm.go:157] found existing configuration files:
	
	I1204 21:21:08.179783   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:21:08.188796   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:21:08.188871   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:21:08.197758   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:21:08.206347   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:21:08.206409   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:21:08.215431   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:21:08.224674   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:21:08.224737   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:21:08.234337   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:21:08.243774   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:21:08.243833   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:21:08.253498   75464 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 21:21:08.321237   75464 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1204 21:21:08.321370   75464 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 21:21:08.458714   75464 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 21:21:08.458866   75464 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 21:21:08.459026   75464 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1204 21:21:08.639536   75464 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 21:21:08.641635   75464 out.go:235]   - Generating certificates and keys ...
	I1204 21:21:08.641739   75464 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 21:21:08.641826   75464 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 21:21:08.641935   75464 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1204 21:21:08.642068   75464 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1204 21:21:08.642175   75464 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1204 21:21:08.642223   75464 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1204 21:21:08.642498   75464 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1204 21:21:08.642914   75464 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1204 21:21:08.643567   75464 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1204 21:21:08.644276   75464 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1204 21:21:08.644502   75464 kubeadm.go:310] [certs] Using the existing "sa" key
	I1204 21:21:08.644553   75464 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 21:21:08.800107   75464 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 21:21:08.920050   75464 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 21:21:09.376869   75464 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 21:21:09.463826   75464 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 21:21:09.479167   75464 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 21:21:09.479321   75464 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 21:21:09.479434   75464 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 21:21:09.606736   75464 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 21:21:07.014564   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:21:07.014628   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:21:07.054654   75137 cri.go:89] found id: "e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df"
	I1204 21:21:07.054678   75137 cri.go:89] found id: ""
	I1204 21:21:07.054686   75137 logs.go:282] 1 containers: [e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df]
	I1204 21:21:07.054734   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:07.058625   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:21:07.058683   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:21:07.094238   75137 cri.go:89] found id: "a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5"
	I1204 21:21:07.094280   75137 cri.go:89] found id: ""
	I1204 21:21:07.094291   75137 logs.go:282] 1 containers: [a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5]
	I1204 21:21:07.094359   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:07.098427   75137 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:21:07.098484   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:21:07.135055   75137 cri.go:89] found id: "982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9"
	I1204 21:21:07.135079   75137 cri.go:89] found id: ""
	I1204 21:21:07.135088   75137 logs.go:282] 1 containers: [982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9]
	I1204 21:21:07.135145   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:07.139488   75137 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:21:07.139564   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:21:07.175963   75137 cri.go:89] found id: ""
	I1204 21:21:07.175989   75137 logs.go:282] 0 containers: []
	W1204 21:21:07.176002   75137 logs.go:284] No container was found matching "kindnet"
	I1204 21:21:07.176009   75137 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1204 21:21:07.176069   75137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1204 21:21:07.212003   75137 cri.go:89] found id: "07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317"
	I1204 21:21:07.212034   75137 cri.go:89] found id: "05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4"
	I1204 21:21:07.212040   75137 cri.go:89] found id: ""
	I1204 21:21:07.212050   75137 logs.go:282] 2 containers: [07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317 05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4]
	I1204 21:21:07.212115   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:07.216184   75137 ssh_runner.go:195] Run: which crictl
	I1204 21:21:07.219773   75137 logs.go:123] Gathering logs for dmesg ...
	I1204 21:21:07.219803   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:21:07.233282   75137 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:21:07.233307   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 21:21:07.341593   75137 logs.go:123] Gathering logs for etcd [e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98] ...
	I1204 21:21:07.341626   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e010906440f03357d353904e06050a1dd0a6347029e1c65384a4424094ca8b98"
	I1204 21:21:07.393994   75137 logs.go:123] Gathering logs for kube-scheduler [e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df] ...
	I1204 21:21:07.394024   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0c420ad52b6ec0d07a54f90119a521a099be166cf373db266b6c38d4fe6a6df"
	I1204 21:21:07.437177   75137 logs.go:123] Gathering logs for storage-provisioner [07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317] ...
	I1204 21:21:07.437205   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07fb0e487f5406b205c7036e9b437d1ba77b8ed00c66c0c6c8851c2bc8b68317"
	I1204 21:21:07.469913   75137 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:21:07.469952   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:21:07.822608   75137 logs.go:123] Gathering logs for container status ...
	I1204 21:21:07.822652   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 21:21:07.861671   75137 logs.go:123] Gathering logs for kubelet ...
	I1204 21:21:07.861703   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:21:07.933833   75137 logs.go:123] Gathering logs for kube-apiserver [8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78] ...
	I1204 21:21:07.933876   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b9e2903e35bf2a6554c5c537336a5ee56a48792d9dc2bddb1d85b72f2155a78"
	I1204 21:21:07.976184   75137 logs.go:123] Gathering logs for coredns [58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78] ...
	I1204 21:21:07.976215   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58b6a0437b8435b563a30351d98d26bbd245d5d1ced46cef215a723243446c78"
	I1204 21:21:08.011181   75137 logs.go:123] Gathering logs for kube-proxy [a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5] ...
	I1204 21:21:08.011206   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a59819135d6bfa0606125f3e716e5e9ed8db81d539ee1e4b7678d0e9b6ab0bf5"
	I1204 21:21:08.053404   75137 logs.go:123] Gathering logs for kube-controller-manager [982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9] ...
	I1204 21:21:08.053430   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 982e9c35dc47bab452a5a0abf5347f417a1d734ba7c13e77c398398188ae0de9"
	I1204 21:21:08.113301   75137 logs.go:123] Gathering logs for storage-provisioner [05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4] ...
	I1204 21:21:08.113402   75137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05e1d1192577d3e039d9ccc63b269eb859cff327b4ac985398bd3680d4705bf4"
	I1204 21:21:10.665164   75137 system_pods.go:59] 8 kube-system pods found
	I1204 21:21:10.665195   75137 system_pods.go:61] "coredns-7c65d6cfc9-ct5xn" [be113b96-b21f-4fd5-8cd9-11b149a0a838] Running
	I1204 21:21:10.665200   75137 system_pods.go:61] "etcd-embed-certs-566991" [23603883-2c42-48ff-95f5-d58f04bab630] Running
	I1204 21:21:10.665204   75137 system_pods.go:61] "kube-apiserver-embed-certs-566991" [880279d0-9c57-44b1-b223-cea07fc8552e] Running
	I1204 21:21:10.665208   75137 system_pods.go:61] "kube-controller-manager-embed-certs-566991" [1512be05-cbf1-48ca-a0a5-db1e320040e0] Running
	I1204 21:21:10.665211   75137 system_pods.go:61] "kube-proxy-4fv72" [22b84591-6767-4414-9869-9d89206a03f2] Running
	I1204 21:21:10.665215   75137 system_pods.go:61] "kube-scheduler-embed-certs-566991" [1eca2a77-0f2a-4d94-992e-22acf8f54649] Running
	I1204 21:21:10.665220   75137 system_pods.go:61] "metrics-server-6867b74b74-9vlcd" [1acb08f3-e403-458d-b3e2-e32c07da6afb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:21:10.665225   75137 system_pods.go:61] "storage-provisioner" [f8acdb07-16e7-457f-81b8-85416b849890] Running
	I1204 21:21:10.665234   75137 system_pods.go:74] duration metric: took 3.776519738s to wait for pod list to return data ...
	I1204 21:21:10.665240   75137 default_sa.go:34] waiting for default service account to be created ...
	I1204 21:21:10.667483   75137 default_sa.go:45] found service account: "default"
	I1204 21:21:10.667501   75137 default_sa.go:55] duration metric: took 2.252763ms for default service account to be created ...
	I1204 21:21:10.667508   75137 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 21:21:10.671331   75137 system_pods.go:86] 8 kube-system pods found
	I1204 21:21:10.671351   75137 system_pods.go:89] "coredns-7c65d6cfc9-ct5xn" [be113b96-b21f-4fd5-8cd9-11b149a0a838] Running
	I1204 21:21:10.671356   75137 system_pods.go:89] "etcd-embed-certs-566991" [23603883-2c42-48ff-95f5-d58f04bab630] Running
	I1204 21:21:10.671360   75137 system_pods.go:89] "kube-apiserver-embed-certs-566991" [880279d0-9c57-44b1-b223-cea07fc8552e] Running
	I1204 21:21:10.671363   75137 system_pods.go:89] "kube-controller-manager-embed-certs-566991" [1512be05-cbf1-48ca-a0a5-db1e320040e0] Running
	I1204 21:21:10.671366   75137 system_pods.go:89] "kube-proxy-4fv72" [22b84591-6767-4414-9869-9d89206a03f2] Running
	I1204 21:21:10.671386   75137 system_pods.go:89] "kube-scheduler-embed-certs-566991" [1eca2a77-0f2a-4d94-992e-22acf8f54649] Running
	I1204 21:21:10.671396   75137 system_pods.go:89] "metrics-server-6867b74b74-9vlcd" [1acb08f3-e403-458d-b3e2-e32c07da6afb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:21:10.671402   75137 system_pods.go:89] "storage-provisioner" [f8acdb07-16e7-457f-81b8-85416b849890] Running
	I1204 21:21:10.671414   75137 system_pods.go:126] duration metric: took 3.900254ms to wait for k8s-apps to be running ...
	I1204 21:21:10.671426   75137 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 21:21:10.671467   75137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:21:10.687086   75137 system_svc.go:56] duration metric: took 15.655514ms WaitForService to wait for kubelet
	I1204 21:21:10.687105   75137 kubeadm.go:582] duration metric: took 4m28.018694904s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 21:21:10.687123   75137 node_conditions.go:102] verifying NodePressure condition ...
	I1204 21:21:10.689250   75137 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 21:21:10.689267   75137 node_conditions.go:123] node cpu capacity is 2
	I1204 21:21:10.689277   75137 node_conditions.go:105] duration metric: took 2.149506ms to run NodePressure ...
	I1204 21:21:10.689287   75137 start.go:241] waiting for startup goroutines ...
	I1204 21:21:10.689296   75137 start.go:246] waiting for cluster config update ...
	I1204 21:21:10.689306   75137 start.go:255] writing updated cluster config ...
	I1204 21:21:10.689547   75137 ssh_runner.go:195] Run: rm -f paused
	I1204 21:21:10.738387   75137 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1204 21:21:10.740254   75137 out.go:177] * Done! kubectl is now configured to use "embed-certs-566991" cluster and "default" namespace by default
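	(The system_pods listing above is a straightforward List of kube-system pods once the apiserver is healthy. A rough client-go equivalent that lists those pods and reports whether each is Ready is sketched below; the kubeconfig path is an assumption, and this is not the report's own system_pods.go code.)

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%-55s phase=%-9s ready=%v\n", p.Name, p.Status.Phase, ready)
	}
}
```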
	I1204 21:21:09.608599   75464 out.go:235]   - Booting up control plane ...
	I1204 21:21:09.608729   75464 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 21:21:09.613477   75464 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 21:21:09.614444   75464 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 21:21:09.623091   75464 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 21:21:09.626249   75464 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1204 21:21:08.765095   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:10.765470   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:09.125585   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:11.624603   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:13.264238   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:15.265563   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:13.624873   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:16.123483   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:17.764078   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:19.765682   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:18.626401   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:21.125606   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:22.264711   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:24.265632   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:26.764992   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:23.623351   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:25.623547   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:27.624579   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:28.765133   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:31.264203   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:30.123937   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:32.623876   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:33.264732   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:35.765165   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:35.123685   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:37.123863   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:38.264907   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:40.265233   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:39.124651   75746 pod_ready.go:103] pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:40.117461   75746 pod_ready.go:82] duration metric: took 4m0.000125257s for pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace to be "Ready" ...
	E1204 21:21:40.117486   75746 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-lbx5p" in "kube-system" namespace to be "Ready" (will not retry!)
	I1204 21:21:40.117508   75746 pod_ready.go:39] duration metric: took 4m13.544219225s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:21:40.117564   75746 kubeadm.go:597] duration metric: took 4m22.244889794s to restartPrimaryControlPlane
	W1204 21:21:40.117617   75746 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1204 21:21:40.117646   75746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1204 21:21:42.764614   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:44.765642   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:49.627118   75464 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1204 21:21:49.627744   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:21:49.627940   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:21:47.264873   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:49.765483   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:54.628283   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:21:54.628526   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:21:52.264073   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:54.264333   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:56.267410   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:21:58.764653   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:00.765653   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:04.628774   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:22:04.629010   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
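	(The [kubelet-check] lines above show kubeadm polling http://localhost:10248/healthz and getting "connection refused" while the kubelet is down. A minimal retry loop for that probe might look like the following; the 5s interval is illustrative, and the 4m budget mirrors the "can take up to 4m0s" note in the log rather than kubeadm's internal code.)

```go
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// The kubelet healthz endpoint kubeadm's kubelet-check probes.
	const url = "http://127.0.0.1:10248/healthz"

	// Illustrative overall budget, matching the 4m0s mentioned in the log.
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()

	client := &http.Client{Timeout: 2 * time.Second}
	for {
		resp, err := client.Get(url)
		if err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			fmt.Println("kubelet is healthy")
			return
		}
		if err == nil {
			resp.Body.Close()
		}
		select {
		case <-ctx.Done():
			fmt.Println("gave up waiting for kubelet:", ctx.Err())
			return
		case <-time.After(5 * time.Second):
			// retry
		}
	}
}
```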
	I1204 21:22:06.288530   75746 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.170858751s)
	I1204 21:22:06.288613   75746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:22:06.309458   75746 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:22:06.322805   75746 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:22:06.336482   75746 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:22:06.336508   75746 kubeadm.go:157] found existing configuration files:
	
	I1204 21:22:06.336558   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1204 21:22:06.348599   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:22:06.348656   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:22:06.362232   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1204 21:22:06.379259   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:22:06.379348   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:22:06.411281   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1204 21:22:06.422033   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:22:06.422108   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:22:06.432505   75746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1204 21:22:06.441734   75746 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:22:06.441789   75746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:22:06.451237   75746 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 21:22:06.498732   75746 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1204 21:22:06.498852   75746 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 21:22:06.614368   75746 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 21:22:06.614469   75746 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 21:22:06.614599   75746 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1204 21:22:06.623454   75746 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 21:22:03.264992   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:05.765395   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:06.625133   75746 out.go:235]   - Generating certificates and keys ...
	I1204 21:22:06.625245   75746 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 21:22:06.625364   75746 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 21:22:06.625491   75746 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1204 21:22:06.625594   75746 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1204 21:22:06.625712   75746 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1204 21:22:06.625792   75746 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1204 21:22:06.625889   75746 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1204 21:22:06.625984   75746 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1204 21:22:06.626100   75746 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1204 21:22:06.626210   75746 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1204 21:22:06.626277   75746 kubeadm.go:310] [certs] Using the existing "sa" key
	I1204 21:22:06.626348   75746 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 21:22:06.726450   75746 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 21:22:06.873790   75746 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1204 21:22:07.175994   75746 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 21:22:07.250702   75746 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 21:22:07.320319   75746 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 21:22:07.320901   75746 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 21:22:07.323434   75746 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 21:22:07.325316   75746 out.go:235]   - Booting up control plane ...
	I1204 21:22:07.325446   75746 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 21:22:07.325543   75746 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 21:22:07.326549   75746 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 21:22:07.347127   75746 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 21:22:07.353453   75746 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 21:22:07.353587   75746 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 21:22:07.488768   75746 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1204 21:22:07.488952   75746 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1204 21:22:07.765784   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:10.265661   75012 pod_ready.go:103] pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:11.758507   75012 pod_ready.go:82] duration metric: took 4m0.000236813s for pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace to be "Ready" ...
	E1204 21:22:11.758550   75012 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-wl8gw" in "kube-system" namespace to be "Ready" (will not retry!)
	I1204 21:22:11.758567   75012 pod_ready.go:39] duration metric: took 4m14.511728433s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:22:11.758593   75012 kubeadm.go:597] duration metric: took 4m21.138454983s to restartPrimaryControlPlane
	W1204 21:22:11.758643   75012 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1204 21:22:11.758668   75012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1204 21:22:07.993325   75746 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 504.943417ms
	I1204 21:22:07.993405   75746 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1204 21:22:12.997741   75746 kubeadm.go:310] [api-check] The API server is healthy after 5.001906934s
	I1204 21:22:13.012187   75746 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1204 21:22:13.029586   75746 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1204 21:22:13.062375   75746 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1204 21:22:13.062633   75746 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-439360 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1204 21:22:13.077941   75746 kubeadm.go:310] [bootstrap-token] Using token: 5mut2g.pz4sir8q7093cs2b
	I1204 21:22:13.079394   75746 out.go:235]   - Configuring RBAC rules ...
	I1204 21:22:13.079556   75746 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1204 21:22:13.088458   75746 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1204 21:22:13.095952   75746 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1204 21:22:13.103530   75746 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1204 21:22:13.106875   75746 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1204 21:22:13.110658   75746 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1204 21:22:13.404565   75746 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1204 21:22:13.831997   75746 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1204 21:22:14.404650   75746 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1204 21:22:14.404678   75746 kubeadm.go:310] 
	I1204 21:22:14.404764   75746 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1204 21:22:14.404789   75746 kubeadm.go:310] 
	I1204 21:22:14.404894   75746 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1204 21:22:14.404903   75746 kubeadm.go:310] 
	I1204 21:22:14.404930   75746 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1204 21:22:14.404981   75746 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1204 21:22:14.405060   75746 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1204 21:22:14.405088   75746 kubeadm.go:310] 
	I1204 21:22:14.405203   75746 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1204 21:22:14.405216   75746 kubeadm.go:310] 
	I1204 21:22:14.405286   75746 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1204 21:22:14.405296   75746 kubeadm.go:310] 
	I1204 21:22:14.405370   75746 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1204 21:22:14.405487   75746 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1204 21:22:14.405604   75746 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1204 21:22:14.405621   75746 kubeadm.go:310] 
	I1204 21:22:14.405701   75746 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1204 21:22:14.405772   75746 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1204 21:22:14.405781   75746 kubeadm.go:310] 
	I1204 21:22:14.405853   75746 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 5mut2g.pz4sir8q7093cs2b \
	I1204 21:22:14.406000   75746 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 \
	I1204 21:22:14.406034   75746 kubeadm.go:310] 	--control-plane 
	I1204 21:22:14.406043   75746 kubeadm.go:310] 
	I1204 21:22:14.406112   75746 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1204 21:22:14.406119   75746 kubeadm.go:310] 
	I1204 21:22:14.406241   75746 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 5mut2g.pz4sir8q7093cs2b \
	I1204 21:22:14.406397   75746 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 
	I1204 21:22:14.407013   75746 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 21:22:14.407049   75746 cni.go:84] Creating CNI manager for ""
	I1204 21:22:14.407060   75746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:22:14.408949   75746 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 21:22:14.410361   75746 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 21:22:14.420749   75746 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
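The two lines above cover minikube's bridge CNI setup: it creates /etc/cni/net.d and copies a small conflist onto the node. As a hedged illustration only (the exact 496-byte file is not shown in this log), a bridge + host-local conflist generally looks like the one written by this Go sketch; every field value here is an assumption, not minikube's actual file:

```go
// Illustrative sketch: writes a typical bridge+host-local CNI conflist.
// The JSON content is an assumption for illustration, not the file minikube scp's.
package main

import (
	"fmt"
	"os"
)

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// On a real node this would land in /etc/cni/net.d and requires root;
	// writing to the working directory keeps the example self-contained.
	if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Println("write failed:", err)
	}
}
```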
	I1204 21:22:14.439214   75746 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 21:22:14.439295   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:14.439322   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-439360 minikube.k8s.io/updated_at=2024_12_04T21_22_14_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59 minikube.k8s.io/name=default-k8s-diff-port-439360 minikube.k8s.io/primary=true
	I1204 21:22:14.459582   75746 ops.go:34] apiserver oom_adj: -16
	I1204 21:22:14.637938   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:15.138980   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:15.638942   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:16.138381   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:16.638528   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:17.138320   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:17.637995   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:18.138540   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:18.638754   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:19.138113   75746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:19.246385   75746 kubeadm.go:1113] duration metric: took 4.807160948s to wait for elevateKubeSystemPrivileges
	I1204 21:22:19.246430   75746 kubeadm.go:394] duration metric: took 5m1.419721853s to StartCluster
	I1204 21:22:19.246455   75746 settings.go:142] acquiring lock: {Name:mk51df5708ef0b8fe125ead566b8d3e857234e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:22:19.246556   75746 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:22:19.249082   75746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/kubeconfig: {Name:mk338cb7deb77a607d0c199d94a556bdfd19bef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:22:19.249393   75746 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.171 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 21:22:19.249684   75746 config.go:182] Loaded profile config "default-k8s-diff-port-439360": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:22:19.249745   75746 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 21:22:19.249861   75746 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-439360"
	I1204 21:22:19.249884   75746 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-439360"
	W1204 21:22:19.249896   75746 addons.go:243] addon storage-provisioner should already be in state true
	I1204 21:22:19.249928   75746 host.go:66] Checking if "default-k8s-diff-port-439360" exists ...
	I1204 21:22:19.250440   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:19.250479   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:19.250557   75746 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-439360"
	I1204 21:22:19.250580   75746 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-439360"
	I1204 21:22:19.250737   75746 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-439360"
	I1204 21:22:19.250757   75746 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-439360"
	W1204 21:22:19.250765   75746 addons.go:243] addon metrics-server should already be in state true
	I1204 21:22:19.250798   75746 host.go:66] Checking if "default-k8s-diff-port-439360" exists ...
	I1204 21:22:19.251048   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:19.251091   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:19.251249   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:19.251294   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:19.251622   75746 out.go:177] * Verifying Kubernetes components...
	I1204 21:22:19.252993   75746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:22:19.269179   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44783
	I1204 21:22:19.269441   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35391
	I1204 21:22:19.269740   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:19.269833   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:19.270300   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:22:19.270324   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:19.270400   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:22:19.270418   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:19.270418   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34247
	I1204 21:22:19.270725   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:19.270832   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:19.270866   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:19.270904   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetState
	I1204 21:22:19.271326   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:22:19.271337   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:19.271415   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:19.271463   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:19.271686   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:19.272330   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:19.272388   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:19.274803   75746 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-439360"
	W1204 21:22:19.274824   75746 addons.go:243] addon default-storageclass should already be in state true
	I1204 21:22:19.274853   75746 host.go:66] Checking if "default-k8s-diff-port-439360" exists ...
	I1204 21:22:19.275234   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:19.275267   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:19.291309   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40009
	I1204 21:22:19.291961   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:19.291985   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41279
	I1204 21:22:19.292400   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:22:19.292420   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:19.292783   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:19.292833   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:19.293039   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetState
	I1204 21:22:19.293113   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36479
	I1204 21:22:19.293349   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:22:19.293362   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:19.293726   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:19.294210   75746 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:19.294239   75746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:19.294431   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:19.294890   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:22:19.294908   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:19.295400   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:19.295584   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetState
	I1204 21:22:19.295720   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:22:19.297304   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:22:19.297592   75746 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:22:19.298747   75746 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1204 21:22:19.299871   75746 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:22:19.299895   75746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 21:22:19.299916   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:22:19.301582   75746 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1204 21:22:19.301598   75746 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1204 21:22:19.301612   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:22:19.303499   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:22:19.305018   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:22:19.305367   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:22:19.305393   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:22:19.305566   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:22:19.305775   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:22:19.305848   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:22:19.305869   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:22:19.305912   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:22:19.306121   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:22:19.306313   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:22:19.306389   75746 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa Username:docker}
	I1204 21:22:19.306691   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:22:19.306872   75746 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa Username:docker}
	I1204 21:22:19.314163   75746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42045
	I1204 21:22:19.314569   75746 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:19.315106   75746 main.go:141] libmachine: Using API Version  1
	I1204 21:22:19.315134   75746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:19.315690   75746 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:19.315993   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetState
	I1204 21:22:19.317928   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .DriverName
	I1204 21:22:19.318171   75746 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 21:22:19.318182   75746 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 21:22:19.318195   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHHostname
	I1204 21:22:19.321203   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:22:19.321582   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:46:31", ip: ""} in network mk-default-k8s-diff-port-439360: {Iface:virbr2 ExpiryTime:2024-12-04 22:17:03 +0000 UTC Type:0 Mac:52:54:00:ec:46:31 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:default-k8s-diff-port-439360 Clientid:01:52:54:00:ec:46:31}
	I1204 21:22:19.321599   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | domain default-k8s-diff-port-439360 has defined IP address 192.168.50.171 and MAC address 52:54:00:ec:46:31 in network mk-default-k8s-diff-port-439360
	I1204 21:22:19.321855   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHPort
	I1204 21:22:19.322059   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHKeyPath
	I1204 21:22:19.322226   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .GetSSHUsername
	I1204 21:22:19.322367   75746 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/default-k8s-diff-port-439360/id_rsa Username:docker}
	I1204 21:22:19.522886   75746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:22:19.577656   75746 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-439360" to be "Ready" ...
	I1204 21:22:19.586712   75746 node_ready.go:49] node "default-k8s-diff-port-439360" has status "Ready":"True"
	I1204 21:22:19.586737   75746 node_ready.go:38] duration metric: took 9.034653ms for node "default-k8s-diff-port-439360" to be "Ready" ...
	I1204 21:22:19.586745   75746 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:22:19.595683   75746 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4jmcl" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:19.650177   75746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:22:19.708333   75746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 21:22:19.721106   75746 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1204 21:22:19.721151   75746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1204 21:22:19.793058   75746 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1204 21:22:19.793105   75746 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1204 21:22:19.926884   75746 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 21:22:19.926911   75746 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1204 21:22:20.028322   75746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 21:22:20.668142   75746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.017919983s)
	I1204 21:22:20.668197   75746 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:20.668200   75746 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:20.668223   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Close
	I1204 21:22:20.668211   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Close
	I1204 21:22:20.668613   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Closing plugin on server side
	I1204 21:22:20.668627   75746 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:20.668640   75746 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:20.668660   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Closing plugin on server side
	I1204 21:22:20.668687   75746 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:20.668701   75746 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:20.668710   75746 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:20.668729   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Close
	I1204 21:22:20.668663   75746 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:20.668789   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Close
	I1204 21:22:20.668936   75746 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:20.668981   75746 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:20.670242   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Closing plugin on server side
	I1204 21:22:20.670255   75746 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:20.670276   75746 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:20.713659   75746 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:20.713680   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Close
	I1204 21:22:20.714056   75746 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:20.714107   75746 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:20.714076   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Closing plugin on server side
	I1204 21:22:21.064703   75746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.03633998s)
	I1204 21:22:21.064768   75746 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:21.064783   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Close
	I1204 21:22:21.065188   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) DBG | Closing plugin on server side
	I1204 21:22:21.065197   75746 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:21.065212   75746 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:21.065220   75746 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:21.065233   75746 main.go:141] libmachine: (default-k8s-diff-port-439360) Calling .Close
	I1204 21:22:21.065472   75746 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:21.065490   75746 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:21.065502   75746 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-439360"
	I1204 21:22:21.067198   75746 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1204 21:22:21.068410   75746 addons.go:510] duration metric: took 1.818663539s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1204 21:22:21.602398   75746 pod_ready.go:93] pod "coredns-7c65d6cfc9-4jmcl" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:21.602428   75746 pod_ready.go:82] duration metric: took 2.006718822s for pod "coredns-7c65d6cfc9-4jmcl" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:21.602442   75746 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tzhgh" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:24.629623   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:22:24.629860   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:22:23.610993   75746 pod_ready.go:103] pod "coredns-7c65d6cfc9-tzhgh" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:24.117785   75746 pod_ready.go:93] pod "coredns-7c65d6cfc9-tzhgh" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:24.117813   75746 pod_ready.go:82] duration metric: took 2.51536279s for pod "coredns-7c65d6cfc9-tzhgh" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:24.117824   75746 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:24.124800   75746 pod_ready.go:93] pod "etcd-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:24.124823   75746 pod_ready.go:82] duration metric: took 6.990353ms for pod "etcd-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:24.124832   75746 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:24.131040   75746 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:24.131061   75746 pod_ready.go:82] duration metric: took 6.222286ms for pod "kube-apiserver-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:24.131070   75746 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:26.137404   75746 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:26.637414   75746 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:26.637440   75746 pod_ready.go:82] duration metric: took 2.506362827s for pod "kube-controller-manager-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:26.637452   75746 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hclwt" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:26.641759   75746 pod_ready.go:93] pod "kube-proxy-hclwt" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:26.641781   75746 pod_ready.go:82] duration metric: took 4.323262ms for pod "kube-proxy-hclwt" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:26.641793   75746 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:28.148731   75746 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-439360" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:28.148753   75746 pod_ready.go:82] duration metric: took 1.50695195s for pod "kube-scheduler-default-k8s-diff-port-439360" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:28.148761   75746 pod_ready.go:39] duration metric: took 8.562005978s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
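The pod_ready.go waits above poll each system pod until its Ready condition reports True, or give up at the timeout (as happened with metrics-server earlier in this log). A minimal sketch of that kind of poll, assuming k8s.io/client-go and using illustrative names rather than minikube's own helpers:

```go
// Hedged sketch of a Ready-condition poll like the pod_ready.go waits above.
// Assumes k8s.io/client-go; function and pod names are illustrative only.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // keep polling on transient errors
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(context.Background(), cs, "kube-system", "etcd-default-k8s-diff-port-439360", 6*time.Minute); err != nil {
		fmt.Println("pod not Ready:", err)
	}
}
```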
	I1204 21:22:28.148776   75746 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:22:28.148825   75746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:22:28.165983   75746 api_server.go:72] duration metric: took 8.916515972s to wait for apiserver process to appear ...
	I1204 21:22:28.166013   75746 api_server.go:88] waiting for apiserver healthz status ...
	I1204 21:22:28.166034   75746 api_server.go:253] Checking apiserver healthz at https://192.168.50.171:8444/healthz ...
	I1204 21:22:28.170244   75746 api_server.go:279] https://192.168.50.171:8444/healthz returned 200:
	ok
	I1204 21:22:28.171215   75746 api_server.go:141] control plane version: v1.31.2
	I1204 21:22:28.171245   75746 api_server.go:131] duration metric: took 5.223023ms to wait for apiserver health ...
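The api_server.go lines above probe https://192.168.50.171:8444/healthz until it returns 200 "ok" and then read the control-plane version. A hedged sketch of such a probe is below; skipping TLS verification is an assumption made to keep the example short, not necessarily what minikube does:

```go
// Hedged sketch of the apiserver healthz probe logged above.
// URL comes from the log; the skip-verify transport is an illustrative assumption.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver presents a cluster-CA-signed cert; a quick liveness
			// probe can skip verification (assumption, not best practice).
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.50.171:8444/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, string(body)) // expect "200 ok"
}
```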
	I1204 21:22:28.171257   75746 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 21:22:28.177524   75746 system_pods.go:59] 9 kube-system pods found
	I1204 21:22:28.177548   75746 system_pods.go:61] "coredns-7c65d6cfc9-4jmcl" [e8d193d2-0374-43a5-addd-96cdee963cc9] Running
	I1204 21:22:28.177553   75746 system_pods.go:61] "coredns-7c65d6cfc9-tzhgh" [aafae17b-5a47-4a70-bc80-94cbbca8fe38] Running
	I1204 21:22:28.177557   75746 system_pods.go:61] "etcd-default-k8s-diff-port-439360" [e4293118-8718-4722-b6b6-722896a605e9] Running
	I1204 21:22:28.177560   75746 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-439360" [71be94bb-bd89-4f40-85eb-0a672f29d959] Running
	I1204 21:22:28.177563   75746 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-439360" [85946631-ff2a-4203-800d-00a23a3c3408] Running
	I1204 21:22:28.177567   75746 system_pods.go:61] "kube-proxy-hclwt" [eef6c093-2186-437b-9a13-c8bafbcb4f78] Running
	I1204 21:22:28.177570   75746 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-439360" [0ed74c15-2c48-4a62-8bbf-0f2a272bb119] Running
	I1204 21:22:28.177577   75746 system_pods.go:61] "metrics-server-6867b74b74-v88hj" [9b6c696c-e110-4d53-98c9-41069407b45b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:22:28.177582   75746 system_pods.go:61] "storage-provisioner" [aac88490-a422-4889-bff4-b180638846cf] Running
	I1204 21:22:28.177592   75746 system_pods.go:74] duration metric: took 6.322477ms to wait for pod list to return data ...
	I1204 21:22:28.177605   75746 default_sa.go:34] waiting for default service account to be created ...
	I1204 21:22:28.180243   75746 default_sa.go:45] found service account: "default"
	I1204 21:22:28.180262   75746 default_sa.go:55] duration metric: took 2.648929ms for default service account to be created ...
	I1204 21:22:28.180270   75746 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 21:22:28.309199   75746 system_pods.go:86] 9 kube-system pods found
	I1204 21:22:28.309229   75746 system_pods.go:89] "coredns-7c65d6cfc9-4jmcl" [e8d193d2-0374-43a5-addd-96cdee963cc9] Running
	I1204 21:22:28.309237   75746 system_pods.go:89] "coredns-7c65d6cfc9-tzhgh" [aafae17b-5a47-4a70-bc80-94cbbca8fe38] Running
	I1204 21:22:28.309244   75746 system_pods.go:89] "etcd-default-k8s-diff-port-439360" [e4293118-8718-4722-b6b6-722896a605e9] Running
	I1204 21:22:28.309251   75746 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-439360" [71be94bb-bd89-4f40-85eb-0a672f29d959] Running
	I1204 21:22:28.309257   75746 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-439360" [85946631-ff2a-4203-800d-00a23a3c3408] Running
	I1204 21:22:28.309263   75746 system_pods.go:89] "kube-proxy-hclwt" [eef6c093-2186-437b-9a13-c8bafbcb4f78] Running
	I1204 21:22:28.309269   75746 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-439360" [0ed74c15-2c48-4a62-8bbf-0f2a272bb119] Running
	I1204 21:22:28.309283   75746 system_pods.go:89] "metrics-server-6867b74b74-v88hj" [9b6c696c-e110-4d53-98c9-41069407b45b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:22:28.309295   75746 system_pods.go:89] "storage-provisioner" [aac88490-a422-4889-bff4-b180638846cf] Running
	I1204 21:22:28.309307   75746 system_pods.go:126] duration metric: took 129.030872ms to wait for k8s-apps to be running ...
	I1204 21:22:28.309320   75746 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 21:22:28.309379   75746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:22:28.324307   75746 system_svc.go:56] duration metric: took 14.979432ms WaitForService to wait for kubelet
	I1204 21:22:28.324336   75746 kubeadm.go:582] duration metric: took 9.074873675s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 21:22:28.324353   75746 node_conditions.go:102] verifying NodePressure condition ...
	I1204 21:22:28.507218   75746 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 21:22:28.507245   75746 node_conditions.go:123] node cpu capacity is 2
	I1204 21:22:28.507256   75746 node_conditions.go:105] duration metric: took 182.898538ms to run NodePressure ...
	I1204 21:22:28.507268   75746 start.go:241] waiting for startup goroutines ...
	I1204 21:22:28.507277   75746 start.go:246] waiting for cluster config update ...
	I1204 21:22:28.507291   75746 start.go:255] writing updated cluster config ...
	I1204 21:22:28.507595   75746 ssh_runner.go:195] Run: rm -f paused
	I1204 21:22:28.556033   75746 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1204 21:22:28.557819   75746 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-439360" cluster and "default" namespace by default
	I1204 21:22:37.891653   75012 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.132950428s)
	I1204 21:22:37.891741   75012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:22:37.906656   75012 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 21:22:37.915649   75012 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:22:37.925588   75012 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:22:37.925609   75012 kubeadm.go:157] found existing configuration files:
	
	I1204 21:22:37.925655   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:22:37.934524   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:22:37.934575   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:22:37.943390   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:22:37.951745   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:22:37.951797   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:22:37.960501   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:22:37.969208   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:22:37.969254   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:22:37.978350   75012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:22:37.986861   75012 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:22:37.986930   75012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 21:22:37.995584   75012 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 21:22:38.047149   75012 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1204 21:22:38.047224   75012 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 21:22:38.155964   75012 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 21:22:38.156086   75012 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 21:22:38.156215   75012 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1204 21:22:38.164743   75012 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 21:22:38.166662   75012 out.go:235]   - Generating certificates and keys ...
	I1204 21:22:38.166755   75012 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 21:22:38.166837   75012 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 21:22:38.166935   75012 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1204 21:22:38.167045   75012 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1204 21:22:38.167154   75012 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1204 21:22:38.167230   75012 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1204 21:22:38.167325   75012 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1204 21:22:38.167446   75012 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1204 21:22:38.169398   75012 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1204 21:22:38.169495   75012 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1204 21:22:38.169530   75012 kubeadm.go:310] [certs] Using the existing "sa" key
	I1204 21:22:38.169602   75012 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 21:22:38.350215   75012 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 21:22:38.469586   75012 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1204 21:22:38.636991   75012 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 21:22:38.883785   75012 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 21:22:39.014632   75012 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 21:22:39.015041   75012 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 21:22:39.017806   75012 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 21:22:39.019631   75012 out.go:235]   - Booting up control plane ...
	I1204 21:22:39.019760   75012 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 21:22:39.019831   75012 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 21:22:39.019895   75012 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 21:22:39.037352   75012 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 21:22:39.044419   75012 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 21:22:39.044489   75012 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 21:22:39.166636   75012 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1204 21:22:39.166782   75012 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1204 21:22:39.667748   75012 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.068181ms
	I1204 21:22:39.667876   75012 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1204 21:22:44.669497   75012 kubeadm.go:310] [api-check] The API server is healthy after 5.001931003s
	I1204 21:22:44.682282   75012 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1204 21:22:44.700056   75012 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1204 21:22:44.745563   75012 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1204 21:22:44.745769   75012 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-534766 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1204 21:22:44.761584   75012 kubeadm.go:310] [bootstrap-token] Using token: 5m2kn8.vv0jgg4evfqo8hls
	I1204 21:22:44.762802   75012 out.go:235]   - Configuring RBAC rules ...
	I1204 21:22:44.762937   75012 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1204 21:22:44.770305   75012 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1204 21:22:44.787448   75012 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1204 21:22:44.799071   75012 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1204 21:22:44.809995   75012 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1204 21:22:44.818871   75012 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1204 21:22:45.078465   75012 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1204 21:22:45.505737   75012 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1204 21:22:46.080197   75012 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1204 21:22:46.082632   75012 kubeadm.go:310] 
	I1204 21:22:46.082728   75012 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1204 21:22:46.082738   75012 kubeadm.go:310] 
	I1204 21:22:46.082852   75012 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1204 21:22:46.082877   75012 kubeadm.go:310] 
	I1204 21:22:46.082913   75012 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1204 21:22:46.083002   75012 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1204 21:22:46.083084   75012 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1204 21:22:46.083094   75012 kubeadm.go:310] 
	I1204 21:22:46.083188   75012 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1204 21:22:46.083198   75012 kubeadm.go:310] 
	I1204 21:22:46.083270   75012 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1204 21:22:46.083280   75012 kubeadm.go:310] 
	I1204 21:22:46.083365   75012 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1204 21:22:46.083505   75012 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1204 21:22:46.083603   75012 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1204 21:22:46.083612   75012 kubeadm.go:310] 
	I1204 21:22:46.083722   75012 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1204 21:22:46.083831   75012 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1204 21:22:46.083844   75012 kubeadm.go:310] 
	I1204 21:22:46.083955   75012 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5m2kn8.vv0jgg4evfqo8hls \
	I1204 21:22:46.084090   75012 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 \
	I1204 21:22:46.084132   75012 kubeadm.go:310] 	--control-plane 
	I1204 21:22:46.084143   75012 kubeadm.go:310] 
	I1204 21:22:46.084271   75012 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1204 21:22:46.084285   75012 kubeadm.go:310] 
	I1204 21:22:46.084381   75012 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5m2kn8.vv0jgg4evfqo8hls \
	I1204 21:22:46.084540   75012 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6c412452b2f7a7f77671bb9d85a4b8ec4f4a1a6e222bdddc374e9fac186ce90 
	I1204 21:22:46.085547   75012 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 21:22:46.085585   75012 cni.go:84] Creating CNI manager for ""
	I1204 21:22:46.085601   75012 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 21:22:46.087147   75012 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 21:22:46.088445   75012 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 21:22:46.099655   75012 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1204 21:22:46.118054   75012 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 21:22:46.118167   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:46.118199   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-534766 minikube.k8s.io/updated_at=2024_12_04T21_22_46_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59 minikube.k8s.io/name=no-preload-534766 minikube.k8s.io/primary=true
	I1204 21:22:46.314262   75012 ops.go:34] apiserver oom_adj: -16
	I1204 21:22:46.314459   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:46.814509   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:47.315367   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:47.814575   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:48.314571   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:48.815342   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:49.315465   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:49.814618   75012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 21:22:49.924235   75012 kubeadm.go:1113] duration metric: took 3.806131818s to wait for elevateKubeSystemPrivileges
	I1204 21:22:49.924281   75012 kubeadm.go:394] duration metric: took 4m59.352297592s to StartCluster
	I1204 21:22:49.924304   75012 settings.go:142] acquiring lock: {Name:mk51df5708ef0b8fe125ead566b8d3e857234e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:22:49.924410   75012 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:22:49.926022   75012 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19985-10581/kubeconfig: {Name:mk338cb7deb77a607d0c199d94a556bdfd19bef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 21:22:49.926265   75012 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.174 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 21:22:49.926337   75012 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 21:22:49.926474   75012 addons.go:69] Setting storage-provisioner=true in profile "no-preload-534766"
	I1204 21:22:49.926483   75012 config.go:182] Loaded profile config "no-preload-534766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:22:49.926496   75012 addons.go:234] Setting addon storage-provisioner=true in "no-preload-534766"
	W1204 21:22:49.926508   75012 addons.go:243] addon storage-provisioner should already be in state true
	I1204 21:22:49.926505   75012 addons.go:69] Setting default-storageclass=true in profile "no-preload-534766"
	I1204 21:22:49.926531   75012 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-534766"
	I1204 21:22:49.926546   75012 host.go:66] Checking if "no-preload-534766" exists ...
	I1204 21:22:49.926541   75012 addons.go:69] Setting metrics-server=true in profile "no-preload-534766"
	I1204 21:22:49.926576   75012 addons.go:234] Setting addon metrics-server=true in "no-preload-534766"
	W1204 21:22:49.926590   75012 addons.go:243] addon metrics-server should already be in state true
	I1204 21:22:49.926625   75012 host.go:66] Checking if "no-preload-534766" exists ...
	I1204 21:22:49.926930   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:49.926954   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:49.926970   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:49.926955   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:49.926987   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:49.927051   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:49.927780   75012 out.go:177] * Verifying Kubernetes components...
	I1204 21:22:49.929162   75012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 21:22:49.942741   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46577
	I1204 21:22:49.943289   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:49.943868   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:22:49.943895   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:49.944251   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:49.944864   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:49.944913   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:49.946622   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34645
	I1204 21:22:49.946621   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40019
	I1204 21:22:49.947114   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:49.947241   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:49.947744   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:22:49.947765   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:49.947882   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:22:49.947906   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:49.948103   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:49.948432   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:49.948645   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetState
	I1204 21:22:49.948791   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:49.948837   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:49.952327   75012 addons.go:234] Setting addon default-storageclass=true in "no-preload-534766"
	W1204 21:22:49.952346   75012 addons.go:243] addon default-storageclass should already be in state true
	I1204 21:22:49.952369   75012 host.go:66] Checking if "no-preload-534766" exists ...
	I1204 21:22:49.952601   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:49.952630   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:49.961451   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46229
	I1204 21:22:49.961850   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:49.962443   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:22:49.962464   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:49.962850   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:49.963027   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetState
	I1204 21:22:49.964897   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:22:49.968079   75012 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1204 21:22:49.968412   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34167
	I1204 21:22:49.968752   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34915
	I1204 21:22:49.968941   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:49.969158   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:49.969388   75012 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1204 21:22:49.969407   75012 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1204 21:22:49.969427   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:22:49.969542   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:22:49.969565   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:49.969628   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:22:49.969642   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:49.969957   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:49.970113   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetState
	I1204 21:22:49.970170   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:49.970694   75012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19985-10581/.minikube/bin/docker-machine-driver-kvm2
	I1204 21:22:49.970730   75012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 21:22:49.972032   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:22:49.973317   75012 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 21:22:49.973481   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:22:49.973907   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:22:49.973928   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:22:49.974221   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:22:49.974387   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:22:49.974545   75012 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:22:49.974560   75012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 21:22:49.974577   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:22:49.974673   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:22:49.974849   75012 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:22:49.977139   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:22:49.977453   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:22:49.977472   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:22:49.977620   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:22:49.977765   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:22:49.977906   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:22:49.978085   75012 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:22:50.003630   75012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33713
	I1204 21:22:50.004065   75012 main.go:141] libmachine: () Calling .GetVersion
	I1204 21:22:50.004600   75012 main.go:141] libmachine: Using API Version  1
	I1204 21:22:50.004624   75012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 21:22:50.004954   75012 main.go:141] libmachine: () Calling .GetMachineName
	I1204 21:22:50.005133   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetState
	I1204 21:22:50.006743   75012 main.go:141] libmachine: (no-preload-534766) Calling .DriverName
	I1204 21:22:50.006952   75012 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 21:22:50.006969   75012 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 21:22:50.006986   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHHostname
	I1204 21:22:50.009741   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:22:50.010114   75012 main.go:141] libmachine: (no-preload-534766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:f1:d6", ip: ""} in network mk-no-preload-534766: {Iface:virbr4 ExpiryTime:2024-12-04 22:17:23 +0000 UTC Type:0 Mac:52:54:00:85:f1:d6 Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-534766 Clientid:01:52:54:00:85:f1:d6}
	I1204 21:22:50.010169   75012 main.go:141] libmachine: (no-preload-534766) DBG | domain no-preload-534766 has defined IP address 192.168.61.174 and MAC address 52:54:00:85:f1:d6 in network mk-no-preload-534766
	I1204 21:22:50.010347   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHPort
	I1204 21:22:50.010522   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHKeyPath
	I1204 21:22:50.010699   75012 main.go:141] libmachine: (no-preload-534766) Calling .GetSSHUsername
	I1204 21:22:50.010868   75012 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/no-preload-534766/id_rsa Username:docker}
	I1204 21:22:50.114285   75012 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 21:22:50.136173   75012 node_ready.go:35] waiting up to 6m0s for node "no-preload-534766" to be "Ready" ...
	I1204 21:22:50.146304   75012 node_ready.go:49] node "no-preload-534766" has status "Ready":"True"
	I1204 21:22:50.146333   75012 node_ready.go:38] duration metric: took 10.115051ms for node "no-preload-534766" to be "Ready" ...
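The node readiness probe above can be approximated from the host with kubectl alone (a sketch; the context name matches the profile, as confirmed later in the log):

    kubectl --context no-preload-534766 wait --for=condition=Ready \
      node/no-preload-534766 --timeout=6m0s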
	I1204 21:22:50.146344   75012 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:22:50.156660   75012 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:50.205793   75012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 21:22:50.222880   75012 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1204 21:22:50.222904   75012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1204 21:22:50.259999   75012 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1204 21:22:50.260022   75012 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1204 21:22:50.271653   75012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 21:22:50.295271   75012 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 21:22:50.295301   75012 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1204 21:22:50.371390   75012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 21:22:50.923825   75012 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:50.923850   75012 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:22:50.923889   75012 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:50.923916   75012 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:22:50.924309   75012 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:50.924319   75012 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:50.924327   75012 main.go:141] libmachine: (no-preload-534766) DBG | Closing plugin on server side
	I1204 21:22:50.924328   75012 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:50.924335   75012 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:50.924347   75012 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:50.924354   75012 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:22:50.924357   75012 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:50.924367   75012 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:22:50.924574   75012 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:50.924590   75012 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:50.926209   75012 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:50.926224   75012 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:50.926254   75012 main.go:141] libmachine: (no-preload-534766) DBG | Closing plugin on server side
	I1204 21:22:50.943266   75012 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:50.943283   75012 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:22:50.943613   75012 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:50.943626   75012 main.go:141] libmachine: (no-preload-534766) DBG | Closing plugin on server side
	I1204 21:22:50.943633   75012 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:51.434449   75012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.063018778s)
	I1204 21:22:51.434501   75012 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:51.434516   75012 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:22:51.434935   75012 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:51.434961   75012 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:51.434973   75012 main.go:141] libmachine: Making call to close driver server
	I1204 21:22:51.434982   75012 main.go:141] libmachine: (no-preload-534766) Calling .Close
	I1204 21:22:51.434989   75012 main.go:141] libmachine: (no-preload-534766) DBG | Closing plugin on server side
	I1204 21:22:51.435279   75012 main.go:141] libmachine: (no-preload-534766) DBG | Closing plugin on server side
	I1204 21:22:51.435314   75012 main.go:141] libmachine: Successfully made call to close driver server
	I1204 21:22:51.435327   75012 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 21:22:51.435338   75012 addons.go:475] Verifying addon metrics-server=true in "no-preload-534766"
	I1204 21:22:51.437110   75012 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1204 21:22:51.438430   75012 addons.go:510] duration metric: took 1.51209932s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
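The same three addons could also be toggled from the host with the minikube CLI rather than through the in-process addon manager (illustrative only):

    minikube -p no-preload-534766 addons enable storage-provisioner
    minikube -p no-preload-534766 addons enable default-storageclass
    minikube -p no-preload-534766 addons enable metrics-server
    minikube -p no-preload-534766 addons list    # verify the enabled set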
	I1204 21:22:52.163208   75012 pod_ready.go:103] pod "etcd-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:54.166268   75012 pod_ready.go:103] pod "etcd-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:55.663847   75012 pod_ready.go:93] pod "etcd-no-preload-534766" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:55.663873   75012 pod_ready.go:82] duration metric: took 5.507184169s for pod "etcd-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:55.663883   75012 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:57.669991   75012 pod_ready.go:103] pod "kube-apiserver-no-preload-534766" in "kube-system" namespace has status "Ready":"False"
	I1204 21:22:58.669891   75012 pod_ready.go:93] pod "kube-apiserver-no-preload-534766" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:58.669913   75012 pod_ready.go:82] duration metric: took 3.006024495s for pod "kube-apiserver-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:58.669923   75012 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:58.674408   75012 pod_ready.go:93] pod "kube-controller-manager-no-preload-534766" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:58.674431   75012 pod_ready.go:82] duration metric: took 4.502433ms for pod "kube-controller-manager-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:58.674441   75012 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:58.678736   75012 pod_ready.go:93] pod "kube-scheduler-no-preload-534766" in "kube-system" namespace has status "Ready":"True"
	I1204 21:22:58.678761   75012 pod_ready.go:82] duration metric: took 4.313122ms for pod "kube-scheduler-no-preload-534766" in "kube-system" namespace to be "Ready" ...
	I1204 21:22:58.678771   75012 pod_ready.go:39] duration metric: took 8.532413995s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 21:22:58.678791   75012 api_server.go:52] waiting for apiserver process to appear ...
	I1204 21:22:58.678847   75012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 21:22:58.695623   75012 api_server.go:72] duration metric: took 8.769328765s to wait for apiserver process to appear ...
	I1204 21:22:58.695654   75012 api_server.go:88] waiting for apiserver healthz status ...
	I1204 21:22:58.695675   75012 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I1204 21:22:58.699892   75012 api_server.go:279] https://192.168.61.174:8443/healthz returned 200:
	ok
	I1204 21:22:58.700759   75012 api_server.go:141] control plane version: v1.31.2
	I1204 21:22:58.700776   75012 api_server.go:131] duration metric: took 5.115741ms to wait for apiserver health ...
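The healthz probe is a plain HTTPS GET against the apiserver; it can be reproduced from the host (sketch, skipping certificate verification for brevity):

    curl -k https://192.168.61.174:8443/healthz    # expect: ok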
	I1204 21:22:58.700783   75012 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 21:22:58.705822   75012 system_pods.go:59] 9 kube-system pods found
	I1204 21:22:58.705845   75012 system_pods.go:61] "coredns-7c65d6cfc9-9llkt" [adc8b2dd-be84-4314-ae3c-cfe94cc78489] Running
	I1204 21:22:58.705850   75012 system_pods.go:61] "coredns-7c65d6cfc9-zq88f" [b4b818bf-71d4-4522-8d3f-15c878eb7e37] Running
	I1204 21:22:58.705854   75012 system_pods.go:61] "etcd-no-preload-534766" [dfebd8ce-bf78-4219-a860-7e0275651a27] Running
	I1204 21:22:58.705858   75012 system_pods.go:61] "kube-apiserver-no-preload-534766" [6d8632fe-4a7d-48f0-9de5-bbc8efa027cd] Running
	I1204 21:22:58.705862   75012 system_pods.go:61] "kube-controller-manager-no-preload-534766" [1fcb311c-17ee-40ab-8126-3f9aeb565c23] Running
	I1204 21:22:58.705865   75012 system_pods.go:61] "kube-proxy-z2n69" [ea030ab5-1808-4037-b153-e751d66f3882] Running
	I1204 21:22:58.705870   75012 system_pods.go:61] "kube-scheduler-no-preload-534766" [ee51023a-795d-49f9-ae03-535038decf43] Running
	I1204 21:22:58.705876   75012 system_pods.go:61] "metrics-server-6867b74b74-24lj8" [1e4467c4-301a-4820-ab89-e1f0ba78f62d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:22:58.705883   75012 system_pods.go:61] "storage-provisioner" [38fa420a-4372-41b4-9853-64796baa65d9] Running
	I1204 21:22:58.705888   75012 system_pods.go:74] duration metric: took 5.100414ms to wait for pod list to return data ...
	I1204 21:22:58.705897   75012 default_sa.go:34] waiting for default service account to be created ...
	I1204 21:22:58.708729   75012 default_sa.go:45] found service account: "default"
	I1204 21:22:58.708746   75012 default_sa.go:55] duration metric: took 2.844325ms for default service account to be created ...
	I1204 21:22:58.708753   75012 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 21:22:58.713584   75012 system_pods.go:86] 9 kube-system pods found
	I1204 21:22:58.713605   75012 system_pods.go:89] "coredns-7c65d6cfc9-9llkt" [adc8b2dd-be84-4314-ae3c-cfe94cc78489] Running
	I1204 21:22:58.713610   75012 system_pods.go:89] "coredns-7c65d6cfc9-zq88f" [b4b818bf-71d4-4522-8d3f-15c878eb7e37] Running
	I1204 21:22:58.713614   75012 system_pods.go:89] "etcd-no-preload-534766" [dfebd8ce-bf78-4219-a860-7e0275651a27] Running
	I1204 21:22:58.713617   75012 system_pods.go:89] "kube-apiserver-no-preload-534766" [6d8632fe-4a7d-48f0-9de5-bbc8efa027cd] Running
	I1204 21:22:58.713623   75012 system_pods.go:89] "kube-controller-manager-no-preload-534766" [1fcb311c-17ee-40ab-8126-3f9aeb565c23] Running
	I1204 21:22:58.713627   75012 system_pods.go:89] "kube-proxy-z2n69" [ea030ab5-1808-4037-b153-e751d66f3882] Running
	I1204 21:22:58.713630   75012 system_pods.go:89] "kube-scheduler-no-preload-534766" [ee51023a-795d-49f9-ae03-535038decf43] Running
	I1204 21:22:58.713636   75012 system_pods.go:89] "metrics-server-6867b74b74-24lj8" [1e4467c4-301a-4820-ab89-e1f0ba78f62d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1204 21:22:58.713640   75012 system_pods.go:89] "storage-provisioner" [38fa420a-4372-41b4-9853-64796baa65d9] Running
	I1204 21:22:58.713649   75012 system_pods.go:126] duration metric: took 4.892413ms to wait for k8s-apps to be running ...
	I1204 21:22:58.713655   75012 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 21:22:58.713694   75012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:22:58.727642   75012 system_svc.go:56] duration metric: took 13.980011ms WaitForService to wait for kubelet
	I1204 21:22:58.727667   75012 kubeadm.go:582] duration metric: took 8.80137456s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 21:22:58.727683   75012 node_conditions.go:102] verifying NodePressure condition ...
	I1204 21:22:58.730401   75012 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 21:22:58.730424   75012 node_conditions.go:123] node cpu capacity is 2
	I1204 21:22:58.730437   75012 node_conditions.go:105] duration metric: took 2.748662ms to run NodePressure ...
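The capacity figures used for the NodePressure check come straight from the node's status and can be read back with kubectl (sketch):

    kubectl --context no-preload-534766 get node no-preload-534766 \
      -o jsonpath='{.status.capacity}{"\n"}'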
	I1204 21:22:58.730450   75012 start.go:241] waiting for startup goroutines ...
	I1204 21:22:58.730460   75012 start.go:246] waiting for cluster config update ...
	I1204 21:22:58.730472   75012 start.go:255] writing updated cluster config ...
	I1204 21:22:58.730773   75012 ssh_runner.go:195] Run: rm -f paused
	I1204 21:22:58.776977   75012 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1204 21:22:58.778544   75012 out.go:177] * Done! kubectl is now configured to use "no-preload-534766" cluster and "default" namespace by default
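With the profile started, a quick sanity check of the resulting kubeconfig (sketch) would be:

    kubectl config current-context    # expect: no-preload-534766
    kubectl get pods -A               # kube-system pods plus the pending metrics-server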
	I1204 21:23:04.631416   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:23:04.631710   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:23:04.631725   75464 kubeadm.go:310] 
	I1204 21:23:04.631799   75464 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1204 21:23:04.631878   75464 kubeadm.go:310] 		timed out waiting for the condition
	I1204 21:23:04.631890   75464 kubeadm.go:310] 
	I1204 21:23:04.631961   75464 kubeadm.go:310] 	This error is likely caused by:
	I1204 21:23:04.632036   75464 kubeadm.go:310] 		- The kubelet is not running
	I1204 21:23:04.632198   75464 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1204 21:23:04.632215   75464 kubeadm.go:310] 
	I1204 21:23:04.632383   75464 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1204 21:23:04.632461   75464 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1204 21:23:04.632516   75464 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1204 21:23:04.632528   75464 kubeadm.go:310] 
	I1204 21:23:04.632675   75464 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1204 21:23:04.632796   75464 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1204 21:23:04.632815   75464 kubeadm.go:310] 
	I1204 21:23:04.632974   75464 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1204 21:23:04.633074   75464 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1204 21:23:04.633176   75464 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1204 21:23:04.633304   75464 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1204 21:23:04.633322   75464 kubeadm.go:310] 
	I1204 21:23:04.634981   75464 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 21:23:04.635061   75464 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1204 21:23:04.635118   75464 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1204 21:23:04.635222   75464 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
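The troubleshooting steps embedded in this kubeadm output can be run by hand inside the VM; a sketch, with the profile name and container ID left as placeholders, is:

    minikube ssh -p <profile>
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet | tail -n 100
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs <CONTAINERID>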
	
	I1204 21:23:04.635272   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1204 21:23:05.103010   75464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 21:23:05.116784   75464 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 21:23:05.126269   75464 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 21:23:05.126290   75464 kubeadm.go:157] found existing configuration files:
	
	I1204 21:23:05.126331   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 21:23:05.134867   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 21:23:05.134919   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 21:23:05.143682   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 21:23:05.151701   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 21:23:05.151766   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 21:23:05.160033   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 21:23:05.168125   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 21:23:05.168175   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 21:23:05.176976   75464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 21:23:05.185549   75464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 21:23:05.185592   75464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
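The four grep-then-remove steps above follow one pattern: keep a config file only if it already points at control-plane.minikube.internal:8443. A compact equivalent (sketch) is:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done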
	I1204 21:23:05.194156   75464 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 21:23:05.394966   75464 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 21:25:01.433781   75464 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1204 21:25:01.433941   75464 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1204 21:25:01.434011   75464 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1204 21:25:01.434069   75464 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 21:25:01.434170   75464 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 21:25:01.434315   75464 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 21:25:01.434431   75464 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1204 21:25:01.434514   75464 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 21:25:01.436334   75464 out.go:235]   - Generating certificates and keys ...
	I1204 21:25:01.436408   75464 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 21:25:01.436482   75464 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 21:25:01.436550   75464 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1204 21:25:01.436644   75464 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1204 21:25:01.436745   75464 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1204 21:25:01.436819   75464 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1204 21:25:01.436885   75464 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1204 21:25:01.436942   75464 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1204 21:25:01.437004   75464 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1204 21:25:01.437068   75464 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1204 21:25:01.437101   75464 kubeadm.go:310] [certs] Using the existing "sa" key
	I1204 21:25:01.437150   75464 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 21:25:01.437193   75464 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 21:25:01.437239   75464 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 21:25:01.437309   75464 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 21:25:01.437370   75464 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 21:25:01.437458   75464 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 21:25:01.437568   75464 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 21:25:01.437636   75464 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 21:25:01.437701   75464 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 21:25:01.439149   75464 out.go:235]   - Booting up control plane ...
	I1204 21:25:01.439251   75464 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 21:25:01.439347   75464 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 21:25:01.439457   75464 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 21:25:01.439531   75464 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 21:25:01.439672   75464 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1204 21:25:01.439736   75464 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1204 21:25:01.439798   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:25:01.439966   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:25:01.440044   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:25:01.440205   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:25:01.440259   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:25:01.440487   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:25:01.440578   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:25:01.440768   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:25:01.440835   75464 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1204 21:25:01.440991   75464 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1204 21:25:01.441006   75464 kubeadm.go:310] 
	I1204 21:25:01.441043   75464 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1204 21:25:01.441078   75464 kubeadm.go:310] 		timed out waiting for the condition
	I1204 21:25:01.441084   75464 kubeadm.go:310] 
	I1204 21:25:01.441114   75464 kubeadm.go:310] 	This error is likely caused by:
	I1204 21:25:01.441143   75464 kubeadm.go:310] 		- The kubelet is not running
	I1204 21:25:01.441233   75464 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1204 21:25:01.441242   75464 kubeadm.go:310] 
	I1204 21:25:01.441335   75464 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1204 21:25:01.441369   75464 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1204 21:25:01.441403   75464 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1204 21:25:01.441410   75464 kubeadm.go:310] 
	I1204 21:25:01.441503   75464 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1204 21:25:01.441602   75464 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1204 21:25:01.441610   75464 kubeadm.go:310] 
	I1204 21:25:01.441705   75464 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1204 21:25:01.441779   75464 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1204 21:25:01.441857   75464 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1204 21:25:01.441934   75464 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1204 21:25:01.441961   75464 kubeadm.go:310] 
	I1204 21:25:01.442011   75464 kubeadm.go:394] duration metric: took 8m2.105750462s to StartCluster
	I1204 21:25:01.442050   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 21:25:01.442119   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 21:25:01.484552   75464 cri.go:89] found id: ""
	I1204 21:25:01.484582   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.484606   75464 logs.go:284] No container was found matching "kube-apiserver"
	I1204 21:25:01.484614   75464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 21:25:01.484681   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 21:25:01.517972   75464 cri.go:89] found id: ""
	I1204 21:25:01.517999   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.518007   75464 logs.go:284] No container was found matching "etcd"
	I1204 21:25:01.518013   75464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 21:25:01.518078   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 21:25:01.555068   75464 cri.go:89] found id: ""
	I1204 21:25:01.555096   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.555104   75464 logs.go:284] No container was found matching "coredns"
	I1204 21:25:01.555110   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 21:25:01.555163   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 21:25:01.595425   75464 cri.go:89] found id: ""
	I1204 21:25:01.595456   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.595478   75464 logs.go:284] No container was found matching "kube-scheduler"
	I1204 21:25:01.595486   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 21:25:01.595553   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 21:25:01.634608   75464 cri.go:89] found id: ""
	I1204 21:25:01.634638   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.634648   75464 logs.go:284] No container was found matching "kube-proxy"
	I1204 21:25:01.634656   75464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 21:25:01.634721   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 21:25:01.668685   75464 cri.go:89] found id: ""
	I1204 21:25:01.668724   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.668737   75464 logs.go:284] No container was found matching "kube-controller-manager"
	I1204 21:25:01.668746   75464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 21:25:01.668810   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 21:25:01.701497   75464 cri.go:89] found id: ""
	I1204 21:25:01.701531   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.701543   75464 logs.go:284] No container was found matching "kindnet"
	I1204 21:25:01.701550   75464 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1204 21:25:01.701612   75464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1204 21:25:01.735347   75464 cri.go:89] found id: ""
	I1204 21:25:01.735401   75464 logs.go:282] 0 containers: []
	W1204 21:25:01.735413   75464 logs.go:284] No container was found matching "kubernetes-dashboard"
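The per-component container sweep above can be reproduced on the node with a single loop over the same names (sketch):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      echo "== $name =="
      sudo crictl ps -a --quiet --name="$name"
    done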
	I1204 21:25:01.735429   75464 logs.go:123] Gathering logs for kubelet ...
	I1204 21:25:01.735448   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 21:25:01.785951   75464 logs.go:123] Gathering logs for dmesg ...
	I1204 21:25:01.785994   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 21:25:01.800795   75464 logs.go:123] Gathering logs for describe nodes ...
	I1204 21:25:01.800822   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1204 21:25:01.878636   75464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1204 21:25:01.878663   75464 logs.go:123] Gathering logs for CRI-O ...
	I1204 21:25:01.878675   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 21:25:01.982526   75464 logs.go:123] Gathering logs for container status ...
	I1204 21:25:01.982563   75464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
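The log-gathering commands above can also be captured to files in one pass when collecting diagnostics manually (sketch, reusing the exact flags from the log):

    sudo journalctl -u kubelet -n 400 > kubelet.log
    sudo journalctl -u crio -n 400 > crio.log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
    sudo crictl ps -a > containers.log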
	W1204 21:25:02.037006   75464 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1204 21:25:02.037075   75464 out.go:270] * 
	W1204 21:25:02.037160   75464 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1204 21:25:02.037181   75464 out.go:270] * 
	W1204 21:25:02.038380   75464 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 21:25:02.041871   75464 out.go:201] 
	W1204 21:25:02.042973   75464 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1204 21:25:02.043035   75464 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1204 21:25:02.043065   75464 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1204 21:25:02.044498   75464 out.go:201] 
	
	
	==> CRI-O <==
	Dec 04 21:36:16 old-k8s-version-082859 crio[624]: time="2024-12-04 21:36:16.758538288Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348176758512143,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8f104e24-dd92-4140-bf5a-8220eda63931 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:36:16 old-k8s-version-082859 crio[624]: time="2024-12-04 21:36:16.759104595Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=92473a8a-9330-400b-b8c2-65729bf4bba0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:36:16 old-k8s-version-082859 crio[624]: time="2024-12-04 21:36:16.759207246Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=92473a8a-9330-400b-b8c2-65729bf4bba0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:36:16 old-k8s-version-082859 crio[624]: time="2024-12-04 21:36:16.759250409Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=92473a8a-9330-400b-b8c2-65729bf4bba0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:36:16 old-k8s-version-082859 crio[624]: time="2024-12-04 21:36:16.789575689Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=391b49ac-185d-454d-8847-9b4ef4e4d404 name=/runtime.v1.RuntimeService/Version
	Dec 04 21:36:16 old-k8s-version-082859 crio[624]: time="2024-12-04 21:36:16.789647441Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=391b49ac-185d-454d-8847-9b4ef4e4d404 name=/runtime.v1.RuntimeService/Version
	Dec 04 21:36:16 old-k8s-version-082859 crio[624]: time="2024-12-04 21:36:16.790843057Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=784d6d60-84c4-4c1b-9695-2479deb1a1a7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:36:16 old-k8s-version-082859 crio[624]: time="2024-12-04 21:36:16.791292369Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348176791265028,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=784d6d60-84c4-4c1b-9695-2479deb1a1a7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:36:16 old-k8s-version-082859 crio[624]: time="2024-12-04 21:36:16.791967571Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9a1c3eb5-0455-48d1-ae78-e178cc45ed80 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:36:16 old-k8s-version-082859 crio[624]: time="2024-12-04 21:36:16.792015789Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9a1c3eb5-0455-48d1-ae78-e178cc45ed80 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:36:16 old-k8s-version-082859 crio[624]: time="2024-12-04 21:36:16.792093237Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9a1c3eb5-0455-48d1-ae78-e178cc45ed80 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:36:16 old-k8s-version-082859 crio[624]: time="2024-12-04 21:36:16.827496317Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7f57f08a-c95a-4a87-8ec2-e7fac9692564 name=/runtime.v1.RuntimeService/Version
	Dec 04 21:36:16 old-k8s-version-082859 crio[624]: time="2024-12-04 21:36:16.827570066Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7f57f08a-c95a-4a87-8ec2-e7fac9692564 name=/runtime.v1.RuntimeService/Version
	Dec 04 21:36:16 old-k8s-version-082859 crio[624]: time="2024-12-04 21:36:16.829084684Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=12869ea0-3dc6-4b1c-a92d-44070b7c4f86 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:36:16 old-k8s-version-082859 crio[624]: time="2024-12-04 21:36:16.829617777Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348176829589559,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=12869ea0-3dc6-4b1c-a92d-44070b7c4f86 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:36:16 old-k8s-version-082859 crio[624]: time="2024-12-04 21:36:16.830258455Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0e87bf5d-9acf-40cb-9340-d6c99234bb44 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:36:16 old-k8s-version-082859 crio[624]: time="2024-12-04 21:36:16.830316714Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0e87bf5d-9acf-40cb-9340-d6c99234bb44 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:36:16 old-k8s-version-082859 crio[624]: time="2024-12-04 21:36:16.830349622Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0e87bf5d-9acf-40cb-9340-d6c99234bb44 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:36:16 old-k8s-version-082859 crio[624]: time="2024-12-04 21:36:16.861816587Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0577be77-8389-442e-a728-9735c7259e69 name=/runtime.v1.RuntimeService/Version
	Dec 04 21:36:16 old-k8s-version-082859 crio[624]: time="2024-12-04 21:36:16.861898316Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0577be77-8389-442e-a728-9735c7259e69 name=/runtime.v1.RuntimeService/Version
	Dec 04 21:36:16 old-k8s-version-082859 crio[624]: time="2024-12-04 21:36:16.863308011Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=82913b5a-be53-468b-86fb-942f0f4ec451 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:36:16 old-k8s-version-082859 crio[624]: time="2024-12-04 21:36:16.863703111Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733348176863680045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=82913b5a-be53-468b-86fb-942f0f4ec451 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 04 21:36:16 old-k8s-version-082859 crio[624]: time="2024-12-04 21:36:16.864228562Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=19ad5767-3eef-4281-b80e-3662d376fd54 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:36:16 old-k8s-version-082859 crio[624]: time="2024-12-04 21:36:16.864283772Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=19ad5767-3eef-4281-b80e-3662d376fd54 name=/runtime.v1.RuntimeService/ListContainers
	Dec 04 21:36:16 old-k8s-version-082859 crio[624]: time="2024-12-04 21:36:16.864315580Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=19ad5767-3eef-4281-b80e-3662d376fd54 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 4 21:16] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.063766] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039535] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.986133] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.929597] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.577556] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +11.172483] systemd-fstab-generator[551]: Ignoring "noauto" option for root device
	[  +0.056938] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054201] systemd-fstab-generator[563]: Ignoring "noauto" option for root device
	[  +0.210243] systemd-fstab-generator[577]: Ignoring "noauto" option for root device
	[  +0.123977] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.239654] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +6.083108] systemd-fstab-generator[875]: Ignoring "noauto" option for root device
	[  +0.059229] kauditd_printk_skb: 130 callbacks suppressed
	[Dec 4 21:17] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[  +9.469298] kauditd_printk_skb: 46 callbacks suppressed
	[Dec 4 21:21] systemd-fstab-generator[5120]: Ignoring "noauto" option for root device
	[Dec 4 21:23] systemd-fstab-generator[5401]: Ignoring "noauto" option for root device
	[  +0.064984] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 21:36:17 up 19 min,  0 users,  load average: 0.00, 0.00, 0.03
	Linux old-k8s-version-082859 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Dec 04 21:36:14 old-k8s-version-082859 kubelet[6873]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Dec 04 21:36:14 old-k8s-version-082859 kubelet[6873]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Dec 04 21:36:14 old-k8s-version-082859 kubelet[6873]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Dec 04 21:36:14 old-k8s-version-082859 kubelet[6873]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0007406f0)
	Dec 04 21:36:14 old-k8s-version-082859 kubelet[6873]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Dec 04 21:36:14 old-k8s-version-082859 kubelet[6873]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000d45ef0, 0x4f0ac20, 0xc0008ff4a0, 0x1, 0xc0001000c0)
	Dec 04 21:36:14 old-k8s-version-082859 kubelet[6873]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Dec 04 21:36:14 old-k8s-version-082859 kubelet[6873]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000254540, 0xc0001000c0)
	Dec 04 21:36:14 old-k8s-version-082859 kubelet[6873]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Dec 04 21:36:14 old-k8s-version-082859 kubelet[6873]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Dec 04 21:36:14 old-k8s-version-082859 kubelet[6873]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Dec 04 21:36:14 old-k8s-version-082859 kubelet[6873]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000c1f390, 0xc0001d4080)
	Dec 04 21:36:14 old-k8s-version-082859 kubelet[6873]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Dec 04 21:36:14 old-k8s-version-082859 kubelet[6873]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Dec 04 21:36:14 old-k8s-version-082859 kubelet[6873]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Dec 04 21:36:14 old-k8s-version-082859 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Dec 04 21:36:14 old-k8s-version-082859 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 04 21:36:14 old-k8s-version-082859 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 137.
	Dec 04 21:36:14 old-k8s-version-082859 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Dec 04 21:36:14 old-k8s-version-082859 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 04 21:36:14 old-k8s-version-082859 kubelet[6882]: I1204 21:36:14.817567    6882 server.go:416] Version: v1.20.0
	Dec 04 21:36:14 old-k8s-version-082859 kubelet[6882]: I1204 21:36:14.817831    6882 server.go:837] Client rotation is on, will bootstrap in background
	Dec 04 21:36:14 old-k8s-version-082859 kubelet[6882]: I1204 21:36:14.819783    6882 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Dec 04 21:36:14 old-k8s-version-082859 kubelet[6882]: W1204 21:36:14.820827    6882 manager.go:159] Cannot detect current cgroup on cgroup v2
	Dec 04 21:36:14 old-k8s-version-082859 kubelet[6882]: I1204 21:36:14.820856    6882 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-082859 -n old-k8s-version-082859
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-082859 -n old-k8s-version-082859: exit status 2 (235.450294ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-082859" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (129.70s)
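
Editor's note: the captured logs above repeatedly show the kubelet failing its health check and minikube suggesting a cgroup-driver override. For anyone reproducing this failure locally, the following is an illustrative triage sequence assembled from the hints printed in the log itself (the profile name old-k8s-version-082859 comes from this report); it is a sketch for manual debugging, not part of the recorded test run.

	# Retry the start with the override suggested by the error output above
	minikube start -p old-k8s-version-082859 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd

	# If the control plane still does not come up, inspect the kubelet on the node
	minikube ssh -p old-k8s-version-082859 "sudo systemctl status kubelet"
	minikube ssh -p old-k8s-version-082859 "sudo journalctl -xeu kubelet"

	# List any control-plane containers that crashed under cri-o (per the crictl hint in the log)
	minikube ssh -p old-k8s-version-082859 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a"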

                                                
                                    

Test pass (243/314)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 9.01
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.2/json-events 4.75
13 TestDownloadOnly/v1.31.2/preload-exists 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.06
18 TestDownloadOnly/v1.31.2/DeleteAll 0.13
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.59
22 TestOffline 54.78
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 126.06
31 TestAddons/serial/GCPAuth/Namespaces 0.14
32 TestAddons/serial/GCPAuth/FakeCredentials 9.49
35 TestAddons/parallel/Registry 16.02
37 TestAddons/parallel/InspektorGadget 11.11
40 TestAddons/parallel/CSI 55.79
41 TestAddons/parallel/Headlamp 21.07
42 TestAddons/parallel/CloudSpanner 6.63
43 TestAddons/parallel/LocalPath 53.26
44 TestAddons/parallel/NvidiaDevicePlugin 7.27
45 TestAddons/parallel/Yakd 11.85
48 TestCertOptions 86.06
49 TestCertExpiration 282.71
51 TestForceSystemdFlag 86.52
52 TestForceSystemdEnv 90.86
54 TestKVMDriverInstallOrUpdate 4.14
58 TestErrorSpam/setup 45.45
59 TestErrorSpam/start 0.33
60 TestErrorSpam/status 0.71
61 TestErrorSpam/pause 1.51
62 TestErrorSpam/unpause 1.67
63 TestErrorSpam/stop 4.58
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 49.36
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 53.54
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.17
75 TestFunctional/serial/CacheCmd/cache/add_local 1.9
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.66
80 TestFunctional/serial/CacheCmd/cache/delete 0.09
81 TestFunctional/serial/MinikubeKubectlCmd 0.1
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
83 TestFunctional/serial/ExtraConfig 34.65
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.38
86 TestFunctional/serial/LogsFileCmd 1.32
87 TestFunctional/serial/InvalidService 4.6
89 TestFunctional/parallel/ConfigCmd 0.34
90 TestFunctional/parallel/DashboardCmd 15.41
91 TestFunctional/parallel/DryRun 0.29
92 TestFunctional/parallel/InternationalLanguage 0.15
93 TestFunctional/parallel/StatusCmd 1.11
97 TestFunctional/parallel/ServiceCmdConnect 8.55
98 TestFunctional/parallel/AddonsCmd 0.13
99 TestFunctional/parallel/PersistentVolumeClaim 42.05
101 TestFunctional/parallel/SSHCmd 0.4
102 TestFunctional/parallel/CpCmd 1.35
103 TestFunctional/parallel/MySQL 24.08
104 TestFunctional/parallel/FileSync 0.22
105 TestFunctional/parallel/CertSync 1.58
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.4
113 TestFunctional/parallel/License 0.19
114 TestFunctional/parallel/ServiceCmd/DeployApp 11.19
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.36
116 TestFunctional/parallel/ProfileCmd/profile_list 0.37
117 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
118 TestFunctional/parallel/MountCmd/any-port 8.76
119 TestFunctional/parallel/MountCmd/specific-port 1.91
120 TestFunctional/parallel/ServiceCmd/List 1.01
121 TestFunctional/parallel/MountCmd/VerifyCleanup 1.48
122 TestFunctional/parallel/ServiceCmd/JSONOutput 0.52
123 TestFunctional/parallel/ServiceCmd/HTTPS 0.29
124 TestFunctional/parallel/ServiceCmd/Format 0.31
125 TestFunctional/parallel/ServiceCmd/URL 0.31
126 TestFunctional/parallel/Version/short 0.05
127 TestFunctional/parallel/Version/components 0.55
128 TestFunctional/parallel/ImageCommands/ImageListShort 0.47
129 TestFunctional/parallel/ImageCommands/ImageListTable 0.57
130 TestFunctional/parallel/ImageCommands/ImageListJson 0.51
131 TestFunctional/parallel/ImageCommands/ImageListYaml 0.47
132 TestFunctional/parallel/ImageCommands/ImageBuild 9.78
133 TestFunctional/parallel/ImageCommands/Setup 1.72
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.3
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.31
136 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
137 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
138 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
139 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.47
140 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.55
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.58
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.87
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.74
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.01
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 196.31
160 TestMultiControlPlane/serial/DeployApp 6.82
161 TestMultiControlPlane/serial/PingHostFromPods 1.16
162 TestMultiControlPlane/serial/AddWorkerNode 55.73
163 TestMultiControlPlane/serial/NodeLabels 0.06
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.87
165 TestMultiControlPlane/serial/CopyFile 12.57
171 TestMultiControlPlane/serial/DeleteSecondaryNode 16.5
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.61
174 TestMultiControlPlane/serial/RestartCluster 326.09
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.7
176 TestMultiControlPlane/serial/AddSecondaryNode 84.68
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.83
181 TestJSONOutput/start/Command 56.1
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.65
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.61
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 6.66
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.19
209 TestMainNoArgs 0.04
210 TestMinikubeProfile 91.75
213 TestMountStart/serial/StartWithMountFirst 27.35
214 TestMountStart/serial/VerifyMountFirst 0.37
215 TestMountStart/serial/StartWithMountSecond 24.16
216 TestMountStart/serial/VerifyMountSecond 0.37
217 TestMountStart/serial/DeleteFirst 0.68
218 TestMountStart/serial/VerifyMountPostDelete 0.38
219 TestMountStart/serial/Stop 1.28
220 TestMountStart/serial/RestartStopped 21.13
221 TestMountStart/serial/VerifyMountPostStop 0.37
224 TestMultiNode/serial/FreshStart2Nodes 116.12
225 TestMultiNode/serial/DeployApp2Nodes 6.58
226 TestMultiNode/serial/PingHostFrom2Pods 0.74
227 TestMultiNode/serial/AddNode 47.27
228 TestMultiNode/serial/MultiNodeLabels 0.06
229 TestMultiNode/serial/ProfileList 0.55
230 TestMultiNode/serial/CopyFile 6.98
231 TestMultiNode/serial/StopNode 2.2
232 TestMultiNode/serial/StartAfterStop 38.16
234 TestMultiNode/serial/DeleteNode 2.18
236 TestMultiNode/serial/RestartMultiNode 180.73
237 TestMultiNode/serial/ValidateNameConflict 44.51
244 TestScheduledStopUnix 112.27
248 TestRunningBinaryUpgrade 174.33
253 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
254 TestNoKubernetes/serial/StartWithK8s 112.39
255 TestNoKubernetes/serial/StartWithStopK8s 40.25
256 TestNoKubernetes/serial/Start 46.53
257 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
258 TestNoKubernetes/serial/ProfileList 1.78
259 TestNoKubernetes/serial/Stop 1.27
260 TestNoKubernetes/serial/StartNoArgs 25.22
261 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
262 TestStoppedBinaryUpgrade/Setup 0.54
263 TestStoppedBinaryUpgrade/Upgrade 92.83
271 TestNetworkPlugins/group/false 2.99
283 TestPause/serial/Start 75.5
284 TestStoppedBinaryUpgrade/MinikubeLogs 0.8
285 TestNetworkPlugins/group/auto/Start 79.14
287 TestNetworkPlugins/group/custom-flannel/Start 82.44
288 TestNetworkPlugins/group/auto/KubeletFlags 0.19
289 TestNetworkPlugins/group/auto/NetCatPod 11.23
290 TestNetworkPlugins/group/auto/DNS 26.14
291 TestNetworkPlugins/group/auto/Localhost 0.12
292 TestNetworkPlugins/group/auto/HairPin 0.13
293 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
294 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.27
295 TestNetworkPlugins/group/kindnet/Start 63.63
296 TestNetworkPlugins/group/custom-flannel/DNS 0.2
297 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
298 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
299 TestNetworkPlugins/group/flannel/Start 80.09
300 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
301 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
302 TestNetworkPlugins/group/kindnet/NetCatPod 11.26
303 TestNetworkPlugins/group/enable-default-cni/Start 59.75
304 TestNetworkPlugins/group/kindnet/DNS 0.17
305 TestNetworkPlugins/group/kindnet/Localhost 0.13
306 TestNetworkPlugins/group/kindnet/HairPin 0.13
307 TestNetworkPlugins/group/calico/Start 80.07
308 TestNetworkPlugins/group/flannel/ControllerPod 6.01
309 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
310 TestNetworkPlugins/group/flannel/NetCatPod 11.92
311 TestNetworkPlugins/group/flannel/DNS 0.16
312 TestNetworkPlugins/group/flannel/Localhost 0.12
313 TestNetworkPlugins/group/flannel/HairPin 0.12
314 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
315 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.27
316 TestNetworkPlugins/group/bridge/Start 55.56
317 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
318 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
319 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
322 TestNetworkPlugins/group/calico/ControllerPod 6.01
323 TestNetworkPlugins/group/calico/KubeletFlags 0.2
324 TestNetworkPlugins/group/calico/NetCatPod 12.25
325 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
326 TestNetworkPlugins/group/bridge/NetCatPod 14.27
327 TestNetworkPlugins/group/calico/DNS 0.21
328 TestNetworkPlugins/group/calico/Localhost 0.16
329 TestNetworkPlugins/group/calico/HairPin 0.15
330 TestNetworkPlugins/group/bridge/DNS 16.04
332 TestStartStop/group/no-preload/serial/FirstStart 104.4
333 TestNetworkPlugins/group/bridge/Localhost 0.12
334 TestNetworkPlugins/group/bridge/HairPin 0.13
336 TestStartStop/group/embed-certs/serial/FirstStart 87.01
337 TestStartStop/group/no-preload/serial/DeployApp 9.31
339 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 85.96
340 TestStartStop/group/embed-certs/serial/DeployApp 10.32
341 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.09
343 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.02
345 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.26
346 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.95
352 TestStartStop/group/no-preload/serial/SecondStart 687.06
353 TestStartStop/group/embed-certs/serial/SecondStart 569.04
354 TestStartStop/group/old-k8s-version/serial/Stop 5.46
355 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
358 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 566.21
368 TestStartStop/group/newest-cni/serial/FirstStart 48.64
369 TestStartStop/group/newest-cni/serial/DeployApp 0
370 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.22
371 TestStartStop/group/newest-cni/serial/Stop 11.3
372 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
373 TestStartStop/group/newest-cni/serial/SecondStart 71.24
374 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
375 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
376 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.21
377 TestStartStop/group/newest-cni/serial/Pause 2.32
x
+
TestDownloadOnly/v1.20.0/json-events (9.01s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-833018 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-833018 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (9.005505644s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (9.01s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1204 19:52:39.942811   17743 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1204 19:52:39.942916   17743 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-833018
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-833018: exit status 85 (59.599041ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-833018 | jenkins | v1.34.0 | 04 Dec 24 19:52 UTC |          |
	|         | -p download-only-833018        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 19:52:30
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 19:52:30.977327   17755 out.go:345] Setting OutFile to fd 1 ...
	I1204 19:52:30.977475   17755 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 19:52:30.977486   17755 out.go:358] Setting ErrFile to fd 2...
	I1204 19:52:30.977493   17755 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 19:52:30.977655   17755 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19985-10581/.minikube/bin
	W1204 19:52:30.977792   17755 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19985-10581/.minikube/config/config.json: open /home/jenkins/minikube-integration/19985-10581/.minikube/config/config.json: no such file or directory
	I1204 19:52:30.978377   17755 out.go:352] Setting JSON to true
	I1204 19:52:30.979262   17755 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2101,"bootTime":1733339850,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 19:52:30.979319   17755 start.go:139] virtualization: kvm guest
	I1204 19:52:30.981628   17755 out.go:97] [download-only-833018] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W1204 19:52:30.981727   17755 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball: no such file or directory
	I1204 19:52:30.981760   17755 notify.go:220] Checking for updates...
	I1204 19:52:30.983095   17755 out.go:169] MINIKUBE_LOCATION=19985
	I1204 19:52:30.984438   17755 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 19:52:30.985703   17755 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 19:52:30.986902   17755 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 19:52:30.988077   17755 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1204 19:52:30.990195   17755 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1204 19:52:30.990448   17755 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 19:52:31.093799   17755 out.go:97] Using the kvm2 driver based on user configuration
	I1204 19:52:31.093825   17755 start.go:297] selected driver: kvm2
	I1204 19:52:31.093831   17755 start.go:901] validating driver "kvm2" against <nil>
	I1204 19:52:31.094155   17755 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 19:52:31.094281   17755 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19985-10581/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1204 19:52:31.108770   17755 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1204 19:52:31.108808   17755 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 19:52:31.109351   17755 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1204 19:52:31.109533   17755 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1204 19:52:31.109561   17755 cni.go:84] Creating CNI manager for ""
	I1204 19:52:31.109633   17755 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 19:52:31.109643   17755 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1204 19:52:31.109698   17755 start.go:340] cluster config:
	{Name:download-only-833018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-833018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 19:52:31.109856   17755 iso.go:125] acquiring lock: {Name:mk5fb0f3f6da76e6cd812291a551e1592ef2c232 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 19:52:31.111545   17755 out.go:97] Downloading VM boot image ...
	I1204 19:52:31.111583   17755 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19985-10581/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1204 19:52:34.615716   17755 out.go:97] Starting "download-only-833018" primary control-plane node in "download-only-833018" cluster
	I1204 19:52:34.615742   17755 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1204 19:52:34.639807   17755 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1204 19:52:34.639847   17755 cache.go:56] Caching tarball of preloaded images
	I1204 19:52:34.639999   17755 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1204 19:52:34.641749   17755 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1204 19:52:34.641771   17755 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1204 19:52:34.673330   17755 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-833018 host does not exist
	  To start a cluster, run: "minikube start -p download-only-833018"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-833018
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/json-events (4.75s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-079944 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-079944 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.745269057s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (4.75s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1204 19:52:45.009640   17743 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
I1204 19:52:45.009679   17743 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19985-10581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-079944
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-079944: exit status 85 (57.189449ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-833018 | jenkins | v1.34.0 | 04 Dec 24 19:52 UTC |                     |
	|         | -p download-only-833018        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 04 Dec 24 19:52 UTC | 04 Dec 24 19:52 UTC |
	| delete  | -p download-only-833018        | download-only-833018 | jenkins | v1.34.0 | 04 Dec 24 19:52 UTC | 04 Dec 24 19:52 UTC |
	| start   | -o=json --download-only        | download-only-079944 | jenkins | v1.34.0 | 04 Dec 24 19:52 UTC |                     |
	|         | -p download-only-079944        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 19:52:40
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 19:52:40.306080   17961 out.go:345] Setting OutFile to fd 1 ...
	I1204 19:52:40.306222   17961 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 19:52:40.306233   17961 out.go:358] Setting ErrFile to fd 2...
	I1204 19:52:40.306238   17961 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 19:52:40.306418   17961 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19985-10581/.minikube/bin
	I1204 19:52:40.307029   17961 out.go:352] Setting JSON to true
	I1204 19:52:40.307911   17961 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2110,"bootTime":1733339850,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 19:52:40.308013   17961 start.go:139] virtualization: kvm guest
	I1204 19:52:40.309974   17961 out.go:97] [download-only-079944] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1204 19:52:40.310140   17961 notify.go:220] Checking for updates...
	I1204 19:52:40.311636   17961 out.go:169] MINIKUBE_LOCATION=19985
	I1204 19:52:40.313009   17961 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 19:52:40.314195   17961 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 19:52:40.315552   17961 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 19:52:40.316771   17961 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-079944 host does not exist
	  To start a cluster, run: "minikube start -p download-only-079944"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-079944
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
I1204 19:52:45.578716   17743 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-214166 --alsologtostderr --binary-mirror http://127.0.0.1:43213 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-214166" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-214166
--- PASS: TestBinaryMirror (0.59s)

                                                
                                    
x
+
TestOffline (54.78s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-847644 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-847644 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (53.626035417s)
helpers_test.go:175: Cleaning up "offline-crio-847644" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-847644
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-847644: (1.152843268s)
--- PASS: TestOffline (54.78s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-153447
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-153447: exit status 85 (57.214674ms)

                                                
                                                
-- stdout --
	* Profile "addons-153447" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-153447"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-153447
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-153447: exit status 85 (57.919149ms)

                                                
                                                
-- stdout --
	* Profile "addons-153447" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-153447"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/Setup (126.06s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-153447 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-153447 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m6.056363982s)
--- PASS: TestAddons/Setup (126.06s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-153447 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-153447 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (9.49s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-153447 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-153447 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d848bd2e-9b52-4694-a820-ad62fd4c3be4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d848bd2e-9b52-4694-a820-ad62fd4c3be4] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003882395s
addons_test.go:633: (dbg) Run:  kubectl --context addons-153447 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-153447 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-153447 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.49s)

                                                
                                    
x
+
TestAddons/parallel/Registry (16.02s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 2.527777ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-z8xlj" [cf078efa-efba-4b9e-a26c-686f93cabca9] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004299401s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-7c5pj" [fed31c9e-468a-4b59-b8f2-1efd30fa0e42] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004352833s
addons_test.go:331: (dbg) Run:  kubectl --context addons-153447 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-153447 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-153447 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.196929138s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-153447 ip
2024/12/04 19:55:47 [DEBUG] GET http://192.168.39.11:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-153447 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.02s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.11s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-hhkjg" [05679cc4-5edf-444e-b0ee-4efea7c53df5] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.008053977s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-153447 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-153447 addons disable inspektor-gadget --alsologtostderr -v=1: (6.103887416s)
--- PASS: TestAddons/parallel/InspektorGadget (11.11s)

                                                
                                    
x
+
TestAddons/parallel/CSI (55.79s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:488: csi-hostpath-driver pods stabilized in 22.729953ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-153447 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-153447 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-153447 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-153447 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-153447 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-153447 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-153447 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-153447 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-153447 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-153447 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-153447 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-153447 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-153447 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-153447 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-153447 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-153447 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-153447 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-153447 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-153447 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-153447 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-153447 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-153447 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [a0d50e87-8738-4466-929a-e63a29382430] Pending
helpers_test.go:344: "task-pv-pod" [a0d50e87-8738-4466-929a-e63a29382430] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [a0d50e87-8738-4466-929a-e63a29382430] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.003833821s
addons_test.go:511: (dbg) Run:  kubectl --context addons-153447 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-153447 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-153447 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-153447 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-153447 delete pod task-pv-pod: (1.044006208s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-153447 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-153447 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-153447 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-153447 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-153447 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [a6848f95-59e1-4ef7-bbd8-57eb3849a49c] Pending
helpers_test.go:344: "task-pv-pod-restore" [a6848f95-59e1-4ef7-bbd8-57eb3849a49c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [a6848f95-59e1-4ef7-bbd8-57eb3849a49c] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004765954s
addons_test.go:553: (dbg) Run:  kubectl --context addons-153447 delete pod task-pv-pod-restore
addons_test.go:553: (dbg) Done: kubectl --context addons-153447 delete pod task-pv-pod-restore: (1.654790676s)
addons_test.go:557: (dbg) Run:  kubectl --context addons-153447 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-153447 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-153447 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-153447 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-153447 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.851366306s)
--- PASS: TestAddons/parallel/CSI (55.79s)
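
The repeated "get pvc ... -o jsonpath={.status.phase}" calls above are a polling loop waiting for each claim to become Bound before the next step. Below is a minimal client-go sketch of that loop, for illustration only; the test drives kubectl rather than client-go, and the function name, kubeconfig path, and timeout are assumptions:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPVCBound polls a PersistentVolumeClaim until it reports phase Bound
// or the timeout expires, mirroring the repeated jsonpath queries in the log.
func waitForPVCBound(kubeconfig, namespace, name string, timeout time.Duration) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pvc, err := client.CoreV1().PersistentVolumeClaims(namespace).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && pvc.Status.Phase == corev1.ClaimBound {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s not Bound within %v", namespace, name, timeout)
}

func main() {
	if err := waitForPVCBound("/path/to/kubeconfig", "default", "hpvc", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}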

                                                
                                    
x
+
TestAddons/parallel/Headlamp (21.07s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-153447 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-153447 --alsologtostderr -v=1: (1.339640237s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-cd8ffd6fc-vdmwv" [ca556518-8665-46a1-bad1-709ab460e974] Pending
helpers_test.go:344: "headlamp-cd8ffd6fc-vdmwv" [ca556518-8665-46a1-bad1-709ab460e974] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-cd8ffd6fc-vdmwv" [ca556518-8665-46a1-bad1-709ab460e974] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.005033649s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-153447 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-153447 addons disable headlamp --alsologtostderr -v=1: (5.720771696s)
--- PASS: TestAddons/parallel/Headlamp (21.07s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.63s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-dc5db94f4-f5bdk" [b8856b5c-bf29-4aa9-88de-59f96ba2ab66] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004536536s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-153447 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.63s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (53.26s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-153447 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-153447 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-153447 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-153447 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-153447 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-153447 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-153447 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-153447 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [3cd24be6-c16e-4e2e-bee8-bf4304818653] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [3cd24be6-c16e-4e2e-bee8-bf4304818653] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [3cd24be6-c16e-4e2e-bee8-bf4304818653] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003930509s
addons_test.go:906: (dbg) Run:  kubectl --context addons-153447 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-153447 ssh "cat /opt/local-path-provisioner/pvc-753cdf45-d6df-4271-9413-533dc1761312_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-153447 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-153447 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-153447 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-153447 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.483073494s)
--- PASS: TestAddons/parallel/LocalPath (53.26s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (7.27s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-jgz4f" [eae62c73-3a4f-42eb-baac-f18cf9160aea] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003728478s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-153447 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-153447 addons disable nvidia-device-plugin --alsologtostderr -v=1: (1.261301109s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (7.27s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.85s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
I1204 19:55:32.532459   17743 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-nltgj" [44771ccd-9ce4-4e7c-9e07-b80679c305ae] Running
I1204 19:55:32.555140   17743 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1204 19:55:32.555165   17743 kapi.go:107] duration metric: took 22.719821ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004948164s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-153447 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-153447 addons disable yakd --alsologtostderr -v=1: (5.848151836s)
--- PASS: TestAddons/parallel/Yakd (11.85s)

                                                
                                    
x
+
TestCertOptions (86.06s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-520332 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-520332 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m24.8306829s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-520332 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-520332 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-520332 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-520332" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-520332
--- PASS: TestCertOptions (86.06s)
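
The ssh step above inspects /var/lib/minikube/certs/apiserver.crt with "openssl x509 -text -noout" to confirm that the extra --apiserver-ips and --apiserver-names made it into the certificate. A small Go equivalent of that inspection, as a sketch (the file path is the one from the log; run it on a host where that file is readable):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("DNS names:", cert.DNSNames)   // should include localhost, www.google.com
	fmt.Println("IP SANs:  ", cert.IPAddresses) // should include 127.0.0.1, 192.168.15.15
	fmt.Println("Not after:", cert.NotAfter)
}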

                                                
                                    
x
+
TestCertExpiration (282.71s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-994058 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-994058 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m7.457582145s)
E1204 20:57:26.275439   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/functional-763517/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-994058 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-994058 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (34.262041505s)
helpers_test.go:175: Cleaning up "cert-expiration-994058" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-994058
--- PASS: TestCertExpiration (282.71s)

                                                
                                    
x
+
TestForceSystemdFlag (86.52s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-044975 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-044975 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m25.515533831s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-044975 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-044975" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-044975
--- PASS: TestForceSystemdFlag (86.52s)

                                                
                                    
x
+
TestForceSystemdEnv (90.86s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-905824 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-905824 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m29.848791925s)
helpers_test.go:175: Cleaning up "force-systemd-env-905824" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-905824
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-905824: (1.009055999s)
--- PASS: TestForceSystemdEnv (90.86s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (4.14s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I1204 21:00:31.825414   17743 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1204 21:00:31.825592   17743 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W1204 21:00:31.859471   17743 install.go:62] docker-machine-driver-kvm2: exit status 1
W1204 21:00:31.859877   17743 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1204 21:00:31.859944   17743 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1125651839/001/docker-machine-driver-kvm2
I1204 21:00:32.066069   17743 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1125651839/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5315020 0x5315020 0x5315020 0x5315020 0x5315020 0x5315020 0x5315020] Decompressors:map[bz2:0xc0005bd530 gz:0xc0005bd538 tar:0xc0005bd4d0 tar.bz2:0xc0005bd4f0 tar.gz:0xc0005bd500 tar.xz:0xc0005bd510 tar.zst:0xc0005bd520 tbz2:0xc0005bd4f0 tgz:0xc0005bd500 txz:0xc0005bd510 tzst:0xc0005bd520 xz:0xc0005bd540 zip:0xc0005bd550 zst:0xc0005bd548] Getters:map[file:0xc001e66720 http:0xc0006cf360 https:0xc0006cf3b0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1204 21:00:32.066144   17743 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1125651839/001/docker-machine-driver-kvm2
I1204 21:00:34.169280   17743 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1204 21:00:34.169414   17743 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1204 21:00:34.204056   17743 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W1204 21:00:34.204089   17743 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W1204 21:00:34.204160   17743 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1204 21:00:34.204190   17743 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1125651839/002/docker-machine-driver-kvm2
I1204 21:00:34.269305   17743 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1125651839/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5315020 0x5315020 0x5315020 0x5315020 0x5315020 0x5315020 0x5315020] Decompressors:map[bz2:0xc0005bd530 gz:0xc0005bd538 tar:0xc0005bd4d0 tar.bz2:0xc0005bd4f0 tar.gz:0xc0005bd500 tar.xz:0xc0005bd510 tar.zst:0xc0005bd520 tbz2:0xc0005bd4f0 tgz:0xc0005bd500 txz:0xc0005bd510 tzst:0xc0005bd520 xz:0xc0005bd540 zip:0xc0005bd550 zst:0xc0005bd548] Getters:map[file:0xc001e676c0 http:0xc0005abc70 https:0xc0005abd60] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1204 21:00:34.269367   17743 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1125651839/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.14s)
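
The driver.go lines above show the pattern being exercised: the arch-specific release asset fails its checksum download with a 404, so the code falls back to the unsuffixed "common" asset. A hedged Go sketch of that fallback, with the URL layout copied from the log; the helper name is illustrative and this is not minikube's actual download code:

package main

import (
	"fmt"
	"net/http"
)

// pickDriverURL tries the arch-specific driver asset first and falls back to
// the common one, returning the first URL that answers HTTP 200.
func pickDriverURL(version, arch string) (string, error) {
	base := "https://github.com/kubernetes/minikube/releases/download/" + version
	candidates := []string{
		fmt.Sprintf("%s/docker-machine-driver-kvm2-%s", base, arch), // arch-specific
		fmt.Sprintf("%s/docker-machine-driver-kvm2", base),          // common fallback
	}
	for _, u := range candidates {
		resp, err := http.Head(u)
		if err != nil {
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return u, nil
		}
	}
	return "", fmt.Errorf("no downloadable driver found for %s/%s", version, arch)
}

func main() {
	u, err := pickDriverURL("v1.3.0", "amd64")
	fmt.Println(u, err)
}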

                                                
                                    
x
+
TestErrorSpam/setup (45.45s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-535489 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-535489 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-535489 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-535489 --driver=kvm2  --container-runtime=crio: (45.452485077s)
--- PASS: TestErrorSpam/setup (45.45s)

                                                
                                    
x
+
TestErrorSpam/start (0.33s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-535489 --log_dir /tmp/nospam-535489 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-535489 --log_dir /tmp/nospam-535489 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-535489 --log_dir /tmp/nospam-535489 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
x
+
TestErrorSpam/status (0.71s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-535489 --log_dir /tmp/nospam-535489 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-535489 --log_dir /tmp/nospam-535489 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-535489 --log_dir /tmp/nospam-535489 status
--- PASS: TestErrorSpam/status (0.71s)

                                                
                                    
x
+
TestErrorSpam/pause (1.51s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-535489 --log_dir /tmp/nospam-535489 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-535489 --log_dir /tmp/nospam-535489 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-535489 --log_dir /tmp/nospam-535489 pause
--- PASS: TestErrorSpam/pause (1.51s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.67s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-535489 --log_dir /tmp/nospam-535489 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-535489 --log_dir /tmp/nospam-535489 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-535489 --log_dir /tmp/nospam-535489 unpause
--- PASS: TestErrorSpam/unpause (1.67s)

                                                
                                    
x
+
TestErrorSpam/stop (4.58s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-535489 --log_dir /tmp/nospam-535489 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-535489 --log_dir /tmp/nospam-535489 stop: (1.608871121s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-535489 --log_dir /tmp/nospam-535489 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-535489 --log_dir /tmp/nospam-535489 stop: (1.236189737s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-535489 --log_dir /tmp/nospam-535489 stop
E1204 20:04:52.902572   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.crt: no such file or directory" logger="UnhandledError"
E1204 20:04:52.908959   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.crt: no such file or directory" logger="UnhandledError"
E1204 20:04:52.920290   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.crt: no such file or directory" logger="UnhandledError"
E1204 20:04:52.941608   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.crt: no such file or directory" logger="UnhandledError"
E1204 20:04:52.982921   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.crt: no such file or directory" logger="UnhandledError"
E1204 20:04:53.064291   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.crt: no such file or directory" logger="UnhandledError"
E1204 20:04:53.225785   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-535489 --log_dir /tmp/nospam-535489 stop: (1.731071278s)
--- PASS: TestErrorSpam/stop (4.58s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19985-10581/.minikube/files/etc/test/nested/copy/17743/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (49.36s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-763517 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1204 20:04:54.189502   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.crt: no such file or directory" logger="UnhandledError"
E1204 20:04:55.471166   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.crt: no such file or directory" logger="UnhandledError"
E1204 20:04:58.034089   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.crt: no such file or directory" logger="UnhandledError"
E1204 20:05:03.155805   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.crt: no such file or directory" logger="UnhandledError"
E1204 20:05:13.397843   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.crt: no such file or directory" logger="UnhandledError"
E1204 20:05:33.879980   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-763517 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (49.363237128s)
--- PASS: TestFunctional/serial/StartWithProxy (49.36s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (53.54s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1204 20:05:43.258748   17743 config.go:182] Loaded profile config "functional-763517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-763517 --alsologtostderr -v=8
E1204 20:06:14.841976   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-763517 --alsologtostderr -v=8: (53.54370339s)
functional_test.go:663: soft start took 53.544387569s for "functional-763517" cluster.
I1204 20:06:36.802869   17743 config.go:182] Loaded profile config "functional-763517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (53.54s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-763517 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.17s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-763517 cache add registry.k8s.io/pause:3.3: (1.13314769s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-763517 cache add registry.k8s.io/pause:latest: (1.046905227s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.17s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.9s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-763517 /tmp/TestFunctionalserialCacheCmdcacheadd_local164140004/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 cache add minikube-local-cache-test:functional-763517
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-763517 cache add minikube-local-cache-test:functional-763517: (1.590549279s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 cache delete minikube-local-cache-test:functional-763517
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-763517
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.90s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-763517 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (209.875023ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 kubectl -- --context functional-763517 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-763517 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (34.65s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-763517 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-763517 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.645481987s)
functional_test.go:761: restart took 34.645593633s for "functional-763517" cluster.
I1204 20:07:18.915051   17743 config.go:182] Loaded profile config "functional-763517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (34.65s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-763517 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.38s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-763517 logs: (1.37518264s)
--- PASS: TestFunctional/serial/LogsCmd (1.38s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.32s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 logs --file /tmp/TestFunctionalserialLogsFileCmd3380762901/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-763517 logs --file /tmp/TestFunctionalserialLogsFileCmd3380762901/001/logs.txt: (1.319856596s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.32s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.6s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-763517 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-763517
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-763517: exit status 115 (267.575956ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.191:32344 |
	|-----------|-------------|-------------|-----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-763517 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-763517 delete -f testdata/invalidsvc.yaml: (1.140928125s)
--- PASS: TestFunctional/serial/InvalidService (4.60s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-763517 config get cpus: exit status 14 (55.02501ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-763517 config get cpus: exit status 14 (54.550969ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (15.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-763517 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-763517 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 25805: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.41s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-763517 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-763517 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (133.934332ms)

-- stdout --
	* [functional-763517] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19985
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19985-10581/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19985-10581/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1204 20:07:29.073498   25688 out.go:345] Setting OutFile to fd 1 ...
	I1204 20:07:29.073588   25688 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 20:07:29.073596   25688 out.go:358] Setting ErrFile to fd 2...
	I1204 20:07:29.073600   25688 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 20:07:29.073752   25688 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19985-10581/.minikube/bin
	I1204 20:07:29.074214   25688 out.go:352] Setting JSON to false
	I1204 20:07:29.075053   25688 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2999,"bootTime":1733339850,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 20:07:29.075139   25688 start.go:139] virtualization: kvm guest
	I1204 20:07:29.076905   25688 out.go:177] * [functional-763517] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1204 20:07:29.078062   25688 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 20:07:29.078064   25688 notify.go:220] Checking for updates...
	I1204 20:07:29.080157   25688 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 20:07:29.081256   25688 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 20:07:29.082426   25688 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 20:07:29.083522   25688 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1204 20:07:29.084665   25688 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 20:07:29.086185   25688 config.go:182] Loaded profile config "functional-763517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:07:29.086563   25688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:07:29.086612   25688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:07:29.103175   25688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36859
	I1204 20:07:29.103736   25688 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:07:29.104517   25688 main.go:141] libmachine: Using API Version  1
	I1204 20:07:29.104555   25688 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:07:29.104857   25688 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:07:29.105045   25688 main.go:141] libmachine: (functional-763517) Calling .DriverName
	I1204 20:07:29.105259   25688 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 20:07:29.105558   25688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:07:29.105599   25688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:07:29.120390   25688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33827
	I1204 20:07:29.120928   25688 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:07:29.121442   25688 main.go:141] libmachine: Using API Version  1
	I1204 20:07:29.121461   25688 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:07:29.121817   25688 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:07:29.121986   25688 main.go:141] libmachine: (functional-763517) Calling .DriverName
	I1204 20:07:29.157466   25688 out.go:177] * Using the kvm2 driver based on existing profile
	I1204 20:07:29.158752   25688 start.go:297] selected driver: kvm2
	I1204 20:07:29.158765   25688 start.go:901] validating driver "kvm2" against &{Name:functional-763517 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-763517 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.191 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 20:07:29.158879   25688 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 20:07:29.160829   25688 out.go:201] 
	W1204 20:07:29.161978   25688 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1204 20:07:29.163062   25688 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-763517 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-763517 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-763517 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (146.282174ms)

-- stdout --
	* [functional-763517] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19985
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19985-10581/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19985-10581/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1204 20:07:28.940558   25634 out.go:345] Setting OutFile to fd 1 ...
	I1204 20:07:28.940670   25634 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 20:07:28.940676   25634 out.go:358] Setting ErrFile to fd 2...
	I1204 20:07:28.940682   25634 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 20:07:28.941079   25634 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19985-10581/.minikube/bin
	I1204 20:07:28.941711   25634 out.go:352] Setting JSON to false
	I1204 20:07:28.942907   25634 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2999,"bootTime":1733339850,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 20:07:28.943023   25634 start.go:139] virtualization: kvm guest
	I1204 20:07:28.945443   25634 out.go:177] * [functional-763517] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I1204 20:07:28.946856   25634 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 20:07:28.946863   25634 notify.go:220] Checking for updates...
	I1204 20:07:28.948157   25634 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 20:07:28.949829   25634 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 20:07:28.950902   25634 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 20:07:28.951971   25634 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1204 20:07:28.953025   25634 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 20:07:28.954596   25634 config.go:182] Loaded profile config "functional-763517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:07:28.955185   25634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:07:28.955259   25634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:07:28.977028   25634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36555
	I1204 20:07:28.977569   25634 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:07:28.978071   25634 main.go:141] libmachine: Using API Version  1
	I1204 20:07:28.978087   25634 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:07:28.978453   25634 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:07:28.978666   25634 main.go:141] libmachine: (functional-763517) Calling .DriverName
	I1204 20:07:28.978905   25634 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 20:07:28.979305   25634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:07:28.979346   25634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:07:28.994095   25634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37215
	I1204 20:07:28.994484   25634 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:07:28.994996   25634 main.go:141] libmachine: Using API Version  1
	I1204 20:07:28.995016   25634 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:07:28.995279   25634 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:07:28.995465   25634 main.go:141] libmachine: (functional-763517) Calling .DriverName
	I1204 20:07:29.024191   25634 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1204 20:07:29.025246   25634 start.go:297] selected driver: kvm2
	I1204 20:07:29.025268   25634 start.go:901] validating driver "kvm2" against &{Name:functional-763517 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-763517 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.191 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 20:07:29.025384   25634 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 20:07:29.027220   25634 out.go:201] 
	W1204 20:07:29.028372   25634 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1204 20:07:29.029446   25634 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.11s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (8.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-763517 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-763517 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-2wh8q" [807a75ac-76cd-42f2-932a-5b3d62395b3b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-2wh8q" [807a75ac-76cd-42f2-932a-5b3d62395b3b] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003737981s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.50.191:30651
functional_test.go:1675: http://192.168.50.191:30651: success! body:


Hostname: hello-node-connect-67bdd5bbb4-2wh8q

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.191:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.50.191:30651
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.55s)

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (42.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [1da544ec-23e5-4b6c-88f6-e92173194464] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005547832s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-763517 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-763517 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-763517 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-763517 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [27661159-759a-495d-9bc6-0039ab55d9a7] Pending
helpers_test.go:344: "sp-pod" [27661159-759a-495d-9bc6-0039ab55d9a7] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [27661159-759a-495d-9bc6-0039ab55d9a7] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.004087042s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-763517 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-763517 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-763517 delete -f testdata/storage-provisioner/pod.yaml: (2.205358442s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-763517 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4b2d5dc1-4d59-4a09-9c0d-7438c608e86d] Pending
helpers_test.go:344: "sp-pod" [4b2d5dc1-4d59-4a09-9c0d-7438c608e86d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4b2d5dc1-4d59-4a09-9c0d-7438c608e86d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.004565664s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-763517 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (42.05s)

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 ssh -n functional-763517 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 cp functional-763517:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd281729736/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 ssh -n functional-763517 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 ssh -n functional-763517 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.35s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (24.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-763517 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-k2grp" [7a501c1b-b15c-4668-985d-8f3821f0e68f] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-k2grp" [7a501c1b-b15c-4668-985d-8f3821f0e68f] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.003520525s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-763517 exec mysql-6cdb49bbb-k2grp -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-763517 exec mysql-6cdb49bbb-k2grp -- mysql -ppassword -e "show databases;": exit status 1 (122.312289ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1204 20:08:07.924292   17743 retry.go:31] will retry after 959.915532ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-763517 exec mysql-6cdb49bbb-k2grp -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-763517 exec mysql-6cdb49bbb-k2grp -- mysql -ppassword -e "show databases;": exit status 1 (149.793127ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1204 20:08:09.034800   17743 retry.go:31] will retry after 1.517921519s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-763517 exec mysql-6cdb49bbb-k2grp -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.08s)

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/17743/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 ssh "sudo cat /etc/test/nested/copy/17743/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/17743.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 ssh "sudo cat /etc/ssl/certs/17743.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/17743.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 ssh "sudo cat /usr/share/ca-certificates/17743.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/177432.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 ssh "sudo cat /etc/ssl/certs/177432.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/177432.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 ssh "sudo cat /usr/share/ca-certificates/177432.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.58s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-763517 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-763517 ssh "sudo systemctl is-active docker": exit status 1 (209.430867ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-763517 ssh "sudo systemctl is-active containerd": exit status 1 (187.930357ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/License (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (11.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-763517 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-763517 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-jngbs" [0b50d226-cd2d-415a-a7e4-ac2b61715079] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-jngbs" [0b50d226-cd2d-415a-a7e4-ac2b61715079] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.006242896s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.19s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "326.756799ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "47.066008ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "370.191908ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "67.372131ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-763517 /tmp/TestFunctionalparallelMountCmdany-port2754684627/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1733342847628321861" to /tmp/TestFunctionalparallelMountCmdany-port2754684627/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1733342847628321861" to /tmp/TestFunctionalparallelMountCmdany-port2754684627/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1733342847628321861" to /tmp/TestFunctionalparallelMountCmdany-port2754684627/001/test-1733342847628321861
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-763517 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (264.386143ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1204 20:07:27.893053   17743 retry.go:31] will retry after 532.41507ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  4 20:07 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  4 20:07 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  4 20:07 test-1733342847628321861
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 ssh cat /mount-9p/test-1733342847628321861
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-763517 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [4f4792de-908d-45a6-98a5-7774a4cc6d4e] Pending
helpers_test.go:344: "busybox-mount" [4f4792de-908d-45a6-98a5-7774a4cc6d4e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [4f4792de-908d-45a6-98a5-7774a4cc6d4e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [4f4792de-908d-45a6-98a5-7774a4cc6d4e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003486233s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-763517 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-763517 /tmp/TestFunctionalparallelMountCmdany-port2754684627/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.76s)
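For reference, the mount flow exercised above can be reproduced by hand against a running functional-763517 profile; this is a minimal sketch (the /tmp/mount-demo host path is illustrative, not from this run):
# start a 9p mount of a host directory into the guest (backgrounded here; the test drives it as a daemon)
out/minikube-linux-amd64 mount -p functional-763517 /tmp/mount-demo:/mount-9p --alsologtostderr -v=1 &
# confirm the 9p mount is visible inside the guest and list what it exposes
out/minikube-linux-amd64 -p functional-763517 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-763517 ssh -- ls -la /mount-9p
# tear the mount down when finished
out/minikube-linux-amd64 -p functional-763517 ssh "sudo umount -f /mount-9p"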

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-763517 /tmp/TestFunctionalparallelMountCmdspecific-port2536049322/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-763517 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (221.773979ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1204 20:07:36.610084   17743 retry.go:31] will retry after 567.060438ms: exit status 1
E1204 20:07:36.764279   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-763517 /tmp/TestFunctionalparallelMountCmdspecific-port2536049322/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-763517 ssh "sudo umount -f /mount-9p": exit status 1 (239.945013ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-763517 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-763517 /tmp/TestFunctionalparallelMountCmdspecific-port2536049322/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.91s)
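The specific-port variant only differs in pinning the 9p server to a fixed port; a sketch under the same assumptions as above:
out/minikube-linux-amd64 mount -p functional-763517 /tmp/mount-demo:/mount-9p --alsologtostderr -v=1 --port 46464 &
out/minikube-linux-amd64 -p functional-763517 ssh "findmnt -T /mount-9p | grep 9p"
# "umount: /mount-9p: not mounted." (exit status 32), as captured above, simply means the mount was already gone
out/minikube-linux-amd64 -p functional-763517 ssh "sudo umount -f /mount-9p"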

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 service list
functional_test.go:1459: (dbg) Done: out/minikube-linux-amd64 -p functional-763517 service list: (1.007911961s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.01s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-763517 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2713645129/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-763517 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2713645129/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-763517 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2713645129/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-763517 ssh "findmnt -T" /mount1: exit status 1 (298.357779ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1204 20:07:38.602053   17743 retry.go:31] will retry after 516.623577ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-763517 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-763517 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2713645129/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-763517 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2713645129/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-763517 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2713645129/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.48s)
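The cleanup being verified here can also be triggered manually; --kill=true tears down any mount helper processes started for the profile:
out/minikube-linux-amd64 mount -p functional-763517 --kill=true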

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 service list -o json
functional_test.go:1494: Took "519.276318ms" to run "out/minikube-linux-amd64 -p functional-763517 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.50.191:32283
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.50.191:32283
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.31s)
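Taken together, the ServiceCmd subtests above are thin wrappers over the following queries (hello-node is the service deployed earlier in this suite; the endpoint shown is specific to this run):
out/minikube-linux-amd64 -p functional-763517 service list
out/minikube-linux-amd64 -p functional-763517 service list -o json
out/minikube-linux-amd64 -p functional-763517 service --namespace=default --https --url hello-node
out/minikube-linux-amd64 -p functional-763517 service hello-node --url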

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.55s)
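Both version checks are single CLI calls; --short prints just the minikube version, while -o=json --components additionally reports the versions of bundled components for this run's configuration:
out/minikube-linux-amd64 -p functional-763517 version --short
out/minikube-linux-amd64 -p functional-763517 version -o=json --components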

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-763517 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-763517
localhost/kicbase/echo-server:functional-763517
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20241007-36f62932
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-763517 image ls --format short --alsologtostderr:
I1204 20:07:51.232472   27494 out.go:345] Setting OutFile to fd 1 ...
I1204 20:07:51.233077   27494 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 20:07:51.233133   27494 out.go:358] Setting ErrFile to fd 2...
I1204 20:07:51.233148   27494 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 20:07:51.233656   27494 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19985-10581/.minikube/bin
I1204 20:07:51.235120   27494 config.go:182] Loaded profile config "functional-763517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1204 20:07:51.235284   27494 config.go:182] Loaded profile config "functional-763517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1204 20:07:51.235881   27494 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1204 20:07:51.235954   27494 main.go:141] libmachine: Launching plugin server for driver kvm2
I1204 20:07:51.251212   27494 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37351
I1204 20:07:51.251702   27494 main.go:141] libmachine: () Calling .GetVersion
I1204 20:07:51.252289   27494 main.go:141] libmachine: Using API Version  1
I1204 20:07:51.252311   27494 main.go:141] libmachine: () Calling .SetConfigRaw
I1204 20:07:51.252704   27494 main.go:141] libmachine: () Calling .GetMachineName
I1204 20:07:51.253011   27494 main.go:141] libmachine: (functional-763517) Calling .GetState
I1204 20:07:51.255829   27494 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1204 20:07:51.255880   27494 main.go:141] libmachine: Launching plugin server for driver kvm2
I1204 20:07:51.272354   27494 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35865
I1204 20:07:51.272785   27494 main.go:141] libmachine: () Calling .GetVersion
I1204 20:07:51.273368   27494 main.go:141] libmachine: Using API Version  1
I1204 20:07:51.273389   27494 main.go:141] libmachine: () Calling .SetConfigRaw
I1204 20:07:51.273699   27494 main.go:141] libmachine: () Calling .GetMachineName
I1204 20:07:51.274006   27494 main.go:141] libmachine: (functional-763517) Calling .DriverName
I1204 20:07:51.274211   27494 ssh_runner.go:195] Run: systemctl --version
I1204 20:07:51.274246   27494 main.go:141] libmachine: (functional-763517) Calling .GetSSHHostname
I1204 20:07:51.277628   27494 main.go:141] libmachine: (functional-763517) DBG | domain functional-763517 has defined MAC address 52:54:00:cd:1d:5f in network mk-functional-763517
I1204 20:07:51.278081   27494 main.go:141] libmachine: (functional-763517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:1d:5f", ip: ""} in network mk-functional-763517: {Iface:virbr1 ExpiryTime:2024-12-04 21:05:08 +0000 UTC Type:0 Mac:52:54:00:cd:1d:5f Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:functional-763517 Clientid:01:52:54:00:cd:1d:5f}
I1204 20:07:51.278104   27494 main.go:141] libmachine: (functional-763517) DBG | domain functional-763517 has defined IP address 192.168.50.191 and MAC address 52:54:00:cd:1d:5f in network mk-functional-763517
I1204 20:07:51.278362   27494 main.go:141] libmachine: (functional-763517) Calling .GetSSHPort
I1204 20:07:51.278506   27494 main.go:141] libmachine: (functional-763517) Calling .GetSSHKeyPath
I1204 20:07:51.278654   27494 main.go:141] libmachine: (functional-763517) Calling .GetSSHUsername
I1204 20:07:51.278795   27494 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/functional-763517/id_rsa Username:docker}
I1204 20:07:51.392216   27494 ssh_runner.go:195] Run: sudo crictl images --output json
I1204 20:07:51.646241   27494 main.go:141] libmachine: Making call to close driver server
I1204 20:07:51.646259   27494 main.go:141] libmachine: (functional-763517) Calling .Close
I1204 20:07:51.646583   27494 main.go:141] libmachine: (functional-763517) DBG | Closing plugin on server side
I1204 20:07:51.646640   27494 main.go:141] libmachine: Successfully made call to close driver server
I1204 20:07:51.646658   27494 main.go:141] libmachine: Making call to close connection to plugin binary
I1204 20:07:51.646677   27494 main.go:141] libmachine: Making call to close driver server
I1204 20:07:51.646693   27494 main.go:141] libmachine: (functional-763517) Calling .Close
I1204 20:07:51.646933   27494 main.go:141] libmachine: (functional-763517) DBG | Closing plugin on server side
I1204 20:07:51.646956   27494 main.go:141] libmachine: Successfully made call to close driver server
I1204 20:07:51.646980   27494 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-763517 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-proxy              | v1.31.2            | 505d571f5fd56 | 92.8MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-scheduler          | v1.31.2            | 847c7bc1a5418 | 68.5MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/library/nginx                 | latest             | 66f8bdd3810c9 | 196MB  |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| docker.io/kindest/kindnetd              | v20241007-36f62932 | 3a5bc24055c9e | 95MB   |
| localhost/kicbase/echo-server           | functional-763517  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| localhost/minikube-local-cache-test     | functional-763517  | 883307cd565d4 | 3.33kB |
| registry.k8s.io/kube-apiserver          | v1.31.2            | 9499c9960544e | 95.3MB |
| registry.k8s.io/kube-controller-manager | v1.31.2            | 0486b6c53a1b5 | 89.5MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-763517 image ls --format table --alsologtostderr:
I1204 20:07:51.715740   27543 out.go:345] Setting OutFile to fd 1 ...
I1204 20:07:51.715877   27543 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 20:07:51.715893   27543 out.go:358] Setting ErrFile to fd 2...
I1204 20:07:51.715900   27543 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 20:07:51.716206   27543 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19985-10581/.minikube/bin
I1204 20:07:51.717040   27543 config.go:182] Loaded profile config "functional-763517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1204 20:07:51.717199   27543 config.go:182] Loaded profile config "functional-763517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1204 20:07:51.717735   27543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1204 20:07:51.717811   27543 main.go:141] libmachine: Launching plugin server for driver kvm2
I1204 20:07:51.735328   27543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41765
I1204 20:07:51.735841   27543 main.go:141] libmachine: () Calling .GetVersion
I1204 20:07:51.736520   27543 main.go:141] libmachine: Using API Version  1
I1204 20:07:51.736551   27543 main.go:141] libmachine: () Calling .SetConfigRaw
I1204 20:07:51.736940   27543 main.go:141] libmachine: () Calling .GetMachineName
I1204 20:07:51.737135   27543 main.go:141] libmachine: (functional-763517) Calling .GetState
I1204 20:07:51.739157   27543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1204 20:07:51.739204   27543 main.go:141] libmachine: Launching plugin server for driver kvm2
I1204 20:07:51.754542   27543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43737
I1204 20:07:51.754974   27543 main.go:141] libmachine: () Calling .GetVersion
I1204 20:07:51.755492   27543 main.go:141] libmachine: Using API Version  1
I1204 20:07:51.755529   27543 main.go:141] libmachine: () Calling .SetConfigRaw
I1204 20:07:51.755936   27543 main.go:141] libmachine: () Calling .GetMachineName
I1204 20:07:51.756161   27543 main.go:141] libmachine: (functional-763517) Calling .DriverName
I1204 20:07:51.756376   27543 ssh_runner.go:195] Run: systemctl --version
I1204 20:07:51.756403   27543 main.go:141] libmachine: (functional-763517) Calling .GetSSHHostname
I1204 20:07:51.759449   27543 main.go:141] libmachine: (functional-763517) DBG | domain functional-763517 has defined MAC address 52:54:00:cd:1d:5f in network mk-functional-763517
I1204 20:07:51.759841   27543 main.go:141] libmachine: (functional-763517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:1d:5f", ip: ""} in network mk-functional-763517: {Iface:virbr1 ExpiryTime:2024-12-04 21:05:08 +0000 UTC Type:0 Mac:52:54:00:cd:1d:5f Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:functional-763517 Clientid:01:52:54:00:cd:1d:5f}
I1204 20:07:51.759911   27543 main.go:141] libmachine: (functional-763517) DBG | domain functional-763517 has defined IP address 192.168.50.191 and MAC address 52:54:00:cd:1d:5f in network mk-functional-763517
I1204 20:07:51.760010   27543 main.go:141] libmachine: (functional-763517) Calling .GetSSHPort
I1204 20:07:51.760233   27543 main.go:141] libmachine: (functional-763517) Calling .GetSSHKeyPath
I1204 20:07:51.760404   27543 main.go:141] libmachine: (functional-763517) Calling .GetSSHUsername
I1204 20:07:51.760553   27543 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/functional-763517/id_rsa Username:docker}
I1204 20:07:51.883991   27543 ssh_runner.go:195] Run: sudo crictl images --output json
I1204 20:07:52.218404   27543 main.go:141] libmachine: Making call to close driver server
I1204 20:07:52.218425   27543 main.go:141] libmachine: (functional-763517) Calling .Close
I1204 20:07:52.218674   27543 main.go:141] libmachine: Successfully made call to close driver server
I1204 20:07:52.218690   27543 main.go:141] libmachine: Making call to close connection to plugin binary
I1204 20:07:52.218705   27543 main.go:141] libmachine: Making call to close driver server
I1204 20:07:52.218713   27543 main.go:141] libmachine: (functional-763517) Calling .Close
I1204 20:07:52.218724   27543 main.go:141] libmachine: (functional-763517) DBG | Closing plugin on server side
I1204 20:07:52.218946   27543 main.go:141] libmachine: Successfully made call to close driver server
I1204 20:07:52.218967   27543 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.57s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-763517 image ls --format json --alsologtostderr:
[{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52","repoDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387","docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"],"repoTags":["docker.io/kindest/kindnetd:v20241007-36f62932"],"size":"94965812"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a
302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0","registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2
"],"size":"95274464"},{"id":"0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c","registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"89474374"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"883307cd565d49d538d60f374371a9f9be78a93987035fbb1e912b67349aa249","repoDigests":["localhost/minikube-local-cache-test@sha256:39acce4bb0bfd893b61b2a3788617b36b63c736d38745dd1e90a666f7eacdf2d"],"repoTags":["localhost/minikube-local
-cache-test:functional-763517"],"size":"3330"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282","registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"68457798"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metri
cs-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"66f8bdd3810c96dc5c28aec39583af731b34a2cd99471530f53c8794ed5b423e","repoDigests":["docker.io/library/nginx@sha256:3d696e8357051647b844d8c7cf4a0aa71e84379999a4f6af9b8ca1f7919ade42","docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be"],"repoTags":["docker.io/library/nginx:latest"],"size":"195919252"},{"id":"505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38","repoDigests":["registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b","registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"92783513"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags"
:["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-763517"],"size":"4943877"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pa
use@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-763517 image ls --format json --alsologtostderr:
I1204 20:07:52.275139   27617 out.go:345] Setting OutFile to fd 1 ...
I1204 20:07:52.275245   27617 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 20:07:52.275255   27617 out.go:358] Setting ErrFile to fd 2...
I1204 20:07:52.275260   27617 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 20:07:52.275495   27617 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19985-10581/.minikube/bin
I1204 20:07:52.276026   27617 config.go:182] Loaded profile config "functional-763517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1204 20:07:52.276120   27617 config.go:182] Loaded profile config "functional-763517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1204 20:07:52.276465   27617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1204 20:07:52.276504   27617 main.go:141] libmachine: Launching plugin server for driver kvm2
I1204 20:07:52.291039   27617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45289
I1204 20:07:52.291605   27617 main.go:141] libmachine: () Calling .GetVersion
I1204 20:07:52.292185   27617 main.go:141] libmachine: Using API Version  1
I1204 20:07:52.292211   27617 main.go:141] libmachine: () Calling .SetConfigRaw
I1204 20:07:52.292600   27617 main.go:141] libmachine: () Calling .GetMachineName
I1204 20:07:52.292820   27617 main.go:141] libmachine: (functional-763517) Calling .GetState
I1204 20:07:52.295009   27617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1204 20:07:52.295061   27617 main.go:141] libmachine: Launching plugin server for driver kvm2
I1204 20:07:52.312529   27617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43645
I1204 20:07:52.312998   27617 main.go:141] libmachine: () Calling .GetVersion
I1204 20:07:52.313566   27617 main.go:141] libmachine: Using API Version  1
I1204 20:07:52.313595   27617 main.go:141] libmachine: () Calling .SetConfigRaw
I1204 20:07:52.313924   27617 main.go:141] libmachine: () Calling .GetMachineName
I1204 20:07:52.314121   27617 main.go:141] libmachine: (functional-763517) Calling .DriverName
I1204 20:07:52.314300   27617 ssh_runner.go:195] Run: systemctl --version
I1204 20:07:52.314328   27617 main.go:141] libmachine: (functional-763517) Calling .GetSSHHostname
I1204 20:07:52.317076   27617 main.go:141] libmachine: (functional-763517) DBG | domain functional-763517 has defined MAC address 52:54:00:cd:1d:5f in network mk-functional-763517
I1204 20:07:52.317414   27617 main.go:141] libmachine: (functional-763517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:1d:5f", ip: ""} in network mk-functional-763517: {Iface:virbr1 ExpiryTime:2024-12-04 21:05:08 +0000 UTC Type:0 Mac:52:54:00:cd:1d:5f Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:functional-763517 Clientid:01:52:54:00:cd:1d:5f}
I1204 20:07:52.317455   27617 main.go:141] libmachine: (functional-763517) DBG | domain functional-763517 has defined IP address 192.168.50.191 and MAC address 52:54:00:cd:1d:5f in network mk-functional-763517
I1204 20:07:52.317674   27617 main.go:141] libmachine: (functional-763517) Calling .GetSSHPort
I1204 20:07:52.317857   27617 main.go:141] libmachine: (functional-763517) Calling .GetSSHKeyPath
I1204 20:07:52.317998   27617 main.go:141] libmachine: (functional-763517) Calling .GetSSHUsername
I1204 20:07:52.318159   27617 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/functional-763517/id_rsa Username:docker}
I1204 20:07:52.450026   27617 ssh_runner.go:195] Run: sudo crictl images --output json
I1204 20:07:52.647172   27617 main.go:141] libmachine: Making call to close driver server
I1204 20:07:52.647190   27617 main.go:141] libmachine: (functional-763517) Calling .Close
I1204 20:07:52.647497   27617 main.go:141] libmachine: Successfully made call to close driver server
I1204 20:07:52.647515   27617 main.go:141] libmachine: Making call to close connection to plugin binary
I1204 20:07:52.647528   27617 main.go:141] libmachine: Making call to close driver server
I1204 20:07:52.647538   27617 main.go:141] libmachine: (functional-763517) Calling .Close
I1204 20:07:52.647774   27617 main.go:141] libmachine: Successfully made call to close driver server
I1204 20:07:52.647791   27617 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-763517 image ls --format yaml --alsologtostderr:
- id: 505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38
repoDigests:
- registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b
- registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "92783513"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 883307cd565d49d538d60f374371a9f9be78a93987035fbb1e912b67349aa249
repoDigests:
- localhost/minikube-local-cache-test@sha256:39acce4bb0bfd893b61b2a3788617b36b63c736d38745dd1e90a666f7eacdf2d
repoTags:
- localhost/minikube-local-cache-test:functional-763517
size: "3330"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 66f8bdd3810c96dc5c28aec39583af731b34a2cd99471530f53c8794ed5b423e
repoDigests:
- docker.io/library/nginx@sha256:3d696e8357051647b844d8c7cf4a0aa71e84379999a4f6af9b8ca1f7919ade42
- docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be
repoTags:
- docker.io/library/nginx:latest
size: "195919252"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
- docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "94965812"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c
- registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "89474374"
- id: 847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282
- registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "68457798"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-763517
size: "4943877"
- id: 9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0
- registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "95274464"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-763517 image ls --format yaml --alsologtostderr:
I1204 20:07:51.233507   27495 out.go:345] Setting OutFile to fd 1 ...
I1204 20:07:51.233805   27495 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 20:07:51.233820   27495 out.go:358] Setting ErrFile to fd 2...
I1204 20:07:51.233826   27495 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 20:07:51.234080   27495 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19985-10581/.minikube/bin
I1204 20:07:51.234899   27495 config.go:182] Loaded profile config "functional-763517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1204 20:07:51.235053   27495 config.go:182] Loaded profile config "functional-763517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1204 20:07:51.235625   27495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1204 20:07:51.235678   27495 main.go:141] libmachine: Launching plugin server for driver kvm2
I1204 20:07:51.251496   27495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37837
I1204 20:07:51.252112   27495 main.go:141] libmachine: () Calling .GetVersion
I1204 20:07:51.252873   27495 main.go:141] libmachine: Using API Version  1
I1204 20:07:51.252895   27495 main.go:141] libmachine: () Calling .SetConfigRaw
I1204 20:07:51.253296   27495 main.go:141] libmachine: () Calling .GetMachineName
I1204 20:07:51.253506   27495 main.go:141] libmachine: (functional-763517) Calling .GetState
I1204 20:07:51.255867   27495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1204 20:07:51.255912   27495 main.go:141] libmachine: Launching plugin server for driver kvm2
I1204 20:07:51.270687   27495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38585
I1204 20:07:51.271211   27495 main.go:141] libmachine: () Calling .GetVersion
I1204 20:07:51.271911   27495 main.go:141] libmachine: Using API Version  1
I1204 20:07:51.271973   27495 main.go:141] libmachine: () Calling .SetConfigRaw
I1204 20:07:51.272439   27495 main.go:141] libmachine: () Calling .GetMachineName
I1204 20:07:51.272656   27495 main.go:141] libmachine: (functional-763517) Calling .DriverName
I1204 20:07:51.272864   27495 ssh_runner.go:195] Run: systemctl --version
I1204 20:07:51.272920   27495 main.go:141] libmachine: (functional-763517) Calling .GetSSHHostname
I1204 20:07:51.275944   27495 main.go:141] libmachine: (functional-763517) DBG | domain functional-763517 has defined MAC address 52:54:00:cd:1d:5f in network mk-functional-763517
I1204 20:07:51.276431   27495 main.go:141] libmachine: (functional-763517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:1d:5f", ip: ""} in network mk-functional-763517: {Iface:virbr1 ExpiryTime:2024-12-04 21:05:08 +0000 UTC Type:0 Mac:52:54:00:cd:1d:5f Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:functional-763517 Clientid:01:52:54:00:cd:1d:5f}
I1204 20:07:51.276507   27495 main.go:141] libmachine: (functional-763517) DBG | domain functional-763517 has defined IP address 192.168.50.191 and MAC address 52:54:00:cd:1d:5f in network mk-functional-763517
I1204 20:07:51.276549   27495 main.go:141] libmachine: (functional-763517) Calling .GetSSHPort
I1204 20:07:51.276724   27495 main.go:141] libmachine: (functional-763517) Calling .GetSSHKeyPath
I1204 20:07:51.276881   27495 main.go:141] libmachine: (functional-763517) Calling .GetSSHUsername
I1204 20:07:51.277051   27495 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/functional-763517/id_rsa Username:docker}
I1204 20:07:51.400317   27495 ssh_runner.go:195] Run: sudo crictl images --output json
I1204 20:07:51.648742   27495 main.go:141] libmachine: Making call to close driver server
I1204 20:07:51.648760   27495 main.go:141] libmachine: (functional-763517) Calling .Close
I1204 20:07:51.648998   27495 main.go:141] libmachine: Successfully made call to close driver server
I1204 20:07:51.649014   27495 main.go:141] libmachine: Making call to close connection to plugin binary
I1204 20:07:51.649023   27495 main.go:141] libmachine: Making call to close driver server
I1204 20:07:51.649031   27495 main.go:141] libmachine: (functional-763517) Calling .Close
I1204 20:07:51.649260   27495 main.go:141] libmachine: Successfully made call to close driver server
I1204 20:07:51.649290   27495 main.go:141] libmachine: Making call to close connection to plugin binary
I1204 20:07:51.649396   27495 main.go:141] libmachine: (functional-763517) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.47s)
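The four ImageList variants differ only in the --format flag; a sketch of the same listing run by hand (--alsologtostderr merely adds the libmachine/driver logging seen in the Stderr blocks above):
out/minikube-linux-amd64 -p functional-763517 image ls --format short
out/minikube-linux-amd64 -p functional-763517 image ls --format table
out/minikube-linux-amd64 -p functional-763517 image ls --format json
out/minikube-linux-amd64 -p functional-763517 image ls --format yaml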

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (9.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-763517 ssh pgrep buildkitd: exit status 1 (233.037182ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 image build -t localhost/my-image:functional-763517 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-763517 image build -t localhost/my-image:functional-763517 testdata/build --alsologtostderr: (9.296237606s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-763517 image build -t localhost/my-image:functional-763517 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> fa1b5245dd6
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-763517
--> 1e9a1783ef6
Successfully tagged localhost/my-image:functional-763517
1e9a1783ef64dfcfc40d9437026c34bb610fe8d3e379d597c8f9f038551fd0c2
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-763517 image build -t localhost/my-image:functional-763517 testdata/build --alsologtostderr:
I1204 20:07:51.941042   27593 out.go:345] Setting OutFile to fd 1 ...
I1204 20:07:51.941399   27593 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 20:07:51.941413   27593 out.go:358] Setting ErrFile to fd 2...
I1204 20:07:51.941419   27593 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 20:07:51.941726   27593 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19985-10581/.minikube/bin
I1204 20:07:51.942559   27593 config.go:182] Loaded profile config "functional-763517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1204 20:07:51.943067   27593 config.go:182] Loaded profile config "functional-763517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1204 20:07:51.943439   27593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1204 20:07:51.943481   27593 main.go:141] libmachine: Launching plugin server for driver kvm2
I1204 20:07:51.958172   27593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38541
I1204 20:07:51.958652   27593 main.go:141] libmachine: () Calling .GetVersion
I1204 20:07:51.959241   27593 main.go:141] libmachine: Using API Version  1
I1204 20:07:51.959270   27593 main.go:141] libmachine: () Calling .SetConfigRaw
I1204 20:07:51.959582   27593 main.go:141] libmachine: () Calling .GetMachineName
I1204 20:07:51.959775   27593 main.go:141] libmachine: (functional-763517) Calling .GetState
I1204 20:07:51.961397   27593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1204 20:07:51.961439   27593 main.go:141] libmachine: Launching plugin server for driver kvm2
I1204 20:07:51.976225   27593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42535
I1204 20:07:51.976668   27593 main.go:141] libmachine: () Calling .GetVersion
I1204 20:07:51.977172   27593 main.go:141] libmachine: Using API Version  1
I1204 20:07:51.977194   27593 main.go:141] libmachine: () Calling .SetConfigRaw
I1204 20:07:51.977481   27593 main.go:141] libmachine: () Calling .GetMachineName
I1204 20:07:51.977666   27593 main.go:141] libmachine: (functional-763517) Calling .DriverName
I1204 20:07:51.977855   27593 ssh_runner.go:195] Run: systemctl --version
I1204 20:07:51.977878   27593 main.go:141] libmachine: (functional-763517) Calling .GetSSHHostname
I1204 20:07:51.980405   27593 main.go:141] libmachine: (functional-763517) DBG | domain functional-763517 has defined MAC address 52:54:00:cd:1d:5f in network mk-functional-763517
I1204 20:07:51.980789   27593 main.go:141] libmachine: (functional-763517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:1d:5f", ip: ""} in network mk-functional-763517: {Iface:virbr1 ExpiryTime:2024-12-04 21:05:08 +0000 UTC Type:0 Mac:52:54:00:cd:1d:5f Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:functional-763517 Clientid:01:52:54:00:cd:1d:5f}
I1204 20:07:51.980813   27593 main.go:141] libmachine: (functional-763517) DBG | domain functional-763517 has defined IP address 192.168.50.191 and MAC address 52:54:00:cd:1d:5f in network mk-functional-763517
I1204 20:07:51.980966   27593 main.go:141] libmachine: (functional-763517) Calling .GetSSHPort
I1204 20:07:51.981112   27593 main.go:141] libmachine: (functional-763517) Calling .GetSSHKeyPath
I1204 20:07:51.981223   27593 main.go:141] libmachine: (functional-763517) Calling .GetSSHUsername
I1204 20:07:51.981353   27593 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/functional-763517/id_rsa Username:docker}
I1204 20:07:52.097256   27593 build_images.go:161] Building image from path: /tmp/build.2517213026.tar
I1204 20:07:52.097343   27593 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1204 20:07:52.129287   27593 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2517213026.tar
I1204 20:07:52.137252   27593 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2517213026.tar: stat -c "%s %y" /var/lib/minikube/build/build.2517213026.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2517213026.tar': No such file or directory
I1204 20:07:52.137285   27593 ssh_runner.go:362] scp /tmp/build.2517213026.tar --> /var/lib/minikube/build/build.2517213026.tar (3072 bytes)
I1204 20:07:52.165941   27593 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2517213026
I1204 20:07:52.175970   27593 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2517213026 -xf /var/lib/minikube/build/build.2517213026.tar
I1204 20:07:52.190755   27593 crio.go:315] Building image: /var/lib/minikube/build/build.2517213026
I1204 20:07:52.190824   27593 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-763517 /var/lib/minikube/build/build.2517213026 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1204 20:08:01.120322   27593 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-763517 /var/lib/minikube/build/build.2517213026 --cgroup-manager=cgroupfs: (8.929473546s)
I1204 20:08:01.120394   27593 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2517213026
I1204 20:08:01.140531   27593 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2517213026.tar
I1204 20:08:01.153046   27593 build_images.go:217] Built localhost/my-image:functional-763517 from /tmp/build.2517213026.tar
I1204 20:08:01.153078   27593 build_images.go:133] succeeded building to: functional-763517
I1204 20:08:01.153083   27593 build_images.go:134] failed building to: 
I1204 20:08:01.153108   27593 main.go:141] libmachine: Making call to close driver server
I1204 20:08:01.153120   27593 main.go:141] libmachine: (functional-763517) Calling .Close
I1204 20:08:01.153473   27593 main.go:141] libmachine: Successfully made call to close driver server
I1204 20:08:01.153490   27593 main.go:141] libmachine: Making call to close connection to plugin binary
I1204 20:08:01.153498   27593 main.go:141] libmachine: Making call to close driver server
I1204 20:08:01.153505   27593 main.go:141] libmachine: (functional-763517) Calling .Close
I1204 20:08:01.153731   27593 main.go:141] libmachine: Successfully made call to close driver server
I1204 20:08:01.153752   27593 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (9.78s)
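For readers reproducing the ImageBuild flow above by hand, a rough equivalent is sketched below; the `image build` invocation and the ./testdata/build context directory are assumptions (only the profile name and the localhost/my-image tag are taken from this log):

  # build an image inside the functional-763517 VM (with crio the build is delegated to podman, as logged above)
  out/minikube-linux-amd64 -p functional-763517 image build -t localhost/my-image:functional-763517 ./testdata/build
  # confirm the resulting image is visible to the container runtime
  out/minikube-linux-amd64 -p functional-763517 image ls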

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.703272942s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-763517
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.72s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 image load --daemon kicbase/echo-server:functional-763517 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-763517 image load --daemon kicbase/echo-server:functional-763517 --alsologtostderr: (1.091396008s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 image load --daemon kicbase/echo-server:functional-763517 --alsologtostderr
2024/12/04 20:07:44 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.31s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)
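All three update-context variants above rewrite the kubeconfig entry for the profile so it points at the cluster's current address; a minimal sketch of checking the result by hand (the jsonpath query is an assumption, the profile/cluster name comes from this log):

  out/minikube-linux-amd64 -p functional-763517 update-context --alsologtostderr -v=2
  # the cluster entry should now reference the VM's current IP (192.168.50.191 in this run)
  kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-763517")].cluster.server}'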

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-763517
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 image load --daemon kicbase/echo-server:functional-763517 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-amd64 -p functional-763517 image load --daemon kicbase/echo-server:functional-763517 --alsologtostderr: (2.426427122s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 image save kicbase/echo-server:functional-763517 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 image rm kicbase/echo-server:functional-763517 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.87s)
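Taken together, ImageSaveToFile, ImageRemove and ImageLoadFromFile above form a save, then remove, then load round trip; condensed into one sequence (flags are the ones shown in the log, the /tmp path is illustrative):

  out/minikube-linux-amd64 -p functional-763517 image save kicbase/echo-server:functional-763517 /tmp/echo-server-save.tar --alsologtostderr
  out/minikube-linux-amd64 -p functional-763517 image rm kicbase/echo-server:functional-763517 --alsologtostderr
  out/minikube-linux-amd64 -p functional-763517 image load /tmp/echo-server-save.tar --alsologtostderr
  out/minikube-linux-amd64 -p functional-763517 image ls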

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-763517
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-763517 image save --daemon kicbase/echo-server:functional-763517 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-763517
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.74s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-763517
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-763517
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-763517
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (196.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-739930 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1204 20:09:52.903110   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.crt: no such file or directory" logger="UnhandledError"
E1204 20:10:20.606441   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-739930 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m15.662233434s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (196.31s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-739930 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-739930 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-739930 -- rollout status deployment/busybox: (4.679439272s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-739930 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-739930 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-739930 -- exec busybox-7dff88458-9pz7p -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-739930 -- exec busybox-7dff88458-gg7dr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-739930 -- exec busybox-7dff88458-kx56q -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-739930 -- exec busybox-7dff88458-9pz7p -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-739930 -- exec busybox-7dff88458-gg7dr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-739930 -- exec busybox-7dff88458-kx56q -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-739930 -- exec busybox-7dff88458-9pz7p -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-739930 -- exec busybox-7dff88458-gg7dr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-739930 -- exec busybox-7dff88458-kx56q -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.82s)
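The rollout above is the HA DNS smoke test: three busybox replicas spread across the multi-control-plane cluster, each then resolving kubernetes.io and the in-cluster service names. A quick way to see which node each replica landed on (the context name comes from this log; the grep pattern just matches the generated pod names):

  kubectl --context ha-739930 get pods -o wide | grep busybox-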

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-739930 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-739930 -- exec busybox-7dff88458-9pz7p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-739930 -- exec busybox-7dff88458-9pz7p -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-739930 -- exec busybox-7dff88458-gg7dr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-739930 -- exec busybox-7dff88458-gg7dr -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-739930 -- exec busybox-7dff88458-kx56q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-739930 -- exec busybox-7dff88458-kx56q -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.16s)
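For reference, the pipeline in the exec calls above isolates the address that host.minikube.internal resolves to inside each pod: older busybox nslookup builds print an answer line such as "Address 1: <ip>", awk 'NR==5' keeps that line, and cut -d' ' -f3 extracts the IP, which the test then pings (192.168.39.1, the libvirt gateway, in this run). The NR==5 / field-3 positions are tied to that particular nslookup output layout, so treat this as a description of intent rather than a portable recipe:

  nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3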

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (55.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-739930 -v=7 --alsologtostderr
E1204 20:12:26.275523   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/functional-763517/client.crt: no such file or directory" logger="UnhandledError"
E1204 20:12:26.281797   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/functional-763517/client.crt: no such file or directory" logger="UnhandledError"
E1204 20:12:26.293218   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/functional-763517/client.crt: no such file or directory" logger="UnhandledError"
E1204 20:12:26.314574   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/functional-763517/client.crt: no such file or directory" logger="UnhandledError"
E1204 20:12:26.356011   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/functional-763517/client.crt: no such file or directory" logger="UnhandledError"
E1204 20:12:26.437500   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/functional-763517/client.crt: no such file or directory" logger="UnhandledError"
E1204 20:12:26.599676   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/functional-763517/client.crt: no such file or directory" logger="UnhandledError"
E1204 20:12:26.921377   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/functional-763517/client.crt: no such file or directory" logger="UnhandledError"
E1204 20:12:27.562705   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/functional-763517/client.crt: no such file or directory" logger="UnhandledError"
E1204 20:12:28.844395   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/functional-763517/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-739930 -v=7 --alsologtostderr: (54.916427822s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 status -v=7 --alsologtostderr
E1204 20:12:31.406567   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/functional-763517/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (55.73s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-739930 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 cp testdata/cp-test.txt ha-739930:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 ssh -n ha-739930 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 cp ha-739930:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1344431772/001/cp-test_ha-739930.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 ssh -n ha-739930 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 cp ha-739930:/home/docker/cp-test.txt ha-739930-m02:/home/docker/cp-test_ha-739930_ha-739930-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 ssh -n ha-739930 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 ssh -n ha-739930-m02 "sudo cat /home/docker/cp-test_ha-739930_ha-739930-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 cp ha-739930:/home/docker/cp-test.txt ha-739930-m03:/home/docker/cp-test_ha-739930_ha-739930-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 ssh -n ha-739930 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 ssh -n ha-739930-m03 "sudo cat /home/docker/cp-test_ha-739930_ha-739930-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 cp ha-739930:/home/docker/cp-test.txt ha-739930-m04:/home/docker/cp-test_ha-739930_ha-739930-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 ssh -n ha-739930 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 ssh -n ha-739930-m04 "sudo cat /home/docker/cp-test_ha-739930_ha-739930-m04.txt"
E1204 20:12:36.528350   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/functional-763517/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 cp testdata/cp-test.txt ha-739930-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 ssh -n ha-739930-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 cp ha-739930-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1344431772/001/cp-test_ha-739930-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 ssh -n ha-739930-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 cp ha-739930-m02:/home/docker/cp-test.txt ha-739930:/home/docker/cp-test_ha-739930-m02_ha-739930.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 ssh -n ha-739930-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 ssh -n ha-739930 "sudo cat /home/docker/cp-test_ha-739930-m02_ha-739930.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 cp ha-739930-m02:/home/docker/cp-test.txt ha-739930-m03:/home/docker/cp-test_ha-739930-m02_ha-739930-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 ssh -n ha-739930-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 ssh -n ha-739930-m03 "sudo cat /home/docker/cp-test_ha-739930-m02_ha-739930-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 cp ha-739930-m02:/home/docker/cp-test.txt ha-739930-m04:/home/docker/cp-test_ha-739930-m02_ha-739930-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 ssh -n ha-739930-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 ssh -n ha-739930-m04 "sudo cat /home/docker/cp-test_ha-739930-m02_ha-739930-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 cp testdata/cp-test.txt ha-739930-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 ssh -n ha-739930-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 cp ha-739930-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1344431772/001/cp-test_ha-739930-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 ssh -n ha-739930-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 cp ha-739930-m03:/home/docker/cp-test.txt ha-739930:/home/docker/cp-test_ha-739930-m03_ha-739930.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 ssh -n ha-739930-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 ssh -n ha-739930 "sudo cat /home/docker/cp-test_ha-739930-m03_ha-739930.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 cp ha-739930-m03:/home/docker/cp-test.txt ha-739930-m02:/home/docker/cp-test_ha-739930-m03_ha-739930-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 ssh -n ha-739930-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 ssh -n ha-739930-m02 "sudo cat /home/docker/cp-test_ha-739930-m03_ha-739930-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 cp ha-739930-m03:/home/docker/cp-test.txt ha-739930-m04:/home/docker/cp-test_ha-739930-m03_ha-739930-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 ssh -n ha-739930-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 ssh -n ha-739930-m04 "sudo cat /home/docker/cp-test_ha-739930-m03_ha-739930-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 cp testdata/cp-test.txt ha-739930-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 ssh -n ha-739930-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 cp ha-739930-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1344431772/001/cp-test_ha-739930-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 ssh -n ha-739930-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 cp ha-739930-m04:/home/docker/cp-test.txt ha-739930:/home/docker/cp-test_ha-739930-m04_ha-739930.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 ssh -n ha-739930-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 ssh -n ha-739930 "sudo cat /home/docker/cp-test_ha-739930-m04_ha-739930.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 cp ha-739930-m04:/home/docker/cp-test.txt ha-739930-m02:/home/docker/cp-test_ha-739930-m04_ha-739930-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 ssh -n ha-739930-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 ssh -n ha-739930-m02 "sudo cat /home/docker/cp-test_ha-739930-m04_ha-739930-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 cp ha-739930-m04:/home/docker/cp-test.txt ha-739930-m03:/home/docker/cp-test_ha-739930-m04_ha-739930-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 ssh -n ha-739930-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 ssh -n ha-739930-m03 "sudo cat /home/docker/cp-test_ha-739930-m04_ha-739930-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.57s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (16.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 node delete m03 -v=7 --alsologtostderr
E1204 20:22:26.275578   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/functional-763517/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-739930 node delete m03 -v=7 --alsologtostderr: (15.783146636s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.50s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.61s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (326.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-739930 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1204 20:27:26.278415   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/functional-763517/client.crt: no such file or directory" logger="UnhandledError"
E1204 20:28:49.339433   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/functional-763517/client.crt: no such file or directory" logger="UnhandledError"
E1204 20:29:52.902435   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-739930 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m25.356422328s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (326.09s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.70s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (84.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-739930 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-739930 --control-plane -v=7 --alsologtostderr: (1m23.842690485s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-739930 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (84.68s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.83s)

                                                
                                    
TestJSONOutput/start/Command (56.1s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-237474 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E1204 20:32:26.275161   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/functional-763517/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-237474 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (56.098207234s)
--- PASS: TestJSONOutput/start/Command (56.10s)
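The Audit and parallel step subtests that follow validate the JSON event stream emitted by the command above. A hedged sketch of eyeballing the same property by hand (the jq filter is an assumption; the event type string matches the cloudevents captured later in this report under TestErrorJSONOutput):

  out/minikube-linux-amd64 start -p json-output-237474 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 --container-runtime=crio \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.currentstep'
  # DistinctCurrentSteps / IncreasingCurrentSteps expect these numbers to be unique and increasing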

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-237474 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.65s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.61s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-237474 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.66s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-237474 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-237474 --output=json --user=testUser: (6.661959131s)
--- PASS: TestJSONOutput/stop/Command (6.66s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-621272 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-621272 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (61.409839ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ed9cf5ec-02fe-4fff-8e42-19fbd9e55938","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-621272] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fbc99f04-5403-4ba2-8c19-83c2b3aa2ba9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19985"}}
	{"specversion":"1.0","id":"bc6ebd0b-6515-495b-a9e8-cbeda1af3976","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5d4d40b0-9b94-4e25-abc3-73c11f29d15f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19985-10581/kubeconfig"}}
	{"specversion":"1.0","id":"9360cba3-19dc-43d7-93db-2118d18eeb35","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19985-10581/.minikube"}}
	{"specversion":"1.0","id":"5584de61-b2ed-4c2d-aba9-842381658609","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"bf67ffe8-092b-4a5a-93f0-60a86beb989d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"39d2b5d5-8f10-4230-9459-c0272e441340","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-621272" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-621272
--- PASS: TestErrorJSONOutput (0.19s)
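A hedged sketch of extracting the error event from the stream captured above (the jq filter is an assumption; the type and field names come directly from the -- stdout -- block):

  out/minikube-linux-amd64 start -p json-output-error-621272 --memory=2200 --output=json --wait=true --driver=fail \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): \(.data.message) (exit \(.data.exitcode))"'
  # expected: DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64 (exit 56)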

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (91.75s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-455308 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-455308 --driver=kvm2  --container-runtime=crio: (45.633483828s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-466269 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-466269 --driver=kvm2  --container-runtime=crio: (43.34799364s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-455308
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-466269
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-466269" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-466269
helpers_test.go:175: Cleaning up "first-455308" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-455308
--- PASS: TestMinikubeProfile (91.75s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (27.35s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-193584 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1204 20:34:52.904100   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-193584 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.346776584s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.35s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-193584 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-193584 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (24.16s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-210510 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-210510 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.157551367s)
--- PASS: TestMountStart/serial/StartWithMountSecond (24.16s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-210510 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-210510 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-193584 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-210510 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-210510 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-210510
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-210510: (1.279547749s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (21.13s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-210510
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-210510: (20.128051112s)
--- PASS: TestMountStart/serial/RestartStopped (21.13s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-210510 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-210510 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (116.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-980367 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1204 20:37:26.275268   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/functional-763517/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-980367 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m55.74133113s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (116.12s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-980367 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-980367 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-980367 -- rollout status deployment/busybox: (5.138595113s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-980367 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-980367 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-980367 -- exec busybox-7dff88458-hspqv -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-980367 -- exec busybox-7dff88458-w852h -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-980367 -- exec busybox-7dff88458-hspqv -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-980367 -- exec busybox-7dff88458-w852h -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-980367 -- exec busybox-7dff88458-hspqv -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-980367 -- exec busybox-7dff88458-w852h -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.58s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-980367 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-980367 -- exec busybox-7dff88458-hspqv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-980367 -- exec busybox-7dff88458-hspqv -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-980367 -- exec busybox-7dff88458-w852h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-980367 -- exec busybox-7dff88458-w852h -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.74s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (47.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-980367 -v 3 --alsologtostderr
E1204 20:37:55.970265   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-980367 -v 3 --alsologtostderr: (46.71345759s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (47.27s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-980367 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.55s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (6.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 cp testdata/cp-test.txt multinode-980367:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 ssh -n multinode-980367 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 cp multinode-980367:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile171462700/001/cp-test_multinode-980367.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 ssh -n multinode-980367 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 cp multinode-980367:/home/docker/cp-test.txt multinode-980367-m02:/home/docker/cp-test_multinode-980367_multinode-980367-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 ssh -n multinode-980367 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 ssh -n multinode-980367-m02 "sudo cat /home/docker/cp-test_multinode-980367_multinode-980367-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 cp multinode-980367:/home/docker/cp-test.txt multinode-980367-m03:/home/docker/cp-test_multinode-980367_multinode-980367-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 ssh -n multinode-980367 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 ssh -n multinode-980367-m03 "sudo cat /home/docker/cp-test_multinode-980367_multinode-980367-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 cp testdata/cp-test.txt multinode-980367-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 ssh -n multinode-980367-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 cp multinode-980367-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile171462700/001/cp-test_multinode-980367-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 ssh -n multinode-980367-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 cp multinode-980367-m02:/home/docker/cp-test.txt multinode-980367:/home/docker/cp-test_multinode-980367-m02_multinode-980367.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 ssh -n multinode-980367-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 ssh -n multinode-980367 "sudo cat /home/docker/cp-test_multinode-980367-m02_multinode-980367.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 cp multinode-980367-m02:/home/docker/cp-test.txt multinode-980367-m03:/home/docker/cp-test_multinode-980367-m02_multinode-980367-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 ssh -n multinode-980367-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 ssh -n multinode-980367-m03 "sudo cat /home/docker/cp-test_multinode-980367-m02_multinode-980367-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 cp testdata/cp-test.txt multinode-980367-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 ssh -n multinode-980367-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 cp multinode-980367-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile171462700/001/cp-test_multinode-980367-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 ssh -n multinode-980367-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 cp multinode-980367-m03:/home/docker/cp-test.txt multinode-980367:/home/docker/cp-test_multinode-980367-m03_multinode-980367.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 ssh -n multinode-980367-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 ssh -n multinode-980367 "sudo cat /home/docker/cp-test_multinode-980367-m03_multinode-980367.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 cp multinode-980367-m03:/home/docker/cp-test.txt multinode-980367-m02:/home/docker/cp-test_multinode-980367-m03_multinode-980367-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 ssh -n multinode-980367-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 ssh -n multinode-980367-m02 "sudo cat /home/docker/cp-test_multinode-980367-m03_multinode-980367-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.98s)
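Each hop above is the same copy-and-verify pattern; a condensed sketch of one pass, with paths and the profile name taken from this run:

	# Copy a local file onto the primary node, then read it back over SSH to confirm the transfer.
	minikube -p multinode-980367 cp testdata/cp-test.txt multinode-980367:/home/docker/cp-test.txt
	minikube -p multinode-980367 ssh -n multinode-980367 "sudo cat /home/docker/cp-test.txt"
	# Node-to-node copies use the same cp subcommand with node-qualified source and destination paths.
	minikube -p multinode-980367 cp multinode-980367:/home/docker/cp-test.txt multinode-980367-m02:/home/docker/cp-test_multinode-980367_multinode-980367-m02.txt
	minikube -p multinode-980367 ssh -n multinode-980367-m02 "sudo cat /home/docker/cp-test_multinode-980367_multinode-980367-m02.txt"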

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-980367 node stop m03: (1.393307022s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-980367 status: exit status 7 (405.220395ms)

                                                
                                                
-- stdout --
	multinode-980367
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-980367-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-980367-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-980367 status --alsologtostderr: exit status 7 (400.58758ms)

                                                
                                                
-- stdout --
	multinode-980367
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-980367-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-980367-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 20:38:49.924574   45211 out.go:345] Setting OutFile to fd 1 ...
	I1204 20:38:49.924814   45211 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 20:38:49.924824   45211 out.go:358] Setting ErrFile to fd 2...
	I1204 20:38:49.924828   45211 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 20:38:49.924977   45211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19985-10581/.minikube/bin
	I1204 20:38:49.925114   45211 out.go:352] Setting JSON to false
	I1204 20:38:49.925137   45211 mustload.go:65] Loading cluster: multinode-980367
	I1204 20:38:49.925558   45211 notify.go:220] Checking for updates...
	I1204 20:38:49.926768   45211 config.go:182] Loaded profile config "multinode-980367": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 20:38:49.926792   45211 status.go:174] checking status of multinode-980367 ...
	I1204 20:38:49.927185   45211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:38:49.927235   45211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:38:49.942306   45211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38961
	I1204 20:38:49.942692   45211 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:38:49.943175   45211 main.go:141] libmachine: Using API Version  1
	I1204 20:38:49.943193   45211 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:38:49.943646   45211 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:38:49.943847   45211 main.go:141] libmachine: (multinode-980367) Calling .GetState
	I1204 20:38:49.945210   45211 status.go:371] multinode-980367 host status = "Running" (err=<nil>)
	I1204 20:38:49.945223   45211 host.go:66] Checking if "multinode-980367" exists ...
	I1204 20:38:49.945490   45211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:38:49.945519   45211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:38:49.959588   45211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43149
	I1204 20:38:49.960014   45211 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:38:49.960444   45211 main.go:141] libmachine: Using API Version  1
	I1204 20:38:49.960462   45211 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:38:49.960737   45211 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:38:49.960893   45211 main.go:141] libmachine: (multinode-980367) Calling .GetIP
	I1204 20:38:49.963569   45211 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:38:49.963971   45211 main.go:141] libmachine: (multinode-980367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:9b:dc", ip: ""} in network mk-multinode-980367: {Iface:virbr1 ExpiryTime:2024-12-04 21:36:04 +0000 UTC Type:0 Mac:52:54:00:b6:9b:dc Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-980367 Clientid:01:52:54:00:b6:9b:dc}
	I1204 20:38:49.964005   45211 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined IP address 192.168.39.127 and MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:38:49.964131   45211 host.go:66] Checking if "multinode-980367" exists ...
	I1204 20:38:49.964385   45211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:38:49.964419   45211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:38:49.977927   45211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35645
	I1204 20:38:49.978369   45211 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:38:49.978777   45211 main.go:141] libmachine: Using API Version  1
	I1204 20:38:49.978798   45211 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:38:49.979130   45211 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:38:49.979318   45211 main.go:141] libmachine: (multinode-980367) Calling .DriverName
	I1204 20:38:49.979540   45211 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1204 20:38:49.979564   45211 main.go:141] libmachine: (multinode-980367) Calling .GetSSHHostname
	I1204 20:38:49.982210   45211 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:38:49.982697   45211 main.go:141] libmachine: (multinode-980367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:9b:dc", ip: ""} in network mk-multinode-980367: {Iface:virbr1 ExpiryTime:2024-12-04 21:36:04 +0000 UTC Type:0 Mac:52:54:00:b6:9b:dc Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-980367 Clientid:01:52:54:00:b6:9b:dc}
	I1204 20:38:49.982723   45211 main.go:141] libmachine: (multinode-980367) DBG | domain multinode-980367 has defined IP address 192.168.39.127 and MAC address 52:54:00:b6:9b:dc in network mk-multinode-980367
	I1204 20:38:49.982877   45211 main.go:141] libmachine: (multinode-980367) Calling .GetSSHPort
	I1204 20:38:49.983045   45211 main.go:141] libmachine: (multinode-980367) Calling .GetSSHKeyPath
	I1204 20:38:49.983207   45211 main.go:141] libmachine: (multinode-980367) Calling .GetSSHUsername
	I1204 20:38:49.983516   45211 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/multinode-980367/id_rsa Username:docker}
	I1204 20:38:50.058505   45211 ssh_runner.go:195] Run: systemctl --version
	I1204 20:38:50.064595   45211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 20:38:50.079296   45211 kubeconfig.go:125] found "multinode-980367" server: "https://192.168.39.127:8443"
	I1204 20:38:50.079333   45211 api_server.go:166] Checking apiserver status ...
	I1204 20:38:50.079391   45211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 20:38:50.093082   45211 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1060/cgroup
	W1204 20:38:50.102964   45211 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1060/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1204 20:38:50.103051   45211 ssh_runner.go:195] Run: ls
	I1204 20:38:50.107248   45211 api_server.go:253] Checking apiserver healthz at https://192.168.39.127:8443/healthz ...
	I1204 20:38:50.111596   45211 api_server.go:279] https://192.168.39.127:8443/healthz returned 200:
	ok
	I1204 20:38:50.111620   45211 status.go:463] multinode-980367 apiserver status = Running (err=<nil>)
	I1204 20:38:50.111632   45211 status.go:176] multinode-980367 status: &{Name:multinode-980367 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1204 20:38:50.111662   45211 status.go:174] checking status of multinode-980367-m02 ...
	I1204 20:38:50.112053   45211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:38:50.112096   45211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:38:50.127570   45211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34255
	I1204 20:38:50.128021   45211 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:38:50.128531   45211 main.go:141] libmachine: Using API Version  1
	I1204 20:38:50.128552   45211 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:38:50.128857   45211 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:38:50.129011   45211 main.go:141] libmachine: (multinode-980367-m02) Calling .GetState
	I1204 20:38:50.130607   45211 status.go:371] multinode-980367-m02 host status = "Running" (err=<nil>)
	I1204 20:38:50.130643   45211 host.go:66] Checking if "multinode-980367-m02" exists ...
	I1204 20:38:50.130907   45211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:38:50.130943   45211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:38:50.146370   45211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46759
	I1204 20:38:50.146737   45211 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:38:50.147103   45211 main.go:141] libmachine: Using API Version  1
	I1204 20:38:50.147126   45211 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:38:50.147467   45211 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:38:50.147714   45211 main.go:141] libmachine: (multinode-980367-m02) Calling .GetIP
	I1204 20:38:50.150505   45211 main.go:141] libmachine: (multinode-980367-m02) DBG | domain multinode-980367-m02 has defined MAC address 52:54:00:9c:62:0a in network mk-multinode-980367
	I1204 20:38:50.150919   45211 main.go:141] libmachine: (multinode-980367-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:62:0a", ip: ""} in network mk-multinode-980367: {Iface:virbr1 ExpiryTime:2024-12-04 21:37:08 +0000 UTC Type:0 Mac:52:54:00:9c:62:0a Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:multinode-980367-m02 Clientid:01:52:54:00:9c:62:0a}
	I1204 20:38:50.150957   45211 main.go:141] libmachine: (multinode-980367-m02) DBG | domain multinode-980367-m02 has defined IP address 192.168.39.76 and MAC address 52:54:00:9c:62:0a in network mk-multinode-980367
	I1204 20:38:50.151124   45211 host.go:66] Checking if "multinode-980367-m02" exists ...
	I1204 20:38:50.151477   45211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:38:50.151515   45211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:38:50.166601   45211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40789
	I1204 20:38:50.166963   45211 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:38:50.167418   45211 main.go:141] libmachine: Using API Version  1
	I1204 20:38:50.167442   45211 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:38:50.167736   45211 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:38:50.167910   45211 main.go:141] libmachine: (multinode-980367-m02) Calling .DriverName
	I1204 20:38:50.168061   45211 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1204 20:38:50.168094   45211 main.go:141] libmachine: (multinode-980367-m02) Calling .GetSSHHostname
	I1204 20:38:50.170768   45211 main.go:141] libmachine: (multinode-980367-m02) DBG | domain multinode-980367-m02 has defined MAC address 52:54:00:9c:62:0a in network mk-multinode-980367
	I1204 20:38:50.171162   45211 main.go:141] libmachine: (multinode-980367-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:62:0a", ip: ""} in network mk-multinode-980367: {Iface:virbr1 ExpiryTime:2024-12-04 21:37:08 +0000 UTC Type:0 Mac:52:54:00:9c:62:0a Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:multinode-980367-m02 Clientid:01:52:54:00:9c:62:0a}
	I1204 20:38:50.171184   45211 main.go:141] libmachine: (multinode-980367-m02) DBG | domain multinode-980367-m02 has defined IP address 192.168.39.76 and MAC address 52:54:00:9c:62:0a in network mk-multinode-980367
	I1204 20:38:50.171354   45211 main.go:141] libmachine: (multinode-980367-m02) Calling .GetSSHPort
	I1204 20:38:50.171548   45211 main.go:141] libmachine: (multinode-980367-m02) Calling .GetSSHKeyPath
	I1204 20:38:50.171700   45211 main.go:141] libmachine: (multinode-980367-m02) Calling .GetSSHUsername
	I1204 20:38:50.171817   45211 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19985-10581/.minikube/machines/multinode-980367-m02/id_rsa Username:docker}
	I1204 20:38:50.250333   45211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 20:38:50.263979   45211 status.go:176] multinode-980367-m02 status: &{Name:multinode-980367-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1204 20:38:50.264017   45211 status.go:174] checking status of multinode-980367-m03 ...
	I1204 20:38:50.264327   45211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 20:38:50.264362   45211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 20:38:50.279057   45211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41025
	I1204 20:38:50.279557   45211 main.go:141] libmachine: () Calling .GetVersion
	I1204 20:38:50.280030   45211 main.go:141] libmachine: Using API Version  1
	I1204 20:38:50.280052   45211 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 20:38:50.280403   45211 main.go:141] libmachine: () Calling .GetMachineName
	I1204 20:38:50.280580   45211 main.go:141] libmachine: (multinode-980367-m03) Calling .GetState
	I1204 20:38:50.282175   45211 status.go:371] multinode-980367-m03 host status = "Stopped" (err=<nil>)
	I1204 20:38:50.282188   45211 status.go:384] host is not running, skipping remaining checks
	I1204 20:38:50.282193   45211 status.go:176] multinode-980367-m03 status: &{Name:multinode-980367-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.20s)
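The exit status 7 above is the expected result once a node reports Stopped; a short sketch of the stop-and-verify sequence (profile and node name from this run):

	# Stop only the third node; the rest of the cluster keeps running.
	minikube -p multinode-980367 node stop m03
	# status exits with code 7 while any node is Stopped, so tolerate the non-zero exit when scripting.
	minikube -p multinode-980367 status --alsologtostderr || true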

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (38.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-980367 node start m03 -v=7 --alsologtostderr: (37.564849883s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.16s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-980367 node delete m03: (1.657006076s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.18s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (180.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-980367 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1204 20:49:52.903266   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-980367 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m0.245547774s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-980367 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (180.73s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (44.51s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-980367
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-980367-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-980367-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (59.464857ms)

                                                
                                                
-- stdout --
	* [multinode-980367-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19985
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19985-10581/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19985-10581/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-980367-m02' is duplicated with machine name 'multinode-980367-m02' in profile 'multinode-980367'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-980367-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-980367-m03 --driver=kvm2  --container-runtime=crio: (43.404157547s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-980367
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-980367: exit status 80 (203.856633ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-980367 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-980367-m03 already exists in multinode-980367-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-980367-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.51s)
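Both non-zero exits above are the intended negative cases; a sketch of the two checks (profile names from this run):

	# Exit 14 (MK_USAGE): the requested profile name collides with an existing machine name in multinode-980367.
	minikube start -p multinode-980367-m02 --driver=kvm2 --container-runtime=crio
	# Exit 80 (GUEST_NODE_ADD): the node name the add would use already exists as a standalone profile.
	minikube node add -p multinode-980367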

                                                
                                    
x
+
TestScheduledStopUnix (112.27s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-859795 --memory=2048 --driver=kvm2  --container-runtime=crio
E1204 20:54:35.971808   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-859795 --memory=2048 --driver=kvm2  --container-runtime=crio: (40.691434613s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-859795 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-859795 -n scheduled-stop-859795
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-859795 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1204 20:54:46.089051   17743 retry.go:31] will retry after 117.961µs: open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/scheduled-stop-859795/pid: no such file or directory
I1204 20:54:46.090204   17743 retry.go:31] will retry after 160.559µs: open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/scheduled-stop-859795/pid: no such file or directory
I1204 20:54:46.091301   17743 retry.go:31] will retry after 245.176µs: open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/scheduled-stop-859795/pid: no such file or directory
I1204 20:54:46.092406   17743 retry.go:31] will retry after 247.447µs: open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/scheduled-stop-859795/pid: no such file or directory
I1204 20:54:46.093562   17743 retry.go:31] will retry after 700.918µs: open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/scheduled-stop-859795/pid: no such file or directory
I1204 20:54:46.094697   17743 retry.go:31] will retry after 795.649µs: open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/scheduled-stop-859795/pid: no such file or directory
I1204 20:54:46.095819   17743 retry.go:31] will retry after 1.516836ms: open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/scheduled-stop-859795/pid: no such file or directory
I1204 20:54:46.098043   17743 retry.go:31] will retry after 1.825502ms: open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/scheduled-stop-859795/pid: no such file or directory
I1204 20:54:46.100280   17743 retry.go:31] will retry after 1.861906ms: open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/scheduled-stop-859795/pid: no such file or directory
I1204 20:54:46.102519   17743 retry.go:31] will retry after 3.860598ms: open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/scheduled-stop-859795/pid: no such file or directory
I1204 20:54:46.106726   17743 retry.go:31] will retry after 4.55119ms: open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/scheduled-stop-859795/pid: no such file or directory
I1204 20:54:46.111948   17743 retry.go:31] will retry after 11.861499ms: open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/scheduled-stop-859795/pid: no such file or directory
I1204 20:54:46.124189   17743 retry.go:31] will retry after 14.293796ms: open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/scheduled-stop-859795/pid: no such file or directory
I1204 20:54:46.139425   17743 retry.go:31] will retry after 17.733101ms: open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/scheduled-stop-859795/pid: no such file or directory
I1204 20:54:46.157704   17743 retry.go:31] will retry after 29.023063ms: open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/scheduled-stop-859795/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-859795 --cancel-scheduled
E1204 20:54:52.903434   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-859795 -n scheduled-stop-859795
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-859795
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-859795 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-859795
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-859795: exit status 7 (69.304405ms)

                                                
                                                
-- stdout --
	scheduled-stop-859795
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-859795 -n scheduled-stop-859795
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-859795 -n scheduled-stop-859795: exit status 7 (62.323244ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-859795" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-859795
--- PASS: TestScheduledStopUnix (112.27s)
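A condensed sketch of the schedule/cancel cycle exercised above (the profile name and the 5m/15s delays are this run's values):

	# Arm a stop five minutes out, then confirm the countdown is recorded.
	minikube stop -p scheduled-stop-859795 --schedule 5m
	minikube status --format={{.TimeToStop}} -p scheduled-stop-859795 -n scheduled-stop-859795
	# Re-arm with a shorter delay, cancel it, and confirm the host is still up.
	minikube stop -p scheduled-stop-859795 --schedule 15s
	minikube stop -p scheduled-stop-859795 --cancel-scheduled
	minikube status --format={{.Host}} -p scheduled-stop-859795 -n scheduled-stop-859795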

                                                
                                    
x
+
TestRunningBinaryUpgrade (174.33s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2901520719 start -p running-upgrade-033002 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2901520719 start -p running-upgrade-033002 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m17.755523645s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-033002 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-033002 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m34.821871788s)
helpers_test.go:175: Cleaning up "running-upgrade-033002" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-033002
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-033002: (1.173508197s)
--- PASS: TestRunningBinaryUpgrade (174.33s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-863313 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-863313 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (95.526217ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-863313] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19985
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19985-10581/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19985-10581/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
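As the stderr above spells out, --no-kubernetes and --kubernetes-version are mutually exclusive; a sketch of the rejected call and the remedy the error message suggests (both taken from the output):

	# Rejected with MK_USAGE (exit 14): a Kubernetes version cannot be pinned while Kubernetes is disabled.
	minikube start -p NoKubernetes-863313 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio
	# Clear any globally configured version before retrying with --no-kubernetes.
	minikube config unset kubernetes-version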

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (112.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-863313 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-863313 --driver=kvm2  --container-runtime=crio: (1m52.14239752s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-863313 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (112.39s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (40.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-863313 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-863313 --no-kubernetes --driver=kvm2  --container-runtime=crio: (38.950706344s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-863313 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-863313 status -o json: exit status 2 (238.152754ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-863313","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-863313
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-863313: (1.057400468s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (40.25s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (46.53s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-863313 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-863313 --no-kubernetes --driver=kvm2  --container-runtime=crio: (46.530921687s)
--- PASS: TestNoKubernetes/serial/Start (46.53s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-863313 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-863313 "sudo systemctl is-active --quiet service kubelet": exit status 1 (198.427784ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
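The probe above asserts that no kubelet unit is active inside a --no-kubernetes VM; a sketch of the same check (profile name from this run):

	# A non-zero exit here is the expected outcome: the kubelet service is not active in this VM.
	minikube ssh -p NoKubernetes-863313 "sudo systemctl is-active --quiet service kubelet" || echo "kubelet not active, as expected"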

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.78s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.78s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-863313
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-863313: (1.271243036s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (25.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-863313 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-863313 --driver=kvm2  --container-runtime=crio: (25.224195966s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (25.22s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-863313 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-863313 "sudo systemctl is-active --quiet service kubelet": exit status 1 (187.290904ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.54s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.54s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (92.83s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.698331616 start -p stopped-upgrade-553421 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E1204 20:59:52.902379   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.698331616 start -p stopped-upgrade-553421 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (47.679177395s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.698331616 -p stopped-upgrade-553421 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.698331616 -p stopped-upgrade-553421 stop: (1.514232984s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-553421 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-553421 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (43.639442264s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (92.83s)
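The upgrade path above is: provision with the old release, stop it, then start the same profile with the binary under test; a sketch of that sequence (the /tmp/minikube-v1.26.0.698331616 path is this run's temporary copy of the old release, and out/minikube-linux-amd64 is the build being tested):

	# Create and then stop a cluster with the older minikube release.
	/tmp/minikube-v1.26.0.698331616 start -p stopped-upgrade-553421 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	/tmp/minikube-v1.26.0.698331616 -p stopped-upgrade-553421 stop
	# Start the same profile with the new binary; the upgrade passes if it adopts the existing machine and config.
	out/minikube-linux-amd64 start -p stopped-upgrade-553421 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio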

                                                
                                    
x
+
TestNetworkPlugins/group/false (2.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-272234 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-272234 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (105.622924ms)

                                                
                                                
-- stdout --
	* [false-272234] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19985
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19985-10581/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19985-10581/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 21:00:25.563812   56529 out.go:345] Setting OutFile to fd 1 ...
	I1204 21:00:25.563933   56529 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 21:00:25.563944   56529 out.go:358] Setting ErrFile to fd 2...
	I1204 21:00:25.563950   56529 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 21:00:25.564124   56529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19985-10581/.minikube/bin
	I1204 21:00:25.564638   56529 out.go:352] Setting JSON to false
	I1204 21:00:25.565571   56529 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6176,"bootTime":1733339850,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 21:00:25.565670   56529 start.go:139] virtualization: kvm guest
	I1204 21:00:25.567546   56529 out.go:177] * [false-272234] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1204 21:00:25.568783   56529 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 21:00:25.568804   56529 notify.go:220] Checking for updates...
	I1204 21:00:25.571095   56529 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 21:00:25.572180   56529 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19985-10581/kubeconfig
	I1204 21:00:25.573530   56529 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19985-10581/.minikube
	I1204 21:00:25.574740   56529 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1204 21:00:25.575954   56529 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 21:00:25.577676   56529 config.go:182] Loaded profile config "cert-expiration-994058": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 21:00:25.577813   56529 config.go:182] Loaded profile config "kubernetes-upgrade-697588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1204 21:00:25.577937   56529 config.go:182] Loaded profile config "stopped-upgrade-553421": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1204 21:00:25.578087   56529 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 21:00:25.613395   56529 out.go:177] * Using the kvm2 driver based on user configuration
	I1204 21:00:25.614404   56529 start.go:297] selected driver: kvm2
	I1204 21:00:25.614425   56529 start.go:901] validating driver "kvm2" against <nil>
	I1204 21:00:25.614436   56529 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 21:00:25.616176   56529 out.go:201] 
	W1204 21:00:25.617290   56529 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1204 21:00:25.618319   56529 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-272234 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-272234

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-272234

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-272234

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-272234

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-272234

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-272234

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-272234

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-272234

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-272234

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-272234

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272234"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272234"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272234"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-272234

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272234"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272234"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-272234" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-272234" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-272234" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-272234" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-272234" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-272234" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-272234" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-272234" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272234"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272234"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272234"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272234"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272234"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-272234" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-272234" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-272234" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272234"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272234"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272234"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272234"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272234"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 04 Dec 2024 20:57:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.50.161:8443
  name: cert-expiration-994058
contexts:
- context:
    cluster: cert-expiration-994058
    extensions:
    - extension:
        last-update: Wed, 04 Dec 2024 20:57:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: cert-expiration-994058
  name: cert-expiration-994058
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-994058
  user:
    client-certificate: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/cert-expiration-994058/client.crt
    client-key: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/cert-expiration-994058/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-272234

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272234"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272234"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272234"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272234"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272234"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272234"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272234"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272234"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272234"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272234"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272234"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272234"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272234"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272234"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272234"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272234"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272234"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272234"

                                                
                                                
----------------------- debugLogs end: false-272234 [took: 2.744775638s] --------------------------------
helpers_test.go:175: Cleaning up "false-272234" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-272234
--- PASS: TestNetworkPlugins/group/false (2.99s)

                                                
                                    
x
+
TestPause/serial/Start (75.5s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-998149 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-998149 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m15.501593019s)
--- PASS: TestPause/serial/Start (75.50s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.8s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-553421
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.80s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (79.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-272234 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-272234 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m19.139349609s)
--- PASS: TestNetworkPlugins/group/auto/Start (79.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (82.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-272234 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E1204 21:02:09.345038   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/functional-763517/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:02:26.275196   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/functional-763517/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-272234 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m22.444447871s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (82.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-272234 "pgrep -a kubelet"
I1204 21:02:40.539630   17743 config.go:182] Loaded profile config "auto-272234": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-272234 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-qk8kn" [9c75368b-244a-4186-aed9-22b17126973a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-qk8kn" [9c75368b-244a-4186-aed9-22b17126973a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003495025s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (26.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-272234 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context auto-272234 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.146230553s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1204 21:03:06.911807   17743 retry.go:31] will retry after 807.676837ms: exit status 1
net_test.go:175: (dbg) Run:  kubectl --context auto-272234 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context auto-272234 exec deployment/netcat -- nslookup kubernetes.default: (10.184403876s)
--- PASS: TestNetworkPlugins/group/auto/DNS (26.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-272234 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-272234 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-272234 "pgrep -a kubelet"
I1204 21:03:28.758565   17743 config.go:182] Loaded profile config "custom-flannel-272234": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-272234 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-b4rvv" [1069bb01-eb95-4a1d-846c-65ae959e9479] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-b4rvv" [1069bb01-eb95-4a1d-846c-65ae959e9479] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004608165s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (63.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-272234 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-272234 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m3.625509132s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (63.63s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-272234 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-272234 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-272234 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (80.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-272234 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-272234 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m20.09154933s)
--- PASS: TestNetworkPlugins/group/flannel/Start (80.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-vpmlp" [6471ca83-ba6b-4cf7-9234-19fd920429f4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005011443s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-272234 "pgrep -a kubelet"
I1204 21:04:44.452740   17743 config.go:182] Loaded profile config "kindnet-272234": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-272234 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-xmb8k" [cfa6b3e3-b022-465e-9744-bf370e57ea37] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-xmb8k" [cfa6b3e3-b022-465e-9744-bf370e57ea37] Running
E1204 21:04:52.902352   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004630817s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (59.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-272234 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-272234 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (59.745052093s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (59.75s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-272234 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-272234 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-272234 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (80.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-272234 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-272234 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m20.074165845s)
--- PASS: TestNetworkPlugins/group/calico/Start (80.07s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-8mmfl" [f62af01b-959f-4a79-8451-eb59242f8d3a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004634431s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-272234 "pgrep -a kubelet"
I1204 21:05:21.443622   17743 config.go:182] Loaded profile config "flannel-272234": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-272234 replace --force -f testdata/netcat-deployment.yaml
I1204 21:05:22.341095   17743 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5jw8t" [b716702f-361f-484b-a150-ed70fd368bc4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5jw8t" [b716702f-361f-484b-a150-ed70fd368bc4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004385843s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.92s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-272234 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-272234 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-272234 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-272234 "pgrep -a kubelet"
I1204 21:05:46.771551   17743 config.go:182] Loaded profile config "enable-default-cni-272234": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-272234 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-tn6h7" [dfb5dd43-25d0-479d-a5fa-488a039fbf13] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-tn6h7" [dfb5dd43-25d0-479d-a5fa-488a039fbf13] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004585011s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (55.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-272234 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-272234 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (55.564271098s)
--- PASS: TestNetworkPlugins/group/bridge/Start (55.56s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-272234 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-272234 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-272234 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-vlvg4" [7c499321-ba52-4b17-92c4-477fa66441f0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006267557s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-272234 "pgrep -a kubelet"
I1204 21:06:37.449605   17743 config.go:182] Loaded profile config "calico-272234": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-272234 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-2b5mj" [4fbc1373-c4d0-4868-8fac-fd6f4b2cb8a9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-2b5mj" [4fbc1373-c4d0-4868-8fac-fd6f4b2cb8a9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004897039s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-272234 "pgrep -a kubelet"
I1204 21:06:46.972326   17743 config.go:182] Loaded profile config "bridge-272234": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (14.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-272234 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-zpjmn" [012737d9-fde0-413b-bc07-10202a91b7f6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-zpjmn" [012737d9-fde0-413b-bc07-10202a91b7f6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 14.004585701s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (14.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-272234 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-272234 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-272234 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (16.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-272234 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-272234 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.16355257s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1204 21:07:16.402466   17743 retry.go:31] will retry after 734.955362ms: exit status 1
net_test.go:175: (dbg) Run:  kubectl --context bridge-272234 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (16.04s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (104.4s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-534766 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-534766 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (1m44.400231818s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (104.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-272234 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-272234 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (87.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-566991 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1204 21:07:40.754093   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/auto-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:07:40.760518   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/auto-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:07:40.771868   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/auto-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:07:40.793245   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/auto-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:07:40.834594   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/auto-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:07:40.916213   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/auto-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:07:41.077765   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/auto-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:07:41.399596   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/auto-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:07:42.041533   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/auto-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:07:43.322882   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/auto-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:07:45.884934   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/auto-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:07:51.006896   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/auto-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:08:01.248680   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/auto-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:08:21.730632   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/auto-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:08:29.011538   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/custom-flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:08:29.017980   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/custom-flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:08:29.029353   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/custom-flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:08:29.050862   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/custom-flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:08:29.092368   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/custom-flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:08:29.173780   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/custom-flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:08:29.335895   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/custom-flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:08:29.657949   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/custom-flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:08:30.300029   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/custom-flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:08:31.582255   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/custom-flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:08:34.144206   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/custom-flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:08:39.265553   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/custom-flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-566991 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (1m27.013621423s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (87.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-534766 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [766580e0-e437-4441-a818-c4a10a31167c] Pending
helpers_test.go:344: "busybox" [766580e0-e437-4441-a818-c4a10a31167c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [766580e0-e437-4441-a818-c4a10a31167c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.005527589s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-534766 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.31s)
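A minimal standalone sketch of the DeployApp flow above, shelling out to kubectl the same way the harness does. The context name, manifest path, and the final "ulimit -n" spot check are taken from the log; the polling loop is an assumption, not the actual helpers_test.go implementation.

// deployapp_sketch.go - hedged approximation of the busybox deploy-and-check step.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func run(args ...string) (string, error) {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	return string(out), err
}

func main() {
	ctx := "no-preload-534766" // profile/context name from the log

	if out, err := run("--context", ctx, "create", "-f", "testdata/busybox.yaml"); err != nil {
		fmt.Println("create failed:", err, out)
		return
	}

	// Poll until the pod reports Running (the harness waits up to 8m0s).
	deadline := time.Now().Add(8 * time.Minute)
	for time.Now().Before(deadline) {
		out, _ := run("--context", ctx, "get", "pod", "busybox",
			"-o", "jsonpath={.status.phase}")
		if out == "Running" {
			break
		}
		time.Sleep(5 * time.Second)
	}

	// Same spot check the test performs once the pod is healthy.
	out, err := run("--context", ctx, "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n")
	fmt.Println("ulimit -n:", out, err)
}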

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.96s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-439360 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-439360 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (1m25.961531587s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.96s)
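A hedged sketch of how the --apiserver-port=8444 override above could be checked after the start: read the kubeconfig server URL for that context via kubectl. The context name and port come from the log; the jsonpath filter is a plain kubectl invocation, not part of the harness.

// apiserverport_sketch.go - confirm the cluster entry points at port 8444.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "config", "view",
		"-o", "jsonpath={.clusters[?(@.name==\"default-k8s-diff-port-439360\")].cluster.server}",
	).CombinedOutput()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	server := string(out)
	fmt.Println("server:", server, "port 8444?", strings.HasSuffix(server, ":8444"))
}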

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-566991 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a3be8d42-19bc-4bfc-be9a-bf74020438e1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a3be8d42-19bc-4bfc-be9a-bf74020438e1] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004431298s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-566991 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.32s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-534766 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-534766 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.016348664s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-534766 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.09s)
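A hedged follow-up to the addon enable above: read back the image that the metrics-server Deployment ended up with after the --images/--registries overrides. The report only shows the flags, not the resulting pod spec, so no exact image string is asserted here.

// metricsimage_sketch.go - inspect the metrics-server container image.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl",
		"--context", "no-preload-534766",
		"-n", "kube-system",
		"get", "deploy", "metrics-server",
		"-o", "jsonpath={.spec.template.spec.containers[0].image}",
	).CombinedOutput()
	fmt.Println(string(out), err)
}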

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-566991 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1204 21:09:09.989116   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/custom-flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-566991 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-439360 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9dbde4a5-4604-4b1d-9045-924aac054dda] Pending
E1204 21:10:20.356060   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [9dbde4a5-4604-4b1d-9045-924aac054dda] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9dbde4a5-4604-4b1d-9045-924aac054dda] Running
E1204 21:10:24.614207   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/auto-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:10:25.477938   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.00400995s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-439360 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-439360 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-439360 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (687.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-534766 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1204 21:11:32.532418   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/calico-272234/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-534766 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (11m26.802602094s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-534766 -n no-preload-534766
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (687.06s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (569.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-566991 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1204 21:11:47.225550   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/bridge-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:11:47.231972   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/bridge-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:11:47.243394   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/bridge-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:11:47.264753   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/bridge-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:11:47.306919   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/bridge-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:11:47.388353   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/bridge-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:11:47.549887   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/bridge-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:11:47.871232   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/bridge-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:11:48.513304   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/bridge-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:11:49.795479   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/bridge-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:11:51.739512   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/calico-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:11:52.357567   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/bridge-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:11:57.479768   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/bridge-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:12:07.721566   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/bridge-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:12:08.962408   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/enable-default-cni-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:12:12.221580   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/calico-272234/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-566991 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (9m28.779128538s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-566991 -n embed-certs-566991
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (569.04s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (5.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-082859 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-082859 --alsologtostderr -v=3: (5.463662484s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (5.46s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-082859 -n old-k8s-version-082859
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-082859 -n old-k8s-version-082859: exit status 7 (68.710731ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-082859 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
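A hedged sketch of the "exit status 7 (may be ok)" tolerance seen above: run minikube status for a stopped profile and treat a non-zero exit code as informational rather than fatal. The binary path and profile name come from the log; the acceptance logic is an assumption about the harness's intent, not its code.

// statusexit_sketch.go - tolerate the non-zero exit of `status` on a stopped host.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64",
		"status", "--format={{.Host}}",
		"-p", "old-k8s-version-082859", "-n", "old-k8s-version-082859")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// A stopped host reports a non-zero code even though the CLI ran fine.
		fmt.Printf("status %q, exit code %d (may be ok)\n", string(out), exitErr.ExitCode())
		return
	}
	if err != nil {
		fmt.Println("could not run minikube:", err)
		return
	}
	fmt.Printf("status %q\n", string(out))
}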

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (566.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-439360 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1204 21:13:08.455573   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/auto-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:13:09.165860   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/bridge-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:13:29.011269   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/custom-flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:13:30.884115   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/enable-default-cni-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:13:56.714401   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/custom-flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:14:15.105297   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/calico-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:14:31.087506   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/bridge-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:14:38.215839   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kindnet-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:14:52.903214   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:15:05.919576   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kindnet-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:15:15.225731   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:15:42.926985   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:15:47.025105   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/enable-default-cni-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:16:14.725605   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/enable-default-cni-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:16:31.244352   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/calico-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:16:47.225306   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/bridge-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:16:58.946931   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/calico-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:17:14.929551   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/bridge-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:17:26.275859   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/functional-763517/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:17:40.753868   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/auto-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:18:29.010753   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/custom-flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:18:49.348039   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/functional-763517/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:19:38.216825   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/kindnet-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:19:52.902439   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/addons-153447/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:20:15.224453   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/flannel-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:20:47.024846   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/enable-default-cni-272234/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-439360 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (9m25.95830334s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-439360 -n default-k8s-diff-port-439360
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (566.21s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (48.64s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-594114 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1204 21:36:31.243990   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/calico-272234/client.crt: no such file or directory" logger="UnhandledError"
E1204 21:36:47.225124   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/bridge-272234/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-594114 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (48.642900653s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (48.64s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-594114 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-594114 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.217103933s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.22s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (11.3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-594114 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-594114 --alsologtostderr -v=3: (11.304130532s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.30s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-594114 -n newest-cni-594114
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-594114 -n newest-cni-594114: exit status 7 (67.029729ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-594114 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (71.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-594114 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1204 21:37:26.275257   17743 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/functional-763517/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-594114 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (1m10.996118587s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-594114 -n newest-cni-594114
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (71.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-594114 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.32s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-594114 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-594114 -n newest-cni-594114
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-594114 -n newest-cni-594114: exit status 2 (233.522493ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-594114 -n newest-cni-594114
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-594114 -n newest-cni-594114: exit status 2 (227.02236ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-594114 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-594114 -n newest-cni-594114
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-594114 -n newest-cni-594114
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.32s)
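A hedged sketch of the pause/unpause verification above: pause the profile, check that the API server reports Paused while the kubelet reports Stopped, then unpause and check again. Binary path and profile name come from the log; ignoring the non-zero status exits mirrors the "(may be ok)" notes rather than the harness's exact error handling.

// pause_sketch.go - pause, verify component states, unpause.
package main

import (
	"fmt"
	"os/exec"
)

func minikube(args ...string) string {
	out, _ := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	return string(out)
}

func main() {
	profile := "newest-cni-594114"

	minikube("pause", "-p", profile, "--alsologtostderr", "-v=1")
	fmt.Println("apiserver:", minikube("status", "--format={{.APIServer}}", "-p", profile, "-n", profile)) // expected: Paused
	fmt.Println("kubelet:  ", minikube("status", "--format={{.Kubelet}}", "-p", profile, "-n", profile))   // expected: Stopped

	minikube("unpause", "-p", profile, "--alsologtostderr", "-v=1")
	fmt.Println("apiserver:", minikube("status", "--format={{.APIServer}}", "-p", profile, "-n", profile))
	fmt.Println("kubelet:  ", minikube("status", "--format={{.Kubelet}}", "-p", profile, "-n", profile))
}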

                                                
                                    

Test skip (39/314)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.2/cached-images 0
15 TestDownloadOnly/v1.31.2/binaries 0
16 TestDownloadOnly/v1.31.2/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.29
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
142 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
143 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
144 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
145 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
146 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
147 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
148 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
149 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestGvisorAddon 0
178 TestImageBuild 0
205 TestKicCustomNetwork 0
206 TestKicExistingNetwork 0
207 TestKicCustomSubnet 0
208 TestKicStaticIP 0
240 TestChangeNoneUser 0
243 TestScheduledStopWindows 0
245 TestSkaffold 0
247 TestInsufficientStorage 0
251 TestMissingContainerUpgrade 0
266 TestNetworkPlugins/group/kubenet 3
274 TestNetworkPlugins/group/cilium 3.32
280 TestStartStop/group/disable-driver-mounts 0.14
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.29s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-153447 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.29s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-272234 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-272234

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-272234

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-272234

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-272234

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-272234

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-272234

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-272234

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-272234

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-272234

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-272234

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272234"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272234"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272234"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-272234

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272234"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272234"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-272234" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-272234" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-272234" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-272234" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-272234" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-272234" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-272234" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-272234" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272234"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272234"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272234"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272234"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272234"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-272234" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-272234" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-272234" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272234"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272234"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272234"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272234"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272234"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 04 Dec 2024 20:57:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.50.161:8443
  name: cert-expiration-994058
contexts:
- context:
    cluster: cert-expiration-994058
    extensions:
    - extension:
        last-update: Wed, 04 Dec 2024 20:57:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: cert-expiration-994058
  name: cert-expiration-994058
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-994058
  user:
    client-certificate: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/cert-expiration-994058/client.crt
    client-key: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/cert-expiration-994058/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-272234

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272234"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272234"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272234"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272234"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272234"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272234"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272234"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272234"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272234"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272234"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272234"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272234"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272234"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272234"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272234"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272234"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272234"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272234"

                                                
                                                
----------------------- debugLogs end: kubenet-272234 [took: 2.859446965s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-272234" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-272234
--- SKIP: TestNetworkPlugins/group/kubenet (3.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-272234 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-272234

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-272234

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-272234

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-272234

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-272234

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-272234

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-272234

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-272234

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-272234

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-272234

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272234"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272234"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272234"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-272234

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272234"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272234"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-272234" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-272234" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-272234" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-272234" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-272234" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-272234" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-272234" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-272234" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272234"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272234"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272234"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272234"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272234"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-272234

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-272234

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-272234" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-272234" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-272234

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-272234

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-272234" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-272234" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-272234" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-272234" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-272234" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272234"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272234"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272234"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272234"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272234"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19985-10581/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 04 Dec 2024 20:57:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.50.161:8443
  name: cert-expiration-994058
contexts:
- context:
    cluster: cert-expiration-994058
    extensions:
    - extension:
        last-update: Wed, 04 Dec 2024 20:57:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: cert-expiration-994058
  name: cert-expiration-994058
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-994058
  user:
    client-certificate: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/cert-expiration-994058/client.crt
    client-key: /home/jenkins/minikube-integration/19985-10581/.minikube/profiles/cert-expiration-994058/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-272234

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272234"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272234"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272234"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272234"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272234"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272234"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272234"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272234"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272234"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272234"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272234"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272234"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272234"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272234"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272234"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272234"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272234"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-272234" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272234"

                                                
                                                
----------------------- debugLogs end: cilium-272234 [took: 3.158138988s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-272234" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-272234
--- SKIP: TestNetworkPlugins/group/cilium (3.32s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-455559" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-455559
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                    